From: Dr. David Alan Gilbert
Subject: Re: [Qemu-devel] [PATCH] hw/virtio/balloon: Fixes for different host page sizes
Date: Thu, 14 Apr 2016 12:47:58 +0100
User-agent: Mutt/1.5.24 (2015-08-30)

* Thomas Huth (address@hidden) wrote:

> That would mean a regression compared to what we have today. Currently,
> ballooning works OK for 64k guests on a 64k ppc host - by chance rather
> than by design, but it works. The guest always sends all the 4k
> fragments of a 64k page, and QEMU tries to call madvise() for every one
> of them, but the kernel ignores madvise() on non-64k-aligned addresses,
> so we end up in a situation where madvise() frees a whole 64k page that
> the guest has also declared free.
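
(To make the accident concrete, here is a minimal hypothetical sketch of
what happens when each 4k fragment gets madvise()d on a 64k-page host -
this is illustrative, not QEMU's actual balloon_page():

/* Illustrative only - not QEMU code.  Assumes a 64k host page size
 * and a guest that balloons in 4k fragments. */
#include <sys/mman.h>

static void balloon_fragment_4k(void *addr)
{
    /* Of the sixteen 4k fragments in a 64k host page, only the one at
     * offset 0 is host-page-aligned; madvise() rejects the rest with
     * EINVAL, and that error is ignored. */
    if (madvise(addr, 4096, MADV_DONTNEED) != 0) {
        return;                 /* unaligned fragment: no effect */
    }
    /* For the aligned fragment the 4k length is rounded up to the 64k
     * host page size, so the whole 64k page is dropped - which is what
     * the guest wanted, by luck rather than design. */
}

so fifteen of the sixteen calls are no-ops and the one aligned call does
all the work.)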

I wouldn't worry about migrating your fragment map; but I wonder if it
needs to be that complex - does the guest normally do something more
sane, like ballooning the 4k pages in order, so that you only have to
track the last page it tried rather than keeping a full map?
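
(Roughly, the simpler tracking could look like the sketch below - the
names (track_fragment, gpa_to_hva) are made up for illustration, and it
assumes the guest balloons the 4k fragments of each 64k page in order:

/* Hypothetical sketch, not QEMU code. */
#include <stdint.h>
#include <sys/mman.h>

#define FRAG_SIZE       4096
#define HOST_PAGE_SIZE  65536
#define FRAGS_PER_PAGE  (HOST_PAGE_SIZE / FRAG_SIZE)

extern void *gpa_to_hva(uint64_t gpa);     /* hypothetical helper */

static uint64_t last_gpa = UINT64_MAX;
static unsigned run_len;                   /* contiguous fragments seen */

static void track_fragment(uint64_t gpa)
{
    run_len = (gpa == last_gpa + FRAG_SIZE) ? run_len + 1 : 1;
    last_gpa = gpa;

    /* Fire one madvise() per fully-seen, host-page-aligned 64k page. */
    if (run_len >= FRAGS_PER_PAGE &&
        (gpa + FRAG_SIZE) % HOST_PAGE_SIZE == 0) {
        madvise(gpa_to_hva(gpa + FRAG_SIZE - HOST_PAGE_SIZE),
                HOST_PAGE_SIZE, MADV_DONTNEED);
        run_len = 0;
    }
}

i.e. one counter and one address instead of a per-fragment map.)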

A side question is whether the behaviour seen by
virtio_balloon_handle_output() is always actually the full 64k page; it
calls balloon_page() once for each message/element - but if all of those
elements add up to the full page, perhaps it makes more sense to
reassemble it there?
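
(If the elements of one request do always cover whole 64k pages, the
reassembly check could be as simple as this hypothetical helper - the
name and layout are made up; PFN here is the 4k guest page frame number
the balloon protocol carries:

/* Hypothetical reassembly check for the output handler: given the 4k
 * PFNs carried by one request, report whether they form one whole,
 * aligned 64k host page.  Not the real QEMU code. */
#include <stdbool.h>
#include <stdint.h>

#define FRAGS_PER_PAGE  16      /* 64k host page / 4k balloon pages */

static bool covers_whole_host_page(const uint32_t *pfns, unsigned count)
{
    unsigned i;

    if (count != FRAGS_PER_PAGE || pfns[0] % FRAGS_PER_PAGE != 0) {
        return false;           /* wrong size or not page-aligned */
    }
    for (i = 1; i < count; i++) {
        if (pfns[i] != pfns[0] + i) {
            return false;       /* hole: fragments not contiguous */
        }
    }
    return true;                /* safe to madvise() the full 64k */
}

)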

> I think we should either take this patch as it is right now (without
> adding extra code for migration) and later update it to the bitmap code
> by Jitendra Kolhe, or omit it completely (leaving 4k guests broken) and
> fix it properly after the bitmap code has been applied. But disabling
> the balloon code for 64k guests on 64k hosts completely does not sound
> very appealing to me. What do you think?

Yeah, I agree; your existing code should work, and I don't think we
should break 64k-on-64k.

Dave
> 
>  Thomas
> 
--
Dr. David Alan Gilbert / address@hidden / Manchester, UK


