From: Wei Wang
Subject: Re: [Qemu-devel] [PATCH v2 3/3] virtio-balloon: add a timer to limit the free page report waiting time
Date: Wed, 28 Feb 2018 18:37:07 +0800
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:31.0) Gecko/20100101 Thunderbird/31.7.0

On 02/27/2018 06:34 PM, Dr. David Alan Gilbert wrote:
> * Wei Wang (address@hidden) wrote:
>> On 02/09/2018 08:15 PM, Dr. David Alan Gilbert wrote:
>>> * Wei Wang (address@hidden) wrote:
>>>> This patch adds a timer to limit the time that the host waits for the
>>>> free page hints reported by the guest. Users can specify the time in ms
>>>> via the "free-page-wait-time" command line option. If a user doesn't
>>>> specify a time, the host waits until the guest finishes reporting all
>>>> the free page hints. The policy (wait for all the free page hints to be
>>>> reported, or use a time limit) is determined by the orchestration layer.
>>> That's kind of a get-out; but there are at least two problems:
>>>      a) With a timeout of 0 (the default) we might hang forever waiting
>>>         for the guest; broken guests are just too common, we can't do
>>>         that.
>>>      b) Even if we were going to do that, you'd have to make sure that
>>>         migrate_cancel provided a way out.
>>>      c) How does that work during a savevm snapshot or when the guest is
>>>         stopped?
>>>      d) OK, the timer gives us some safety (except c); but how does the
>>>         orchestration layer ever come up with a 'safe' value for it?
>>>         Unless we can suggest a safe value that the orchestration layer
>>>         can use, or a way they can work it out, they just won't use it.
>>
>> Hi Dave,
>>
>> Sorry for my late response. Please see below:
>>
>> a) I think people would just kill the guest if it is broken. We can also
>> change the default timeout value to, for example, 1 second, which is
>> enough for the free page reporting.
> Remember that many VMs are migrated automatically without there being a
> human involved; those VMs might be in the BIOS or GRUB, or shutting down,
> at the time of migration; there's no human to look at the VM.


OK, thanks for sharing. I plan to take Michael's suggestion to run the optimization in parallel with the migration thread: the optimization will run in its own thread, and the migration thread will run as usual, so it won't get stuck even if the optimization doesn't return promptly for some reason.

Best,
Wei

