Re: [Qemu-devel] [PATCH 16/17] migration: adjust migration_thread() process for page flipping

From: Lei Li
Subject: Re: [Qemu-devel] [PATCH 16/17] migration: adjust migration_thread() process for page flipping
Date: Tue, 26 Nov 2013 21:53:48 +0800
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:17.0) Gecko/17.0 Thunderbird/17.0

On 11/26/2013 08:54 PM, Paolo Bonzini wrote:
> On 26/11/2013 13:03, Lei Li wrote:
>>>> +            if (pending_size && pending_size >= max_size &&
>>>> +                !runstate_needs_reset()) {
>>> I'm not sure why you need this.
>> The adjustment here is to avoid the iteration stage for page flipping,
>> because pending_size = ram_save_remaining() * TARGET_PAGE_SIZE, which is
>> not 0, and pending_size > max_size (0) at the start.
> It's still not clear to me that avoiding the iteration stage is

The purpose of it is not just optimization, but to avoid the
iteration stage so that the stages line up correctly.

The current flow of page flipping basically has two stages:

1) the ram_save_setup stage, which sends all the bytes
   to the destination and sends the send_pipefd via
   ram_control_before_iterate at the end of it;
2) ram_save_complete, which starts to transmit the ram pages
   in ram_save_block and sends the device state after that.

So the current migration process needs to be adjusted to avoid
the iteration stage.
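The two-stage flow above can be sketched as follows. This is a minimal, hypothetical skeleton: the helper names mirror the QEMU functions mentioned in the thread, but the stubs, the `record`/`stage_log` bookkeeping, and `page_flip_migration()` itself are made up for illustration, and QEMU's real signatures are richer.

```c
#include <assert.h>
#include <string.h>

/* Hypothetical stand-ins for QEMU's helpers; real signatures differ. */
static const char *stage_log[8];
static int stage_n;
static void record(const char *s) { stage_log[stage_n++] = s; }

static void ram_save_setup(void)             { record("setup"); }
static void ram_control_before_iterate(void) { record("send_pipefd"); }
static void ram_save_complete(void)          { record("flip_pages"); }
static void send_device_state(void)          { record("device_state"); }

/* Page-flipping flow: setup, then completion, with no
 * ram_save_iterate() loop in between. */
static void page_flip_migration(void)
{
    ram_save_setup();             /* stage 1: send everything up front */
    ram_control_before_iterate(); /* ... and the pipe fd at its end    */
    /* no iteration stage here */
    ram_save_complete();          /* stage 2: transmit the ram pages   */
    send_device_state();          /* ... followed by the device state  */
}
```

The point of the sketch is only the ordering: setup runs straight into completion, with nothing in between.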

> necessary.  I think it's just an optimization to avoid scanning the
> bitmap, but:
>
> (1) Juan's bitmap optimization will make this mostly unnecessary;
>
> (2) getting good downtime from page flipping will require postcopy anyway.

>> And you said 'This is a bit ugly but I understand the need. Perhaps "&&
>> !runstate_needs_reset()" like below?' :)
> Oops.  I might have said this before thinking about postcopy and/or
> before seeing the benchmark results from Juan's patches.  If this part
> of the patch is just an optimization, I'd rather leave it out for now.

I am afraid that page flipping cannot proceed correctly without this.
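To make the disputed condition concrete, here is a minimal sketch of the check from the hunk quoted above. The stubs (`remaining_pages`, `reset_pending`) and the `should_iterate()` wrapper are invented for illustration; only the condition itself comes from the patch.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define TARGET_PAGE_SIZE 4096

/* Hypothetical stand-ins for QEMU's helpers. */
static uint64_t remaining_pages = 100;   /* dirty pages still to send */
static bool reset_pending = false;

static uint64_t ram_save_remaining(void) { return remaining_pages; }
static bool runstate_needs_reset(void)   { return reset_pending; }

/* The condition from the hunk: iterate only while there is pending
 * data, it exceeds the budget, and no runstate reset is pending. */
static bool should_iterate(uint64_t max_size)
{
    uint64_t pending_size = ram_save_remaining() * TARGET_PAGE_SIZE;
    return pending_size && pending_size >= max_size &&
           !runstate_needs_reset();
}
```

At the start of migration pending_size is nonzero and max_size is 0, so without the extra `!runstate_needs_reset()` clause the condition is always true and an iteration stage would run.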

Thanks for putting up with me. :)


