From: Yoshiaki Tamura
Subject: Re: [Qemu-devel] [PATCH 00/15] Make migration work with hotplug
Date: Fri, 25 Jun 2010 11:01:03 +0900
User-agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; ja; rv:1.9.1.10) Gecko/20100512 Thunderbird/3.0.5
Alex Williamson wrote:
> On Thu, 2010-06-24 at 09:04 -0600, Alex Williamson wrote:
>> On Thu, 2010-06-24 at 15:02 +0900, Yoshiaki Tamura wrote:
>>> Hi Alex,
>>>
>>> Is there additional overhead introduced by this series when saving RAM? If so, how much?
>>
>> Yes, there is overhead, but it's typically quite small. If I migrate a 1G VM immediately after booting to a login prompt (lots of zero pages), I get an overhead of 0.000076%. That's only 226 extra bytes over the 297164995 bytes otherwise transferred. If I build a kernel on the guest and migrate during the compilation, the overhead is 0.000019%. The overhead is tiny largely due to patch 12/15, which avoids sending the block name if we're working within the same block as sent previously. If I disable this optimization, the overhead goes up to 0.93% after boot and 0.26% during a kernel compile.
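For readers following along, a minimal sketch of the optimization Alex describes: the block name only goes on the wire when the page being sent comes from a different RAMBlock than the previous one. The identifiers (save_page_header, RAM_SAVE_FLAG_CONTINUE, last_sent_block) follow QEMU naming style but are illustrative, not necessarily the exact code in the series.

/* Sketch of the patch 12/15 idea: omit the block name when the current
 * page lives in the same RAMBlock as the last page sent. */
#include <stdint.h>
#include <string.h>

#define RAM_SAVE_FLAG_PAGE      0x08
#define RAM_SAVE_FLAG_CONTINUE  0x20

typedef struct RAMBlock {
    char idstr[256];          /* block name, e.g. "pc.ram" */
} RAMBlock;

/* Minimal stand-in for the QEMUFile output routines. */
typedef struct Writer {
    void (*put_be64)(struct Writer *w, uint64_t v);
    void (*put_byte)(struct Writer *w, uint8_t v);
    void (*put_buffer)(struct Writer *w, const uint8_t *buf, size_t len);
} Writer;

static RAMBlock *last_sent_block;

static void save_page_header(Writer *w, RAMBlock *block, uint64_t offset)
{
    if (block == last_sent_block) {
        /* Same block as the previous page: the name is omitted entirely,
         * so the per-page overhead stays a single 64-bit header. */
        w->put_be64(w, offset | RAM_SAVE_FLAG_PAGE | RAM_SAVE_FLAG_CONTINUE);
    } else {
        /* Block changed: send the length-prefixed block name once. */
        size_t len = strlen(block->idstr);
        w->put_be64(w, offset | RAM_SAVE_FLAG_PAGE);
        w->put_byte(w, (uint8_t)len);
        w->put_buffer(w, (const uint8_t *)block->idstr, len);
        last_sent_block = block;
    }
}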
Thank you for the detailed numbers and analysis! If the overhead is at this level, I think it's worth introducing it to support migration with hotplug devices.
> Note that an x86 VM does a separate qemu_ram_alloc for memory above 4G, which means in bigger VMs we may end up needing to resend the ramblock name once in a while as we bounce between above and below 4G. Worst case for this could match the 0.26% above, but in my testing during a kernel compile, this seems to increase the overhead to 0.000026% on a 6G VM.
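The split Alex refers to can be sketched roughly as below: RAM under and over the PCI hole is allocated as two separate blocks, so a migration whose dirty pages land on both sides keeps switching blocks and must resend the block name at each switch. The signatures and the PCI_HOLE_START value are simplified assumptions, not the exact hw/pc.c code.

#include <stdint.h>

/* Assumed, simplified signatures for illustration only. */
extern uint64_t qemu_ram_alloc(uint64_t size);
extern void cpu_register_physical_memory(uint64_t start_addr, uint64_t size,
                                         uint64_t ram_offset);

#define PCI_HOLE_START 0xe0000000ULL   /* assumed below-4G limit */

static void pc_ram_init(uint64_t ram_size)
{
    uint64_t below_4g = ram_size > PCI_HOLE_START ? PCI_HOLE_START : ram_size;
    uint64_t above_4g = ram_size - below_4g;

    /* First RAMBlock: RAM that fits under the PCI hole. */
    cpu_register_physical_memory(0, below_4g, qemu_ram_alloc(below_4g));

    if (above_4g) {
        /* Second RAMBlock: the remainder, mapped starting at 4G. */
        cpu_register_physical_memory(1ULL << 32, above_4g,
                                     qemu_ram_alloc(above_4g));
    }
}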
If we ran a program that intentionally bounces between the two regions, we should get a number close to the 0.26% above, but the overhead is still low enough that it shouldn't be a big problem.
> I don't see any reason why we couldn't allocate all the ram in a single qemu_ram_alloc call, so I'll add another patch to make that change (which will also shorten the name to "pc.ram" for even less overhead ;). Thanks,
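A sketch of what that follow-up change would look like: one block for all guest RAM, mapped in two slices via offsets into that block. qemu_ram_alloc_named is a hypothetical helper standing in for however the actual patch attaches the "pc.ram" name; the rest mirrors the assumptions of the previous sketch.

#include <stdint.h>

/* Assumed, simplified signatures for illustration only. */
extern uint64_t qemu_ram_alloc_named(uint64_t size, const char *name);
extern void cpu_register_physical_memory(uint64_t start_addr, uint64_t size,
                                         uint64_t ram_offset);

#define PCI_HOLE_START 0xe0000000ULL   /* assumed below-4G limit */

static void pc_ram_init_single_block(uint64_t ram_size)
{
    uint64_t below_4g = ram_size > PCI_HOLE_START ? PCI_HOLE_START : ram_size;
    uint64_t above_4g = ram_size - below_4g;

    /* One RAMBlock for all guest RAM: migration only ever sends the short
     * name "pc.ram", and pages never switch blocks. */
    uint64_t ram = qemu_ram_alloc_named(ram_size, "pc.ram");

    cpu_register_physical_memory(0, below_4g, ram);
    if (above_4g) {
        cpu_register_physical_memory(1ULL << 32, above_4g, ram + below_4g);
    }
}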
Hmm. I didn't know about the separate qemu_ram_alloc above 4G. If there isn't any reason for the split, how about submitting a patch to fix it separately from this series?
> FWIW, with this change, my migration during kernel compile on the 6G VM seems to be running just at 0.000019%-0.000020%, so that eliminates the penalty for bigger memory VMs. Thanks,
It makes sense :-) Thanks,

Yoshi
> Alex