From: Michael R. Hines
Subject: Re: [Qemu-devel] [PATCH v6 00/11] rdma: migration support
Date: Thu, 09 May 2013 13:20:27 -0400
User-agent: Mozilla/5.0 (X11; Linux i686; rv:17.0) Gecko/20130106 Thunderbird/17.0.2
Comments inline. FYI: please CC address@hidden,
because it helps me know when to scroll through the bazillion qemu-devel emails.
I have things separated out into folders and rules, but a direct CC is better =)
On 05/03/2013 07:28 PM, Chegu Vinod wrote:
Can you give me more details about the configuration of your VM?
Is the QEMU monitor still responsive?
Can you capture a screenshot of the guest's console to see if there is a panic?
What kind of storage is attached to the VM?
That's a good question: the pin-all option should not be slowing down your VM too much, since the VM should still be running before migration_thread() actually kicks in and starts the migration.
I need more information on the configuration of your VM, guest operating system, architecture and so forth.......
And similarly, as before, whether it's QEMU itself that is unresponsive or whether it's the guest that has panicked.......
Also, the act of pinning all the memory seems to "freeze" the guest. E.g., for larger enterprise-sized guests (say 128GB and higher), the guest is "frozen" for anywhere from nearly a minute (~50 seconds) to multiple minutes as the guest size increases... which IMO kind of defeats the purpose of live guest migration.
That's bad =) There must be a bug somewhere........ The largest VM I can create on my hardware is ~16GB, so let me create one and try to track down the problem.
For such a large VM, I would definitely recommend pinning, because I'm assuming you have enough processors or a large enough application to actually *use* that much memory, which suggests that even after the bulk-phase round of the migration has completed, your VM is probably going to remain pretty busy.
It's just a matter of me tracking down what's causing the freeze and fixing it........ I'll look into it right now on my machine.
I had no idea.......very interesting.
Wow, I didn't know that either. Perhaps this is what's causing the entire QEMU process and its threads to seize up.
It may be necessary to run the pinning command *outside* of QEMU's I/O lock in a separate thread if it's really that much overhead.
Thanks a lot for pointing this out.........