Re: [Qemu-devel] [PATCH v7 00/12] rdma: migration support


From: Michael R. Hines
Subject: Re: [Qemu-devel] [PATCH v7 00/12] rdma: migration support
Date: Thu, 13 Jun 2013 10:55:24 -0400
User-agent: Mozilla/5.0 (X11; Linux i686; rv:17.0) Gecko/20130329 Thunderbird/17.0.5

On 06/13/2013 10:26 AM, Chegu Vinod wrote:

>> 1. start QEMU with the lock option *first*
>> 2. Then enable x-rdma-pin-all
>> 3. Then perform the migration
>>
>> What happens here? Does pinning "in advance" help you?

> Yes, it does help by avoiding the freeze time at the start of the pin-all migration.
>
> I already mentioned this in my earlier responses as an option to consider for larger guests (https://lists.gnu.org/archive/html/qemu-devel/2013-05/msg00435.html).
>
> But pinning all of guest memory has a few drawbacks... as you may already know.
>
> Just to be sure, I double-checked it again with your v7 bits. I started a 64GB/10-VCPU guest (qemu started with the "-realtime mlock=on" option) and, as expected, guest startup took about 20 seconds longer (i.e. the time taken to mlock the 64GB of guest RAM), but the pin-all migration started fine, i.e. I didn't observe any freezes at the start of the migration.


(CC-ing qemu-devel).

OK, that's good to know. This means that we need to bring up the mlock() problem as a "larger" issue in the linux community instead of the QEMU community.

In the meantime, how about I update the RDMA patch to do one of the following:

1. Solution #1:
       If the user requests "x-rdma-pin-all", then
            If QEMU was started with "-realtime mlock=on"
                   Then allow the capability
            Else
                   Disallow the capability

2. Solution #2: Create a NEW QEMU monitor command which locks memory *in advance*, before the migrate command is issued, to make it clear to the user that the cost of locking memory must be paid before the migration starts. (Rough sketches of both approaches are below.)
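
For Solution #1, the check could be as small as the sketch below. Note that qemu_mlock_enabled() is a hypothetical accessor the patch would have to add (the -realtime option is handled inside vl.c and not exposed elsewhere, as far as I can see), and exactly where the check hooks in (probably the migrate_set_capability path) is still open:

/* Sketch for Solution #1 (QEMU-internal code, so not standalone).
 * qemu_mlock_enabled() is hypothetical: the patch would need to export
 * whether "-realtime mlock=on" was given. */
#include <stdbool.h>
#include "qapi/error.h"

bool qemu_mlock_enabled(void);      /* hypothetical accessor, to be added */

static bool rdma_pin_all_allowed(Error **errp)
{
    if (!qemu_mlock_enabled()) {
        error_setg(errp,
                   "x-rdma-pin-all requires starting QEMU with "
                   "'-realtime mlock=on' so the pinning cost is paid "
                   "at startup rather than at migration time");
        return false;
    }
    return true;
}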
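
Solution #2 could be little more than a new HMP command wrapping the same mlockall() path that "-realtime mlock=on" already uses. Again, only a sketch: the command name is made up and the hmp-commands.hx plumbing is omitted:

/* Sketch for Solution #2: a hypothetical HMP command that pins guest
 * memory before "migrate" is issued, so the cost is visible up front.
 * os_mlock() is the existing helper behind "-realtime mlock=on". */
#include "monitor/monitor.h"     /* monitor_printf() */
#include "sysemu/os-posix.h"     /* os_mlock() */

void hmp_migrate_pin_all(Monitor *mon, const QDict *qdict)
{
    if (os_mlock() < 0) {
        monitor_printf(mon, "migrate_pin_all: failed to lock guest memory\n");
    }
}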

Which solution do you prefer? Or do you have an alternative idea?


> https://lists.gnu.org/archive/html/qemu-devel/2013-04/msg04161.html
>
> Again, this is a generic linux mlock/clearpage issue and not directly related to your changes.


Do you have any ideas on how linux can be improved to solve this?
Is there any ongoing work that you know of on mlock() performance?

Is there, perhaps, some way for linux to "parallelize" the mlock()/clearpage operation?
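
To make the question concrete, here is a plain user-space sketch of what I mean by "parallelize": fault the region in from several threads so the kernel's page clearing runs on multiple CPUs, then mlock() the already-resident range. This is only an illustration of the idea (and needs a suitable RLIMIT_MEMLOCK), not something QEMU or the kernel does today.

/* Illustration only: parallel "pre-fault then mlock" in user space.
 * Each thread touches one slice of the buffer so the kernel zeroes pages
 * on several CPUs at once; the final mlock() then mostly just pins pages
 * that are already resident.  Build with: gcc -O2 -pthread prefault.c */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

#define NTHREADS 8

struct slice { unsigned char *base; size_t len; };

static void *prefault(void *arg)
{
    struct slice *s = arg;
    size_t page = (size_t)sysconf(_SC_PAGESIZE);
    for (size_t off = 0; off < s->len; off += page) {
        s->base[off] = 0;   /* first write faults the page in; kernel clears it */
    }
    return NULL;
}

int main(void)
{
    size_t size = (size_t)1 << 30;   /* 1 GB for the example */
    unsigned char *buf = mmap(NULL, size, PROT_READ | PROT_WRITE,
                              MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    pthread_t tids[NTHREADS];
    struct slice slices[NTHREADS];
    size_t per = size / NTHREADS;

    for (int i = 0; i < NTHREADS; i++) {
        slices[i].base = buf + (size_t)i * per;
        slices[i].len = per;
        pthread_create(&tids[i], NULL, prefault, &slices[i]);
    }
    for (int i = 0; i < NTHREADS; i++) {
        pthread_join(tids[i], NULL);
    }

    if (mlock(buf, size) != 0) {   /* pages are resident; this mostly just pins them */
        perror("mlock");
        return 1;
    }
    return 0;
}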

- Michael



