
From: Michael R. Hines
Subject: Re: [Qemu-devel] [RFC PATCH RDMA support v5: 03/12] comprehensive protocol documentation
Date: Mon, 15 Apr 2013 09:08:01 -0400
User-agent: Mozilla/5.0 (X11; Linux i686; rv:17.0) Gecko/20130106 Thunderbird/17.0.2

On 04/15/2013 04:28 AM, Paolo Bonzini wrote:
> On 15/04/2013 03:06, Michael R. Hines wrote:
>>>> Next, decide if you want dynamic page registration on the
>>>> server-side. For example, if you have an 8GB RAM virtual machine,
>>>> but only 1GB is in active use, then disabling this feature will
>>>> cause all 8GB to be pinned and resident in memory. This feature
>>>> mostly affects the bulk-phase round of the migration and can be
>>>> disabled for extremely high-performance RDMA hardware using the
>>>> following command:
>>>>
>>>> QEMU Monitor Command:
>>>> $ migrate_set_capability chunk_register_destination off # enabled by default
>>>>
>>>> Performing this action will cause all 8GB to be pinned, so if that's
>>>> not what you want, then please ignore this step altogether.
>>> This does not make it clear what the benefit of disabling this
>>> capability is. I think it's best to avoid options, just use
>>> chunk-based always.
>>> If it's here "so people can play with it" then please rename
>>> it to something like "x-unsupported-chunk_register_destination"
>>> so people know this is unsupported and not to be used for production.
>> Again, please drop the request for removing chunking.

>> Paolo already told me to use "x-rdma" - so that's enough for now.
>>> You are adding a new command that's also experimental, so you must tag
>>> it explicitly too.
>> The entire migration is experimental - which by extension makes the
>> capability experimental.
> You still have to mark it as "x-". Of course not "x-unsupported-", that
> is a pleonasm.


Sure, I'm happy to add another 'x'. I will submit a patch with all the new
changes as soon as the pull completes.

- Michael
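
[Editor's note: for readers of the archive, the capability under discussion is toggled at the QEMU monitor before starting the migration. A minimal sketch of the workflow, assuming the transport name from this patch series ("x-rdma", per the renaming discussed above; the final upstream syntax may differ) and placeholder destination address/port:]

```
(qemu) migrate_set_capability chunk_register_destination off
(qemu) migrate -d x-rdma:192.168.1.1:4444
(qemu) info migrate
```

[Disabling chunk_register_destination pins all guest RAM on the destination up front, as the quoted documentation notes; leaving it at its default (on) registers pages in chunks on demand.]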
