From: Michael R. Hines
Subject: Re: [Qemu-devel] [RFC PATCH RDMA support v5: 03/12] comprehensive protocol documentation
Date: Sun, 14 Apr 2013 12:40:10 -0400
User-agent: Mozilla/5.0 (X11; Linux i686; rv:17.0) Gecko/20130106 Thunderbird/17.0.2

On 04/14/2013 12:03 PM, Michael S. Tsirkin wrote:
On Sun, Apr 14, 2013 at 10:27:24AM -0400, Michael R. Hines wrote:
On 04/14/2013 07:59 AM, Michael S. Tsirkin wrote:
On Fri, Apr 12, 2013 at 04:43:54PM +0200, Paolo Bonzini wrote:
On 12/04/2013 13:25, Michael S. Tsirkin wrote:
On Fri, Apr 12, 2013 at 12:53:11PM +0200, Paolo Bonzini wrote:
On 12/04/2013 12:48, Michael S. Tsirkin wrote:
1.  You have two protocols already and this does not make sense in
version 1 of the patch.
It makes sense if we consider it experimental (add x- in front of
transport and capability) and would like people to play with it.

Paolo
But it's not testable yet.  I see problems just reading the
documentation.  Author thinks "ulimit -l 10000000000" on both source and
destination is just fine.  This can easily crash host or cause OOM
killer to kill QEMU.  So why is there any need for extra testers?  Fix
the major bugs first.

There's a similar issue with device assignment - we can't fix it there,
and despite being available for years, this was one of the two reasons
that have kept that feature out of the hands of many users (and assuming
the guest has lots of zero pages won't work: the balloon is not widely
used either, since it depends on a well-behaved guest to work correctly).
I agree assuming guest has lots of zero pages won't work, but I think
you are overstating the importance of overcommit.  Let's mark the damn
thing as experimental, and stop making perfect the enemy of good.

Paolo
It looks like we have to decide, before merging, whether migration with
RDMA that breaks overcommit is worth it or not, since the author has made
it very clear that he does not intend to make it work with overcommit, ever.

That depends entirely on what you define as overcommit.
You don't get to define your own terms.  Look it up in wikipedia or
something.

The pages do get unregistered at the end of the migration =)

- Michael
The limitations are pretty clear, and you really should document them:

1. run qemu as root, or under ulimit -l <total guest memory> on both source and
   destination

2. expect that as much as that amount of memory will be pinned
   and unavailable to the host kernel and applications for
   an arbitrarily long time.
   Make sure you have much more RAM in the host, or QEMU will get killed.

To me, especially 1 is an unacceptable security tradeoff.
It is entirely fixable but we both have other priorities,
so it'll stay broken.
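
For what it's worth, the "ulimit -l" limit above is the shell front-end for
RLIMIT_MEMLOCK, which caps how much memory an unprivileged process may have
pinned. A minimal, hypothetical C snippet (not part of this series) to check
the current limit - note that ulimit -l reports kbytes while getrlimit()
reports bytes:

    /* Print the RLIMIT_MEMLOCK limit that "ulimit -l" adjusts.  If it is
     * smaller than total guest RAM, pinning all of guest memory for RDMA
     * will fail unless QEMU runs as root (or with CAP_IPC_LOCK). */
    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
        struct rlimit rl;

        if (getrlimit(RLIMIT_MEMLOCK, &rl) != 0) {
            perror("getrlimit");
            return 1;
        }

        if (rl.rlim_cur == RLIM_INFINITY) {
            printf("locked memory limit: unlimited\n");
        } else {
            printf("locked memory limit: %llu bytes\n",
                   (unsigned long long)rl.rlim_cur);
        }
        return 0;
    }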


I've modified the beginning of docs/rdma.txt to say the following:

$ cat docs/rdma.txt

... snip ..

BEFORE RUNNING:
===============

Use of RDMA requires pinning and registering memory with the
hardware. If this is not acceptable for your application or
product, then the use of RDMA is strongly discouraged and you
should revert to standard TCP-based migration.
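
(Not part of the doc text, just for context: "registering memory with the
hardware" means creating a verbs memory region with ibv_reg_mr(), and the
kernel pins the backing pages for the lifetime of that registration. A rough,
illustrative sketch - not code from this patch series, protection-domain
setup omitted:)

    /* Illustration only: the pages backing 'ram' are pinned by
     * ibv_reg_mr() and count against RLIMIT_MEMLOCK until the memory
     * region is destroyed with ibv_dereg_mr(). */
    #include <infiniband/verbs.h>
    #include <stddef.h>

    static struct ibv_mr *pin_ram(struct ibv_pd *pd, void *ram, size_t len)
    {
        return ibv_reg_mr(pd, ram, len,
                          IBV_ACCESS_LOCAL_WRITE | IBV_ACCESS_REMOTE_WRITE);
    }

    static void unpin_ram(struct ibv_mr *mr)
    {
        ibv_dereg_mr(mr);    /* the pages become unpinned again */
    }

This is also why the pages only become reclaimable again once the migration
code deregisters them at the end of the migration.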

Next, decide whether you want dynamic page registration on the
destination (server) side. For example, if you have an 8GB RAM virtual
machine but only 1GB is in active use, then disabling this feature will
cause all 8GB to be pinned and resident in memory, whereas leaving it
enabled pins pages only as they are transferred. The feature mostly
affects the bulk phase of the migration and can be disabled for
extremely high-performance RDMA hardware using the following command:

QEMU Monitor Command:
$ migrate_set_capability chunk_register_destination off # enabled by default

Performing this action will cause all 8GB to be pinned, so if that's
not what you want, then please ignore this step altogether.
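
(Again as an aside rather than doc text: a conceptual sketch of the two
destination-side strategies, reusing pin_ram() from the sketch above. The
1 MB chunk size and the helper names are assumptions for illustration, not
the actual code in this series.)

    #include <infiniband/verbs.h>
    #include <stddef.h>

    #define CHUNK_SIZE (1UL << 20)          /* assumed chunk granularity */

    struct ibv_mr *pin_ram(struct ibv_pd *pd, void *ram, size_t len);

    /* chunk_register_destination = off: the whole RAM block is registered,
     * and therefore pinned, up front. */
    static struct ibv_mr *register_everything(struct ibv_pd *pd,
                                              void *ram, size_t ram_len)
    {
        return pin_ram(pd, ram, ram_len);
    }

    /* chunk_register_destination = on: a chunk is registered the first
     * time data arrives for it, so memory the source never writes into
     * stays unpinned on the destination. */
    static struct ibv_mr *register_on_demand(struct ibv_pd *pd, void *ram,
                                             size_t write_offset)
    {
        size_t chunk_start = write_offset & ~(CHUNK_SIZE - 1);
        return pin_ram(pd, (char *)ram + chunk_start, CHUNK_SIZE);
    }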

RUNNING:
========

..... snip ...

I'll group this change into a future patch whenever the current patch
gets pulled, and I will also update the QEMU wiki to make this point clear.

- Michael