From: Michael S. Tsirkin
Subject: Re: [Qemu-devel] [RFC PATCH RDMA support v5: 03/12] comprehensive protocol documentation
Date: Sun, 14 Apr 2013 21:30:41 +0300

On Sun, Apr 14, 2013 at 12:40:10PM -0400, Michael R. Hines wrote:
> On 04/14/2013 12:03 PM, Michael S. Tsirkin wrote:
> >On Sun, Apr 14, 2013 at 10:27:24AM -0400, Michael R. Hines wrote:
> >>On 04/14/2013 07:59 AM, Michael S. Tsirkin wrote:
> >>>On Fri, Apr 12, 2013 at 04:43:54PM +0200, Paolo Bonzini wrote:
> >>>>Il 12/04/2013 13:25, Michael S. Tsirkin ha scritto:
> >>>>>On Fri, Apr 12, 2013 at 12:53:11PM +0200, Paolo Bonzini wrote:
> >>>>>>Il 12/04/2013 12:48, Michael S. Tsirkin ha scritto:
> >>>>>>>1.  You have two protocols already and this does not make sense in
> >>>>>>>version 1 of the patch.
> >>>>>>It makes sense if we consider it experimental (add x- in front of
> >>>>>>transport and capability) and would like people to play with it.
> >>>>>>
> >>>>>>Paolo
> >>>>>But it's not testable yet.  I see problems just reading the
> >>>>>documentation.  The author thinks "ulimit -l 10000000000" on both source and
> >>>>>destination is just fine.  This can easily crash host or cause OOM
> >>>>>killer to kill QEMU.  So why is there any need for extra testers?  Fix
> >>>>>the major bugs first.
> >>>>>
> >>>>>There's a similar issue with device assignment - we can't fix it there,
> >>>>>and despite being available for years, this was one of two reasons that
> >>>>>have kept this feature out of the hands of many users (and assuming the
> >>>>>guest has lots of zero pages won't work: the balloon is not widely used
> >>>>>either, since it depends on a well-behaved guest to work correctly).
> >>>>I agree assuming the guest has lots of zero pages won't work, but I think
> >>>>you are overstating the importance of overcommit.  Let's mark the damn
> >>>>thing as experimental, and stop making perfect the enemy of good.
> >>>>
> >>>>Paolo
> >>>It looks like we have to decide, before merging, whether migration with
> >>>rdma that breaks overcommit is worth it or not, since the author has made
> >>>it very clear he does not intend to make it work with overcommit, ever.
> >>>
> >>That depends entirely on what you define as overcommit.
> >You don't get to define your own terms.  Look it up in Wikipedia or
> >something.
> >
> >>The pages do get unregistered at the end of the migration =)
> >>
> >>- Michael
> >The limitations are pretty clear, and you really should document them:
> >
> >1. run qemu as root, or under ulimit -l <total guest memory> on both
> >   source and destination
> >
> >2. expect that as much as that amount of memory will be pinned and
> >   unavailable to the host kernel and applications for an arbitrarily
> >   long time.
> >   Make sure you have much more RAM in the host or QEMU will get killed.
> >
> >To me, especially 1 is an unacceptable security tradeoff.
> >It is entirely fixable but we both have other priorities,
> >so it'll stay broken.
> >
> 
> I've modified the beginning of docs/rdma.txt to say the following:

It really should say this, in a very prominent place:

BUGS:
1. You must run qemu as root, or under
   ulimit -l <total guest memory> on both source and destination
   (see the sketch after this list).

2. Expect as much as that amount of memory to be locked
   and unavailable to the host kernel and applications for
   an arbitrarily long time.
   Make sure you have much more RAM in the host, otherwise QEMU,
   or some other arbitrary application on the same host, will get killed.

3. Migration with RDMA support is experimental and unsupported.
   In particular, please do not expect it to work across qemu versions,
   and do not expect the management interface to be stable.
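
For example, a minimal sketch of (1): the 8G guest size, the binary name
and the listen address are illustrative only, the rdma: URI follows the
scheme this series proposes, and note that the shell's ulimit -l takes
kilobytes:

   # allow qemu to lock 8G of guest memory (8 * 1024 * 1024 kB)
   $ ulimit -l 8388608
   # destination side: wait for an RDMA connection on port 4444
   $ qemu-system-x86_64 -m 8192 ... -incoming rdma:0.0.0.0:4444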
   

> 
> $ cat docs/rdma.txt
> 
> ... snip ..
> 
> BEFORE RUNNING:
> ===============
> 
> Use of RDMA requires pinning and registering memory with the
> hardware. If this is not acceptable for your application or
> product, then the use of RDMA is strongly discouraged and you
> should fall back to standard TCP-based migration.

No one knows or should know what "pinning and registering" means.
For which applications and products is it appropriate?
Also, you are talking about the current QEMU code that uses RDMA
for migration, but you say "RDMA" generally.
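
For the record: "pinning" means the pages backing guest RAM are locked
in place so the kernel can neither swap nor move them while the adapter
may DMA into them, and "registering" means handing those pages to the
RDMA hardware; the locked amount is charged against RLIMIT_MEMLOCK.
A rough way to see the effect on a Linux host (the pidof lookup is just
an illustration, and the VmPin field needs a 3.2+ kernel):

   # locked/pinned memory of a running qemu, in kB
   $ grep -E 'VmLck|VmPin' /proc/$(pidof qemu-system-x86_64)/status
   # the RLIMIT_MEMLOCK cap the registration is charged against
   $ ulimit -l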

> Next, decide if you want dynamic page registration on the server side.
> For example, if you have an 8GB RAM virtual machine, but only 1GB
> is in active use, then disabling this feature will cause all 8GB to
> be pinned and resident in memory. This feature mostly affects the
> bulk-phase round of the migration and can be disabled for extremely
> high-performance RDMA hardware using the following command:
> QEMU Monitor Command:
> $ migrate_set_capability chunk_register_destination off # enabled by default
> 
> Performing this action will cause all 8GB to be pinned, so if that's
> not what you want, then please ignore this step altogether.

This does not make it clear what the benefit of disabling this
capability is.  I think it's best to avoid options and just use
chunk-based registration always.
If it's here "so people can play with it", then please rename
it to something like "x-unsupported-chunk_register_destination"
so people know it is unsupported and not to be used in production.
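
For contrast, here is what the two modes look like from the monitor
(a sketch only: the destination address is made up and the rdma:
transport syntax is the one this series proposes):

   # default: chunked registration, pages are pinned on demand
   $ migrate rdma:dest.example.com:4444

   # pin-everything mode: disable chunking, pin all guest RAM up front
   $ migrate_set_capability chunk_register_destination off
   $ migrate rdma:dest.example.com:4444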

> RUNNING:
> ========
> 
> ..... snip ...
> 
> I'll group this change into a future patch whenever the current patch
> gets pulled, and I will also update the QEMU wiki to make this point clear.
> 
> - Michael


-- 
MST


