From: Michael S. Tsirkin
Subject: Re: [Qemu-devel] [RFC PATCH RDMA support v5: 03/12] comprehensive protocol documentation
Date: Sun, 14 Apr 2013 21:51:16 +0300

On Sun, Apr 14, 2013 at 10:31:20AM -0400, Michael R. Hines wrote:
> On 04/14/2013 04:28 AM, Michael S. Tsirkin wrote:
> >On Fri, Apr 12, 2013 at 09:47:08AM -0400, Michael R. Hines wrote:
> >>Second, as I've explained, I strongly, strongly disagree with unregistering
> >>memory, for all of the aforementioned reasons: workloads do not
> >>operate in such a manner that they can tolerate memory being
> >>pulled out from underneath them at such fine-grained time scales
> >>in the *middle* of a relocation, and I will not commit to writing a
> >>solution for a problem that doesn't exist.
> >Exactly the same thing happens with swap, doesn't it?
> >You are saying workloads simply cannot tolerate swap.
> >
> >>If you can prove (through some kind of analysis) that workloads
> >>would benefit from this kind of fine-grained memory overcommit
> >>by having cgroups swap out memory to disk underneath them
> >>without their permission, I would happily reconsider my position.
> >>
> >>- Michael
> >This has nothing to do with cgroups directly, it's just a way to
> >demonstrate you have a bug.
> >
> 
> If your datacenter or your cloud or your product does not want to
> tolerate page registration, then don't use RDMA!
> 
> The bottom line is: RDMA is useless without page registration. Without
> it, RDMA's performance will be crippled. If you define that as a bug,
> then so be it.
> 
> - Michael
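
For reference, "page registration" in the libibverbs sense means calling
ibv_reg_mr(): the kernel pins the pages backing the buffer, and the pinned
bytes count against the process's locked-memory limit (RLIMIT_MEMLOCK,
i.e. what "ulimit -l" controls).  A minimal sketch, assuming a protection
domain pd has already been created; the function name is illustrative and
error handling is elided:

    #include <infiniband/verbs.h>
    #include <sys/resource.h>
    #include <stdio.h>

    /* Register (and pin) one block of guest memory.  The pinned bytes
     * are charged against RLIMIT_MEMLOCK. */
    static struct ibv_mr *register_block(struct ibv_pd *pd,
                                         void *buf, size_t len)
    {
        struct rlimit rl;

        /* Show how much locked memory this process may pin. */
        if (getrlimit(RLIMIT_MEMLOCK, &rl) == 0) {
            fprintf(stderr, "memlock limit: cur=%llu max=%llu\n",
                    (unsigned long long)rl.rlim_cur,
                    (unsigned long long)rl.rlim_max);
        }

        /* Pins the pages; fails once the locked-memory limit is hit. */
        return ibv_reg_mr(pd, buf, len,
                          IBV_ACCESS_LOCAL_WRITE |
                          IBV_ACCESS_REMOTE_READ |
                          IBV_ACCESS_REMOTE_WRITE);
    }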

No one cares if you do page registration or not.  "ulimit -l 10g" is the
problem: you should limit the amount of locked memory.
Lots of good research went into making RDMA go fast with limited locked
memory, with some success.  Search for "registration cache", for example.
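
A registration cache, roughly: keep a table of regions that are already
registered (pinned), reuse a hit instead of calling ibv_reg_mr() again, and
deregister cold entries so the total stays under a locked-memory budget.
The following is a rough sketch only, not code from the patch series; the
names (reg_cache, cache_get_mr, PIN_BUDGET) and the LRU eviction policy are
illustrative:

    #include <infiniband/verbs.h>
    #include <stdint.h>
    #include <string.h>

    #define CACHE_SLOTS 64
    #define PIN_BUDGET  (64ULL << 20)       /* keep at most 64 MB pinned */

    struct reg_entry {
        void *addr;
        size_t len;
        struct ibv_mr *mr;
        uint64_t last_use;
    };

    static struct reg_entry reg_cache[CACHE_SLOTS];
    static uint64_t pinned_bytes, tick;

    /* Pick the least-recently-used occupied slot, or -1 if none. */
    static int lru_victim(void)
    {
        int i, v = -1;

        for (i = 0; i < CACHE_SLOTS; i++) {
            if (reg_cache[i].mr &&
                (v < 0 || reg_cache[i].last_use < reg_cache[v].last_use)) {
                v = i;
            }
        }
        return v;
    }

    /* Deregister (unpin) one entry and free its slot. */
    static void evict(int i)
    {
        pinned_bytes -= reg_cache[i].len;
        ibv_dereg_mr(reg_cache[i].mr);
        memset(&reg_cache[i], 0, sizeof(reg_cache[i]));
    }

    static struct ibv_mr *cache_get_mr(struct ibv_pd *pd,
                                       void *addr, size_t len)
    {
        int i, slot = -1;
        struct ibv_mr *mr;

        /* Hit: an existing registration already covers the range. */
        for (i = 0; i < CACHE_SLOTS; i++) {
            if (reg_cache[i].mr &&
                (char *)addr >= (char *)reg_cache[i].addr &&
                (char *)addr + len <=
                (char *)reg_cache[i].addr + reg_cache[i].len) {
                reg_cache[i].last_use = ++tick;
                return reg_cache[i].mr;
            }
            if (!reg_cache[i].mr) {
                slot = i;
            }
        }

        /* Miss: unpin LRU entries until the new region fits under the
         * budget and a free slot exists. */
        while (pinned_bytes + len > PIN_BUDGET || slot < 0) {
            i = lru_victim();
            if (i < 0) {
                break;                  /* nothing left to evict */
            }
            evict(i);
            slot = i;
        }
        if (slot < 0) {
            return NULL;
        }

        /* Register the new region (this is the call that pins pages). */
        mr = ibv_reg_mr(pd, addr, len,
                        IBV_ACCESS_LOCAL_WRITE | IBV_ACCESS_REMOTE_READ |
                        IBV_ACCESS_REMOTE_WRITE);
        if (!mr) {
            return NULL;
        }
        reg_cache[slot] = (struct reg_entry){ addr, len, mr, ++tick };
        pinned_bytes += len;
        return mr;
    }

The point of the sketch is only that dynamic (de)registration lets pinned
memory stay bounded without giving up registration entirely; the actual
policy in the patches under discussion may well differ.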

-- 
MST


