From: Dr. David Alan Gilbert
Subject: Re: [Qemu-devel] [RFC 00/29] postcopy+vhost-user/shared ram
Date: Fri, 7 Jul 2017 18:26:06 +0100
User-agent: Mutt/1.8.3 (2017-05-23)

* Michael S. Tsirkin (address@hidden) wrote:
> On Fri, Jul 07, 2017 at 01:01:56PM +0100, Dr. David Alan Gilbert wrote:
> > > >    Take care of deadlocking; any thread in the client that
> > > >    accesses a userfault protected page can stall.
> > > 
> > > And it can happen under a lock quite easily.
> > > What exactly is proposed here?
> > > Maybe we want to reuse the new channel that the IOMMU uses.
> > 
> > There's no fundamental reason to get deadlocks as long as you
> > get it right; the qemu thread that processes the user-fault's
> > is a separate independent thread, so once it's going the client
> > can do whatever it likes and it will get woken up without
> > intervention.
> 
> You take a lock for the channel, then access guest memory.
> Then the thread that gets messages from qemu can't get
> on the channel to mark range as populated.

It doesn't need to get the message from qemu to know it's populated
though; qemu performs a WAKE ioctl on the userfaultfd to cause
it to wake, so there's no action needed by the client.
(If it does need to take a lock then yes, we have a problem.)

> > Some care is needed around the postcopy-end; reception of the
> > message that tells you to drop the userfault enables (which
> > frees anything that hasn't been woken) must be allowed to happen
> > for postcopy to complete; we take care that QEMU's fault
> > thread lives on until that message is acknowledged.
> >
> > I'm more worried about how this will work in a full packet switch
> > when one vhost-user client for an incoming migration stalls
> > the whole switch unless care is taken about the design.
> > How do we figure out whether this is going to fly on a full stack?
> 
> It's performance though. Client could run in a separate
> thread for a while until migration finishes.
> We need to make sure there's explicit documentation
> that tells clients at what point they might block.

Right.

> > That's my main reason for getting this WIP set out here to
> > get comments.
> 
> What will happen if QEMU dies? Is there a way to unblock the client?

If the client can detect this and close its userfaultfd then yes;
of course, that detection has to be done in a thread that can't
itself be blocked by anything related to the userfaultfd.

> > > >    There's a nasty hack of a lock around the set_mem_table message.
> > > 
> > > Yes.
> > > 
> > > >    I've not looked at the recent IOMMU code.
> > > > 
> > > >    Some cleanup and a lot of corner cases need thinking about.
> > > > 
> > > >    There are probably plenty of unknown issues as well.
> > > 
> > > At the protocol level, I'd like to rename the feature to
> > > USER_PAGEFAULT. Client does not really know anything about
> > > copies, it's all internal to qemu.
> > > Spec can document that it's used by qemu for postcopy.
> > 
> > OK, tbh I suspect that using it for anything else would be tricky
> > without adding more protocol features for that other use case.
> > 
> > Dave
> 
> Why exactly? How does client have to know it's migration?

It's more the sequence I worry about; we're reliant on
making sure that the userfaultfd is registered with the RAM before
it's ever accessed, and we unregister at the end.
This all keys in with migration requesting registration at the right
point before loading the devices.

Dave

> -- 
> MST
--
Dr. David Alan Gilbert / address@hidden / Manchester, UK


