From: Marc-André Lureau
Subject: Re: [Qemu-devel] [PATCH 11/18] vhost-user: add shutdown support
Date: Wed, 13 Apr 2016 05:51:15 -0400 (EDT)

Hi

----- Original Message -----
> Hi Marc,
> 
> First of all, sorry again for the late response!
> 
> Last time, when I tried your first version, I found a few issues
> related to reconnect, mainly the loss of acked_features. While
> checking your new code, I found that you've already solved that,
> which is great.
> 
> So I tried harder this time: your patches work great, except that I
> found a few nits.
> 
> On Fri, Apr 01, 2016 at 01:16:21PM +0200, address@hidden wrote:
> > From: Marc-André Lureau <address@hidden>
> ...
> > +Slave message types
> > +-------------------
> > +
> > + * VHOST_USER_SLAVE_SHUTDOWN:
> > +      Id: 1
> > +      Master payload: N/A
> > +      Slave payload: u64
> > +
> > +      Request the master to shutdown the slave. A 0 reply is for
> > +      success, in which case the slave may close all connections
> > +      immediately and quit.
> 
> Assume we are using OVS + DPDK here, so that we could have two
> vhost-user connections. While OVS tries to initiate a restart,
> it might unregister the two connections one by one. In that
> case, two VHOST_USER_SLAVE_SHUTDOWN requests will be sent,
> and two replies will be received. Therefore, I don't think it's
> appropriate to ask the backend implementation to quit here.
> 

By the time it replies with success, the master has already sent all
the commands needed to finish the connection, so the slave must
flush/finish all pending requests first. I think this should be
enough; otherwise we may need a new explicit message?
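
To make the intended ordering concrete, here is a minimal sketch of
the slave side. The message layout is simplified, and vu_send_msg(),
vu_recv_msg(), flush_pending_requests() and close_all_connections()
are hypothetical helper names of mine, not names from the patch:

#include <stdbool.h>
#include <stdint.h>

#define VHOST_USER_SLAVE_SHUTDOWN 1      /* Id: 1, as in the spec above */

/* Simplified message layout, for illustration only. */
typedef struct VhostUserMsg {
    uint32_t request;
    uint32_t flags;
    uint32_t size;
    uint64_t u64;                        /* payload */
} VhostUserMsg;

/* Hypothetical helpers, not part of the patch under review. */
bool vu_send_msg(int fd, const VhostUserMsg *msg);
bool vu_recv_msg(int fd, VhostUserMsg *msg);
void flush_pending_requests(void);
void close_all_connections(void);

static bool slave_request_shutdown(int slave_fd)
{
    VhostUserMsg msg = { .request = VHOST_USER_SLAVE_SHUTDOWN };

    /* Ask the master to shut us down. */
    if (!vu_send_msg(slave_fd, &msg)) {
        return false;
    }

    /*
     * By the time the master replies, it has already sent every
     * command needed to finish the connection, so drain those
     * before acting on the reply.
     */
    flush_pending_requests();

    if (!vu_recv_msg(slave_fd, &msg) || msg.u64 != 0) {
        return false;                    /* non-zero reply: stay alive */
    }

    /* A 0 reply means we may close all connections and quit. */
    close_all_connections();
    return true;
}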

> 
> >  
> >      switch (msg.request) {
> > +    case VHOST_USER_SLAVE_SHUTDOWN: {
> > +        uint64_t success = 1; /* 0 is for success */
> > +        if (dev->stop) {
> > +            dev->stop(dev);
> > +            success = 0;
> > +        }
> > +        msg.payload.u64 = success;
> > +        msg.size = sizeof(msg.payload.u64);
> > +        size = send(u->slave_fd, &msg, VHOST_USER_HDR_SIZE + msg.size, 0);
> > +        if (size != VHOST_USER_HDR_SIZE + msg.size) {
> > +            error_report("Failed to write reply.");
> > +        }
> > +        break;
> 
> You might want to remove the slave_fd from the watch list? We
> might also need to close slave_fd here, assuming that we will
> no longer use it once VHOST_USER_SLAVE_SHUTDOWN is received?

Makes sense, I will change that in the next update.
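
For concreteness, the follow-up might look something like this sketch,
assuming the fd was registered with qemu_set_fd_handler() and that 'u'
is the vhost-user state from the patch (the helper name is mine):

#include <unistd.h>             /* close() */
#include "qemu/main-loop.h"     /* qemu_set_fd_handler() */

/* Hypothetical helper: tear down the slave channel once the
 * VHOST_USER_SLAVE_SHUTDOWN reply has been sent. */
static void slave_channel_teardown(struct vhost_user *u)
{
    /* Unregister the fd first, so the read handler can never fire
     * again with a stale opaque pointer. */
    qemu_set_fd_handler(u->slave_fd, NULL, NULL, NULL);
    close(u->slave_fd);
    u->slave_fd = -1;
}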

> I'm asking because I sometimes hit a seg fault, due to opaque
> being NULL.
>

I would be interested in seeing the backtrace, or in having a reproducer.

thanks


