Re: [Qemu-devel] [virtio-comment] [PATCH] *** Vhost-pci RFC v2 ***
From: Michael S. Tsirkin
Subject: Re: [Qemu-devel] [virtio-comment] [PATCH] *** Vhost-pci RFC v2 ***
Date: Tue, 30 Aug 2016 14:10:45 +0300
On Tue, Aug 30, 2016 at 10:08:01AM +0000, Wang, Wei W wrote:
> On Monday, August 29, 2016 11:25 PM, Stefan Hajnoczi wrote:
> > To: Wang, Wei W <address@hidden>
> > Cc: address@hidden; address@hidden; virtio-
> > address@hidden; address@hidden; address@hidden
> > Subject: Re: [virtio-comment] [PATCH] *** Vhost-pci RFC v2 ***
> >
> > On Mon, Jun 27, 2016 at 02:01:24AM +0000, Wang, Wei W wrote:
> > > On Sun 6/19/2016 10:14 PM, Wei Wang wrote:
> > > > This RFC proposes a design of vhost-pci, which is a new virtio device
> > > > type.
> > > > The vhost-pci device is used for inter-VM communication.
> > > >
> > > > Changes in v2:
> > > > 1. changed the vhost-pci driver to use a controlq to send
> > > > acknowledgement
> > > > messages to the vhost-pci server rather than writing to the device
> > > > configuration space;
> > > >
> > > > 2. re-organized all the data structures and the description
> > > > layout;
> > > >
> > > > 3. removed the VHOST_PCI_CONTROLQ_UPDATE_DONE socket message,
> > > > which is redundant;
> > > >
> > > > 4. added a message sequence number to the msg info structure to
> > > > identify socket
> > > > messages, and the socket message exchange does not need to be
> > > > blocking;
> > > >
> > > > 5. changed to use a uuid to identify each VM rather than using
> > > > the QEMU process id
> > > >
> > >
> > > One more point to add: the server needs to send periodic socket
> > > messages to check whether the driver VM is still alive. I will add
> > > support for this message in the next version. (*v2-AR1*)
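
The keepalive idea above could look roughly like the following, a minimal
sketch assuming a 2-byte message id plus 4-byte sequence number wire format;
the VHOST_PCI_MSG_PING/ACK ids are hypothetical, since the RFC does not yet
define a keepalive message:

```python
import socket
import struct

# Hypothetical message IDs -- illustrative only, not part of the RFC.
VHOST_PCI_MSG_PING = 100
VHOST_PCI_MSG_ACK = 101

def send_msg(sock, msg_id, seq):
    # 2-byte message id + 4-byte sequence number, little-endian, echoing
    # the RFC's idea of tagging each socket message with a sequence
    # number so that exchanges need not be blocking.
    sock.sendall(struct.pack('<HI', msg_id, seq))

def recv_msg(sock):
    # A real implementation would loop until all 6 bytes arrive; this
    # sketch assumes the whole message is delivered in one read.
    data = sock.recv(6)
    if not data:
        return None  # EOF: the peer (driver VM) has gone away
    return struct.unpack('<HI', data)
```

A socketpair can stand in for the server/driver connection: the server pings,
the driver acks with the same sequence number, and a None return tells the
server the driver VM is gone.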
> >
> > Either the driver VM could go down or the device VM (server) could go
> > down. In both cases there must be a way to handle the situation.
> >
> > If the server VM goes down it should be possible for the driver VM to
> > resume either via hotplug of a new device or through messages
> > reinitializing the dead device when the server VM restarts.
>
> I got feedback from people that the names "device VM" and "driver VM"
> are difficult to remember. Can we use client (or frontend) VM and
> server (or backend) VM in the discussion? I think that would sound
> more straightforward :)
So the server is the device VM?
Sounds even more confusing to me :)
frontend/backend is kind of OK if you really
prefer it, but let's add some text that explains how this translates to
the device/driver terminology that the rest of the text uses.
>
> Here are the two cases:
>
> Case 1: When the client VM powers off, the server VM will notice that
> the connection is closed (the client calls the socket close()
> function, which notifies the server of the disconnection). The server
> then needs to remove the vhost-pci device for that client VM. When the
> client VM boots up and connects to the server again, the server VM
> re-establishes the inter-VM communication channel (i.e. it creates a
> new vhost-pci device and hot-plugs it into the server VM).
So on reset you really must wait for the backend to stop
doing things before you proceed. Closing the socket won't
guarantee this; it's asynchronous.
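
The server-side reaction to a client disconnect in Case 1 might be sketched
as below; the `devices` dict mapping a client VM's uuid to its vhost-pci
device is hypothetical bookkeeping, not anything the RFC specifies:

```python
import socket

# Hypothetical server-side state: the RFC identifies each client VM by
# uuid, so map uuid -> the vhost-pci device created for that client.
devices = {}

def on_client_readable(sock, uuid):
    """Handle one readable event on a client VM's socket."""
    data = sock.recv(4096)
    if not data:
        # The client VM closed its end (e.g. it powered off): recv()
        # returns EOF, and the server removes the vhost-pci device it
        # had created for that client VM.
        sock.close()
        devices.pop(uuid, None)
        return False  # connection gone
    # ... otherwise parse and dispatch the socket message ...
    return True
```

Seeing EOF only says the socket is gone, though; as noted above, on a reset
the server still has to quiesce its own use of shared resources before
reusing them, because close() is asynchronous.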
> Case 2: When the server VM powers off, the client doesn't need to do
> anything. We can provide a way in the QEMU monitor to re-establish the
> connection. So, when the server boots up again, the admin can let a
> client connect to the server via the client-side QEMU monitor.
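
What such a monitor-triggered reconnect might do on the client side can be
sketched as a simple retry loop; the `reconnect` helper, its retry policy,
and the UNIX-socket path are all assumptions for illustration:

```python
import socket
import time

def reconnect(path, retries=5, delay=0.5):
    # Hypothetical client-side helper: what a QEMU monitor "reconnect"
    # command might do once the admin knows the server VM is back up.
    # `path` is the server's listening UNIX socket.
    for _ in range(retries):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            sock.connect(path)
            return sock  # connected: device setup is re-negotiated from here
        except OSError:
            sock.close()
            time.sleep(delay)
    return None  # server still down; the admin can retry later
```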
>
> Best,
> Wei
>
>
You need the server to be careful, though.
If it leaves the rings in an inconsistent state, there's a problem.
--
MST