Re: [RFC PATCH v5 11/26] vhost: Handle host notifiers in SVQ


From: Jason Wang
Subject: Re: [RFC PATCH v5 11/26] vhost: Handle host notifiers in SVQ
Date: Thu, 4 Nov 2021 10:31:05 +0800

On Wed, Nov 3, 2021 at 3:40 PM Eugenio Perez Martin <eperezma@redhat.com> wrote:
>
> On Wed, Nov 3, 2021 at 3:56 AM Jason Wang <jasowang@redhat.com> wrote:
> >
> > On Tue, Nov 2, 2021 at 4:47 PM Eugenio Perez Martin <eperezma@redhat.com> 
> > wrote:
> > >
> > > On Tue, Nov 2, 2021 at 8:55 AM Jason Wang <jasowang@redhat.com> wrote:
> > > >
> > > >
> > > > On 10/30/2021 2:35 AM, Eugenio Pérez wrote:
> > > > > If device supports host notifiers, this makes one jump less (kernel)
> > > > > to deliver SVQ notifications to it.
> > > > >
> > > > > Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> > > > > ---
> > > > >   hw/virtio/vhost-shadow-virtqueue.h |  2 ++
> > > > >   hw/virtio/vhost-shadow-virtqueue.c | 23 ++++++++++++++++++++++-
> > > > >   2 files changed, 24 insertions(+), 1 deletion(-)
> > > > >
> > > > > diff --git a/hw/virtio/vhost-shadow-virtqueue.h b/hw/virtio/vhost-shadow-virtqueue.h
> > > > > index 30ab9643b9..eb0a54f954 100644
> > > > > --- a/hw/virtio/vhost-shadow-virtqueue.h
> > > > > +++ b/hw/virtio/vhost-shadow-virtqueue.h
> > > > > @@ -18,6 +18,8 @@ typedef struct VhostShadowVirtqueue VhostShadowVirtqueue;
> > > > >   void vhost_svq_set_svq_kick_fd(VhostShadowVirtqueue *svq, int svq_kick_fd);
> > > > >   const EventNotifier *vhost_svq_get_dev_kick_notifier(
> > > > >                                                 const VhostShadowVirtqueue *svq);
> > > > > +void vhost_svq_set_host_mr_notifier(VhostShadowVirtqueue *svq, void *addr);
> > > > > +
> > > > > +
> > > > >   void vhost_svq_start(struct vhost_dev *dev, unsigned idx,
> > > > >                        VhostShadowVirtqueue *svq, int svq_kick_fd);
> > > > >   void vhost_svq_stop(struct vhost_dev *dev, unsigned idx,
> > > > > diff --git a/hw/virtio/vhost-shadow-virtqueue.c b/hw/virtio/vhost-shadow-virtqueue.c
> > > > > index fda60d11db..e3dcc039b6 100644
> > > > > --- a/hw/virtio/vhost-shadow-virtqueue.c
> > > > > +++ b/hw/virtio/vhost-shadow-virtqueue.c
> > > > > @@ -29,6 +29,12 @@ typedef struct VhostShadowVirtqueue {
> > > > >        * So shadow virtqueue must not clean it, or we would lose VirtQueue one.
> > > > >        */
> > > > >       EventNotifier svq_kick;
> > > > > +
> > > > > +    /* Device's host notifier memory region. NULL means no region */
> > > > > +    void *host_notifier_mr;
> > > > > +
> > > > > +    /* Virtio queue shadowing */
> > > > > +    VirtQueue *vq;
> > > > >   } VhostShadowVirtqueue;
> > > > >
> > > > >   /**
> > > > > @@ -50,7 +56,20 @@ static void vhost_handle_guest_kick(EventNotifier *n)
> > > > >           return;
> > > > >       }
> > > > >
> > > > > -    event_notifier_set(&svq->hdev_kick);
> > > > > +    if (svq->host_notifier_mr) {
> > > > > +        uint16_t *mr = svq->host_notifier_mr;
> > > > > +        *mr = virtio_get_queue_index(svq->vq);
> > > >
> > > >
> > > > Do we need barriers around the possible MMIO here?
> > >
> > > That's right, I missed them.
> > >
> > > >
> > > > To avoid that complicated stuff, I'd rather simply go with the
> > > > eventfd path.
> > > >
> > > > Note mmio and eventfd are not mutually exclusive.
> > >
> > > Actually we cannot ignore them since they are set in the guest. If SVQ
> > > does nothing about them, the guest's notification will travel directly
> > > to the vdpa device, and SVQ cannot intercept them.
> > >
> > > Taking that into account, it actually takes fewer changes to move them
> > > to SVQ (like in this series) than to disable them (like in the previous
> > > series). But we can go with disabling them for sure.
> >
> > I think we can simply disable the memory region for the doorbell; then
> > QEMU/KVM will do all the rest for us.
> >
> > If we want to add barriers, it would mean a lot of architecture-specific
> > instructions, which looks like a burden for us to maintain in QEMU.
> >
> > So if we disable the memory region, KVM will fall back to the eventfd,
> > QEMU can intercept it, and we can simply relay it via the kick fd. This
> > looks easier to maintain.
> >
> > Thanks
> >
>
> Any reason to go off-list? :).

Hit the wrong button :(

Adding the list back.

>
> I'm fine doing it that way, but it seems to me there must be a way to
> handle this, since VFIO, UIO, etc. would have the same issues. The worst
> case would be that these accesses are resolved through a syscall or
> similar. How does DPDK solve it?

I guess it uses per-arch assembly, etc.
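
Just to illustrate the kind of code this would mean on the QEMU side, here is
a rough sketch of a barrier-wrapped doorbell write. The helper name is made
up, and the choice of smp_wmb()/smp_mb() from "qemu/atomic.h" is only an
assumption; a real MMIO mapping may well need stronger, architecture-specific
I/O barriers, which is exactly the maintenance burden mentioned above:

#include "qemu/atomic.h"    /* smp_wmb() / smp_mb() */

/* Hypothetical helper, not part of the patch: kick the device through the
 * host notifier memory region, with ordering around the (possibly MMIO)
 * store. */
static void vhost_svq_kick_host_notifier(VhostShadowVirtqueue *svq)
{
    volatile uint16_t *mr = svq->host_notifier_mr;

    /* Make the descriptor/avail ring updates visible before the doorbell. */
    smp_wmb();
    *mr = virtio_get_queue_index(svq->vq);
    /* Keep the doorbell store from being reordered past later accesses. */
    smp_mb();
}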

> Probably with specific asm as you say...

We can go this way, but to speed up the merging, I'd go with the eventfd
first to avoid dependencies. We can do that in the future as a performance
optimization.
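
To make the eventfd-first plan concrete, a minimal sketch of what the
guest-kick handler reduces to when the host notifier memory region stays
disabled (field and function names are taken from the quoted patch; how the
handler obtains svq is assumed here):

static void vhost_handle_guest_kick(EventNotifier *n)
{
    VhostShadowVirtqueue *svq = container_of(n, VhostShadowVirtqueue,
                                             svq_kick);

    if (!event_notifier_test_and_clear(n)) {
        return;
    }

    /* Relay the guest notification to the device through its kick eventfd.
     * With the doorbell memory region disabled, KVM falls back to this
     * eventfd, so no barriers around MMIO are needed. */
    event_notifier_set(&svq->hdev_kick);
}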

Thanks

>
> Thanks!
>
>
> > >
> > > Thanks!
> > >
> > > >
> > > > Thanks
> > > >
> > > >
> > > > > +    } else {
> > > > > +        event_notifier_set(&svq->hdev_kick);
> > > > > +    }
> > > > > +}
> > > > > +
> > > > > +/*
> > > > > + * Set the device's memory region notifier. addr = NULL clears it.
> > > > > + */
> > > > > +void vhost_svq_set_host_mr_notifier(VhostShadowVirtqueue *svq, void *addr)
> > > > > +{
> > > > > +    svq->host_notifier_mr = addr;
> > > > >   }
> > > > >
> > > > >   /**
> > > > > @@ -134,6 +153,7 @@ void vhost_svq_stop(struct vhost_dev *dev, unsigned idx,
> > > > >    */
> > > > >   VhostShadowVirtqueue *vhost_svq_new(struct vhost_dev *dev, int idx)
> > > > >   {
> > > > > +    int vq_idx = dev->vq_index + idx;
> > > > >       g_autofree VhostShadowVirtqueue *svq = g_new0(VhostShadowVirtqueue, 1);
> > > > >       int r;
> > > > >
> > > > > @@ -151,6 +171,7 @@ VhostShadowVirtqueue *vhost_svq_new(struct vhost_dev *dev, int idx)
> > > > >           goto err_init_hdev_call;
> > > > >       }
> > > > >
> > > > > +    svq->vq = virtio_get_queue(dev->vdev, vq_idx);
> > > > >       return g_steal_pointer(&svq);
> > > > >
> > > > >   err_init_hdev_call:
> > > >
> > >
> >
>



