
Re: [Qemu-devel] [PATCH v2 2/2] vhost: double check used memslots number


From: Michael S. Tsirkin
Subject: Re: [Qemu-devel] [PATCH v2 2/2] vhost: double check used memslots number
Date: Fri, 22 Dec 2017 23:15:09 +0200

On Fri, Dec 22, 2017 at 07:48:55PM +0100, Igor Mammedov wrote:
> On Fri, 15 Dec 2017 16:45:55 +0800
> Jay Zhou <address@hidden> wrote:
> 
> > If the VM already has N (N > 8) available memory slots for vhost-user,
> > the VM will crash in vhost_user_set_mem_table if we try to hotplug the
> > first vhost-user NIC.
> > This patch checks whether the memslots number is exceeded after
> > updating vhost_user_used_memslots.
> Can't understand the commit message, please rephrase (what is being fixed,
> and how it's fixed).
> Also include reproduction steps for the crash, and maybe describe the call
> flow/backtrace that triggers it.
> 
> PS:
> I wasn't able to reproduce the crash
> 
> > 
> > Signed-off-by: Jay Zhou <address@hidden>
> > ---
> >  hw/virtio/vhost.c | 27 +++++++++++++++++++++++----
> >  1 file changed, 23 insertions(+), 4 deletions(-)
> > 
> > diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c
> > index 59a32e9..e45f5e2 100644
> > --- a/hw/virtio/vhost.c
> > +++ b/hw/virtio/vhost.c
> > @@ -1234,6 +1234,18 @@ static void vhost_virtqueue_cleanup(struct vhost_virtqueue *vq)
> >      event_notifier_cleanup(&vq->masked_notifier);
> >  }
> >  
> > +static bool vhost_dev_used_memslots_is_exceeded(struct vhost_dev *hdev)
> > +{
> > +    if (hdev->vhost_ops->vhost_get_used_memslots() >
> > +        hdev->vhost_ops->vhost_backend_memslots_limit(hdev)) {
> > +        error_report("vhost backend memory slots limit is less"
> > +                " than current number of present memory slots");
> > +        return true;
> > +    }
> > +
> > +    return false;
> > +}
> > +
> >  int vhost_dev_init(struct vhost_dev *hdev, void *opaque,
> >                     VhostBackendType backend_type, uint32_t busyloop_timeout)
> >  {
> > @@ -1252,10 +1264,7 @@ int vhost_dev_init(struct vhost_dev *hdev, void *opaque,
> >          goto fail;
> >      }
> >  
> > -    if (hdev->vhost_ops->vhost_get_used_memslots() >
> > -        hdev->vhost_ops->vhost_backend_memslots_limit(hdev)) {
> > -        error_report("vhost backend memory slots limit is less"
> > -                " than current number of present memory slots");
> > +    if (vhost_dev_used_memslots_is_exceeded(hdev)) {
> why do you keep this check?
> it seems it will always be false here
> 
> 
> 
> >          r = -1;
> >          goto fail;
> >      }
> > @@ -1341,6 +1350,16 @@ int vhost_dev_init(struct vhost_dev *hdev, void *opaque,
> >      hdev->memory_changed = false;
> >      memory_listener_register(&hdev->memory_listener, &address_space_memory);
> >      QLIST_INSERT_HEAD(&vhost_devices, hdev, entry);
> > +
> > +    if (vhost_dev_used_memslots_is_exceeded(hdev)) {
> > +        r = -1;
> > +        if (busyloop_timeout) {
> > +            goto fail_busyloop;
> > +        } else {
> > +            goto fail;
> > +        }
> > +    }
> seems to be the right thing to do, since after registering the listener
> for the first time, used_memslots will be updated to the actual value.
> 
> 
> I did some testing and without this hunk/patch
> 
> on 'device_add  virtio-net-pci,netdev=net0' qemu prints:
> 
> qemu-system-x86_64: vhost_set_mem_table failed: Argument list too long (7)
> qemu-system-x86_64: unable to start vhost net: 7: falling back on userspace virtio
> 
> and the network is operational in the guest, but with this patch
> 
> "netdev_add ...,vhost-on" prints:
> 
> vhost backend memory slots limit is less than current number of present memory slots
> vhost-net requested but could not be initialized
> 
> and the following "device_add virtio-net-pci,netdev=net0" prints:
> 
> TUNSETOFFLOAD ioctl() failed: Bad file descriptor
> TUNSETOFFLOAD ioctl() failed: Bad file descriptor
> 
> the adapter is still hot-plugged, but guest networking is broken (it can't
> get an IP address via DHCP)
> 
> so the patch seems to introduce a regression, or something is broken
> elsewhere and this patch merely exposes the issue.
> I'm not sure what QEMU's reaction should be in this case,
>  i.e. when netdev_add fails:
>     1: should we fail the follow-up device_add, or
>     2: make it fall back to userspace virtio
> 
> I'd go for #2.
> Michael, what's your take on it?

OK, but there's a vhost force flag; if that is set, we definitely should
fail device_add.

Also, hotplug can follow device_add; that case should be handled similarly.
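
Roughly, the policy could look like this (a standalone sketch with invented
names, not the actual net code): if the user asked for vhost with force
semantics, an initialization failure should fail the device outright;
otherwise we can fall back to userspace virtio:

    #include <stdio.h>
    #include <stdbool.h>

    /* invented stand-ins for the real netdev state */
    struct netdev {
        bool vhost_requested;
        bool vhost_force;           /* the vhostforce=on case */
    };

    /* returns 0 on success, -1 if device_add should fail */
    static int plug_nic(const struct netdev *nd, bool vhost_init_ok)
    {
        if (nd->vhost_requested && !vhost_init_ok) {
            if (nd->vhost_force) {
                fprintf(stderr, "vhost required but unavailable\n");
                return -1;          /* fail device_add outright */
            }
            fprintf(stderr, "falling back on userspace virtio\n");
        }
        return 0;                   /* device works, with or without vhost */
    }

    int main(void)
    {
        struct netdev soft = { .vhost_requested = true, .vhost_force = false };
        struct netdev hard = { .vhost_requested = true, .vhost_force = true };

        printf("soft: %d\n", plug_nic(&soft, false));   /* 0: falls back */
        printf("hard: %d\n", plug_nic(&hard, false));   /* -1: fail plug */
        return 0;
    }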

> > +
> >      return 0;
> >  
> >  fail_busyloop:


