
From: Michael S. Tsirkin
Subject: Re: [Qemu-devel] [PATCH] virtio-net: do not start queues that are not enabled by the guest
Date: Thu, 28 Feb 2019 09:08:55 -0500

On Thu, Feb 21, 2019 at 10:18:52AM +0200, Yuri Benditovich wrote:
> On Thu, Feb 21, 2019 at 8:49 AM Jason Wang <address@hidden> wrote:
> >
> >
> > On 2019/2/21 下午2:00, Yuri Benditovich wrote:
> > > On Tue, Feb 19, 2019 at 8:27 AM Jason Wang <address@hidden> wrote:
> > >>
> > >> On 2019/2/19 上午7:34, Michael S. Tsirkin wrote:
> > >>> On Mon, Feb 18, 2019 at 10:49:08PM +0200, Yuri Benditovich wrote:
> > >>>> On Mon, Feb 18, 2019 at 6:39 PM Michael S. Tsirkin <address@hidden> wrote:
> > >>>>> On Mon, Feb 18, 2019 at 11:58:51AM +0200, Yuri Benditovich wrote:
> > >>>>>> On Mon, Feb 18, 2019 at 5:49 AM Jason Wang <address@hidden> wrote:
> > >>>>>>> On 2019/2/13 下午10:51, Yuri Benditovich wrote:
> > >>>>>>>> https://bugzilla.redhat.com/show_bug.cgi?id=1608226
> > >>>>>>>> On startup/link-up in multiqueue configuration the virtio-net
> > >>>>>>>> tries to start all the queues, including those that the guest
> > >>>>>>>> will not enable via VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET.
> > >>>>>>>> If the guest driver does not allocate queues that it will not
> > >>>>>>>> use (for example, the Windows driver does not) and the number of
> > >>>>>>>> actually used queues is less than the maximal number supported
> > >>>>>>>> by the device,
> > >>>>>>> Is this a requirement of e.g. NDIS? If not, could we simply
> > >>>>>>> allocate all the queues in this case? This is usually what the
> > >>>>>>> normal Linux driver does.
> > >>>>>>>
> > >>>>>>>
> > >>>>>>>> this causes vhost_net_start to fail and actually disables vhost
> > >>>>>>>> for all the queues, reducing performance.
> > >>>>>>>> The current commit fixes this: initially only the first queue is
> > >>>>>>>> started; upon VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET, all the queues
> > >>>>>>>> requested by the guest are started.
> > >>>>>>>>
> > >>>>>>>> Signed-off-by: Yuri Benditovich <address@hidden>
> > >>>>>>>> ---
> > >>>>>>>>     hw/net/virtio-net.c | 7 +++++--
> > >>>>>>>>     1 file changed, 5 insertions(+), 2 deletions(-)
> > >>>>>>>>
> > >>>>>>>> diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> > >>>>>>>> index 3f319ef723..d3b1ac6d3a 100644
> > >>>>>>>> --- a/hw/net/virtio-net.c
> > >>>>>>>> +++ b/hw/net/virtio-net.c
> > >>>>>>>> @@ -174,7 +174,7 @@ static void virtio_net_vhost_status(VirtIONet *n, uint8_t status)
> > >>>>>>>>     {
> > >>>>>>>>         VirtIODevice *vdev = VIRTIO_DEVICE(n);
> > >>>>>>>>         NetClientState *nc = qemu_get_queue(n->nic);
> > >>>>>>>> -    int queues = n->multiqueue ? n->max_queues : 1;
> > >>>>>>>> +    int queues = n->multiqueue ? n->curr_queues : 1;
> > >>>>>>>>
> > >>>>>>>>         if (!get_vhost_net(nc->peer)) {
> > >>>>>>>>             return;
> > >>>>>>>> @@ -1016,9 +1016,12 @@ static int virtio_net_handle_mq(VirtIONet *n, uint8_t cmd,
> > >>>>>>>>             return VIRTIO_NET_ERR;
> > >>>>>>>>         }
> > >>>>>>>>
> > >>>>>>>> -    n->curr_queues = queues;
> > >>>>>>>>     /* stop the backend before changing the number of queues to avoid handling a
> > >>>>>>>>          * disabled queue */
> > >>>>>>>> +    virtio_net_set_status(vdev, 0);
> > >>>>>>> Any reason for doing this?
> > >>>>>> I think there are 2 reasons:
> > >>>>>> 1. The spec does not require guest SW to allocate unused queues.
> > >>>>>> 2. We would spend the guest's physical memory just to make vhost
> > >>>>>> happy when it touches queues that it should not use.
> > >>>>>>
> > >>>>>> Thanks,
> > >>>>>> Yuri Benditovich
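
For scale on reason 2: with the virtio 1.0 split ring layout, a virtqueue of
N entries needs 16*N bytes of descriptor table, 6 + 2*N bytes of available
ring and 6 + 8*N bytes of used ring. For a typical N = 256 that is
4096 + 518 + 2054, roughly 6.5 KiB of guest memory per virtqueue, or about
13 KiB per unused Rx/Tx pair, before any driver-side bookkeeping.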
> > >>>>> The spec also says:
> > >>>>>           queue_enable  The driver uses this to selectively
> > >>>>>           prevent the device from executing requests from this
> > >>>>>           virtqueue. 1 - enabled; 0 - disabled.
> > >>>>>
> > >>>>> While this is not a conformance clause, it strongly implies that
> > >>>>> queues which are not enabled are never accessed by the device.
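
A minimal self-contained C sketch of the rule that spec text implies; the
struct and function names are illustrative, not QEMU's:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Toy model: a device must not touch a virtqueue the driver has not
     * enabled (queue_enable == 0) or never configured (ring address 0). */
    struct vq {
        bool enabled;        /* mirrors the queue_enable register */
        uint64_t desc_addr;  /* 0 if the driver never wrote a ring address */
    };

    static bool device_may_access(const struct vq *q)
    {
        return q->enabled && q->desc_addr != 0;
    }

    int main(void)
    {
        /* e.g. a Windows guest enabling only the first of four pairs */
        struct vq queues[4] = { { true, 0x10000 } };  /* rest stay zeroed */

        for (int i = 0; i < 4; i++) {
            printf("vq %d: %s\n", i,
                   device_may_access(&queues[i]) ? "process" : "skip");
        }
        return 0;
    }

Under this rule, pointing vhost at queues 1..3 above is exactly the kind of
access the patch is trying to prevent.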
> > >>>>>
> > >>>>> Yuri, I am guessing you are not enabling these unused queues, right?
> > >>>> Of course, we (Windows driver) do not.
> > >>>> The virtio-net code passes max_queues to vhost, and this causes
> > >>>> vhost to try accessing all the queues, fail on the unused ones, and
> > >>>> finally leave vhost disabled altogether.
> > >>> Jason, at least for 1.0 accessing disabled queues looks like a spec
> > >>> violation. What do you think?
> > >>
> > >> Yes, but there are some issues:
> > >>
> > >> - How to detect a disabled queue for a 0.9x device? Looks like there's
> > >> no way according to the spec, so the device must assume all queues
> > >> were enabled.
> > > Can you please add a few words - what is a 0.9 device (probably this is
> > > more about the driver), and what is the problem with it?
> >
> >
> > It's not a net-specific issue. A 0.9x device is a legacy device as
> > defined in the spec. We don't have a method to disable and enable a
> > specific queue there. Michael said we can treat a queue address of 0 as
> > disabled, but there's still a question of how to enable it. The spec is
> > unclear, and it was too late to add things for legacy devices. For a 1.0
> > device we have queue_enable, but its implementation is incomplete; before
> > it can work with vhost correctly, we probably need to add things to make
> > it work.
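
For reference, the missing 1.0 piece Jason describes could take roughly this
shape: a per-queue callback that the transport invokes when the driver
writes 1 to queue_enable, giving virtio-net a point at which to start the
matching vhost queue pair. This is a sketch of a possible interface, not
code that existed in QEMU at the time:

    #include <stdint.h>

    typedef struct VirtIODevice VirtIODevice;  /* stub for the sketch */

    /* Hypothetical addition to the device class: called by the transport
     * (e.g. virtio-pci) on a driver write of 1 to queue_enable. */
    typedef struct {
        void (*queue_enable)(VirtIODevice *vdev, uint32_t queue_index);
    } VirtioDeviceClassSketch;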
> >
> >
> > >
> > >> - For 1.0, if we depend on queue_enable, we should implement the
> > >> callback for vhost, I think. Otherwise it's still buggy.
> > >>
> > >> So it looks tricky to enable and disable queues through set_status.
> > > If I succeed in modifying the patch in such a way that it acts only in
> > > the 'target' case, i.e. only if some of the queues are not initialized
> > > (at the time of driver_ok), will it be safer?
> >
> >
> > For a 1.0 device we can fix queue_enable, but for a 0.9x device how do
> > you enable one specific queue in this case? (by setting the status?)
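
For the legacy half of that question, one hedged possibility building on the
"queue address 0" idea above: 0.9x has no queue_enable register, but a queue
whose ring address was never written by the driver still reads back as 0, so
that can stand in for "disabled". A sketch (virtio_queue_get_desc_addr() is
an existing QEMU helper; the wrapper around it is illustrative):

    /* Assumes QEMU internals; illustrative only. */
    #include "hw/virtio/virtio.h"

    static bool legacy_queue_configured(VirtIODevice *vdev, int index)
    {
        /* For a legacy device, a zero ring address means the driver never
         * set this queue up, so treat it as disabled. */
        return virtio_queue_get_desc_addr(vdev, index) != 0;
    }

This answers detection, not enabling: the only remaining signal would be the
driver writing a nonzero ring address, or a full status cycle.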
> >
> 
> Do I understand correctly that for a 0.9 device, in some cases the device
> will receive the _MQ feature set but will not receive
> VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET? Or is the problem different?

No. For 0.9 it is possible that the driver adds buffers before DRIVER_OK.


> > A fundamental question is: what prevents you from just initializing all
> > the queues during driver start? It looks to me this saves a lot of effort
> > compared to allocating queues dynamically.
> >
> 
> This is not so trivial in the Windows driver, as it does not have objects
> for queues that it does not use. The Linux driver first allocates all the
> queues and then adds Rx/Tx to those it will use. The Windows driver first
> decides how many queues it will use, then allocates objects for them and
> initializes them from zero to a fully functional state.
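
A toy sketch of the two initialization orders described above (the helper
names are made up, not from either driver):

    #include <stdio.h>

    #define MAX_PAIRS 8               /* queue pairs offered by the device */

    static void alloc_queue_pair(int i) { printf("allocate pair %d\n", i); }

    int main(void)
    {
        int cpus = 4;

        /* Linux-style: allocate every pair the device offers up front,
         * then tell the device how many to actually use. */
        for (int i = 0; i < MAX_PAIRS; i++)
            alloc_queue_pair(i);
        printf("VQ_PAIRS_SET: %d\n", cpus);

        /* Windows-style: decide first, allocate only what will be used;
         * pairs 4..7 never get guest memory at all. */
        int n = cpus < MAX_PAIRS ? cpus : MAX_PAIRS;
        for (int i = 0; i < n; i++)
            alloc_queue_pair(i);
        printf("VQ_PAIRS_SET: %d\n", n);
        return 0;
    }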
> 
> > Thanks
> >
> >
> > >
> > >> Thanks
> > >>
> > >>
> > >>>>>
> > >>>>>>> Thanks
> > >>>>>>>
> > >>>>>>>
> > >>>>>>>> +
> > >>>>>>>> +    n->curr_queues = queues;
> > >>>>>>>> +
> > >>>>>>>>         virtio_net_set_status(vdev, vdev->status);
> > >>>>>>>>         virtio_net_set_queues(n);
> > >>>>>>>>
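
Putting the two hunks together, virtio_net_handle_mq() after this patch does
the following, in outline (reconstructed from the diff above, with elided
parts marked; not verbatim source):

    static int virtio_net_handle_mq(VirtIONet *n, uint8_t cmd, ...)
    {
        VirtIODevice *vdev = VIRTIO_DEVICE(n);
        /* ... read and validate 'queues' from the control command ... */

        /* stop the backend before changing the number of queues to avoid
         * handling a disabled queue */
        virtio_net_set_status(vdev, 0);

        n->curr_queues = queues;

        virtio_net_set_status(vdev, vdev->status);
        virtio_net_set_queues(n);
        /* ... */
    }

Together with the first hunk (virtio_net_vhost_status() using curr_queues
instead of max_queues), vhost is started only for the queue pairs the guest
actually asked for.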


