qemu-devel
From: Roman Kagan
Subject: Re: [Qemu-devel] [PATCH 2/2] virtio: fix IO request length in virtio SCSI/block
Date: Tue, 19 Dec 2017 11:57:46 +0300
User-agent: Mutt/1.9.1 (2017-09-22)

On Mon, Dec 18, 2017 at 10:42:35PM +0300, Denis V. Lunev wrote:
> On 12/18/2017 10:35 PM, Felipe Franciosi wrote:
> >> On 18 Dec 2017, at 16:16, Harris, James R <address@hidden> wrote:
> >>> On Dec 18, 2017, at 6:38 AM, Stefan Hajnoczi <address@hidden> wrote:
> >>> On Fri, Dec 15, 2017 at 06:02:50PM +0300, Denis V. Lunev wrote:
> >>>> diff --git a/include/hw/compat.h b/include/hw/compat.h
> >>>> index 026fee9..b9be5d7 100644
> >>>> --- a/include/hw/compat.h
> >>>> +++ b/include/hw/compat.h
> >>>> @@ -2,6 +2,23 @@
> >>>> #define HW_COMPAT_H
> >>>>
> >>>> #define HW_COMPAT_2_11 \
> >>>> +    {\
> >>>> +        .driver   = "virtio-blk-device",\
> >>>> +        .property = "max_segments",\
> >>>> +        .value    = "126",\
> >>>> +    },{\
> >>>> +        .driver   = "vhost-scsi",\
> >>>> +        .property = "max_segments",\
> >>>> +        .value    = "126",\
> >>>> +    },{\
> >>>> +        .driver   = "vhost-user-scsi",\
> >>>> +        .property = "max_segments",\
> >>>> +        .value    = "126",\
> >>> Existing vhost-user-scsi slave programs might not expect up to 1022
> >>> segments.  Hopefully we can get away with this change since there are
> >>> relatively few vhost-user-scsi slave programs.
> >>>
> >>> CCed Felipe (Nutanix) and Jim (SPDK) in case they have comments.
> >> SPDK vhost-user targets only expect max 128 segments.  They also 
> >> pre-allocate I/O task structures when QEMU connects to the vhost-user 
> >> device.
> >>
> >> Supporting up to 1022 segments would result in significantly higher memory 
> >> usage, reduction in I/O queue depth processed by the vhost-user target, or 
> >> having to dynamically allocate I/O task structures - none of which are 
> >> ideal.
> >>
> >> What if this was just bumped from 126 to 128?  I guess I’m trying to 
> >> understand the level of guest and host I/O performance that is gained with 
> >> this patch.  One I/O per 512KB vs. one I/O per 4MB - we are still only 
> >> talking about a few hundred IO/s difference.
> > SeaBIOS also makes the assumption that the queue size is not bigger than 
> > 128 elements.
> > https://review.coreboot.org/cgit/seabios.git/tree/src/hw/virtio-ring.h#n23
> >
> > Perhaps a better approach is to make the value configurable (ie. add the 
> > "max_segments" property), but set the default to 128-2. In addition to what 
> > Jim pointed out, I think there may be other legacy front end drivers which 
> > can assume the ring will be at most 128 entries in size.
> >
> > With that, hypervisors can choose to bump the value higher if it's known to 
> > be safe for their host+guest configuration.
> 
> This should not be a problem at all, IMHO. The guest is not obliged
> to use requests of the maximum possible size. If the guest issues a
> request with 128 elements, fine; QEMU is ready for that.

QEMU is, but vhost-user slaves may not be.  And there seems to be no
vhost-user protocol message type that would let the master and the
slave negotiate this value.

So apparently the default for vhost-user-scsi has to stay the same in
order not to break existing slaves.  I guess having it tunable via a
property may still turn out useful.

Roman.
