From: Stefan Hajnoczi
Subject: Re: [PATCH v2 0/2] virtio-blk: add iothread-vq-mapping parameter
Date: Tue, 7 Nov 2023 11:29:55 +0800

On Thu, Nov 02, 2023 at 03:10:52PM +0100, Kevin Wolf wrote:
> Am 18.09.2023 um 18:16 hat Stefan Hajnoczi geschrieben:
> > virtio-blk and virtio-scsi devices need a way to specify the mapping
> > between IOThreads and virtqueues. At the moment all virtqueues are
> > assigned to a single IOThread or the main loop. This single thread can
> > be a CPU bottleneck, so it is necessary to allow finer-grained
> > assignment to spread the load. With this series applied, "pidstat -t 1"
> > shows that guests with -smp 2 or higher are able to exploit multiple
> > IOThreads.
> > 
> > The command-line syntax that this series introduces for the new
> > iothread-vq-mapping property is as follows:
> > 
> >   --device '{"driver":"virtio-blk-pci","iothread-vq-mapping":[{"iothread":"iothread0","vqs":[0,1,2]},...]},...'
> > 
> > IOThreads are specified by name and virtqueues are specified by 0-based
> > index.
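> > 
> > For illustration, a complete invocation with two IOThreads and an
> > explicit mapping could look like the following (the blockdev definition
> > and the four-queue split are made up for this example, they are not
> > part of the series):
> > 
> >   --object iothread,id=iothread0 \
> >   --object iothread,id=iothread1 \
> >   --blockdev '{"driver":"file","filename":"disk.img","node-name":"file0"}' \
> >   --blockdev '{"driver":"raw","file":"file0","node-name":"disk0"}' \
> >   --device '{"driver":"virtio-blk-pci","drive":"disk0","num-queues":4,"iothread-vq-mapping":[{"iothread":"iothread0","vqs":[0,1]},{"iothread":"iothread1","vqs":[2,3]}]}'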
> > 
> > It will be common to simply assign virtqueues round-robin across a set
> > of IOThreads. A convenient syntax that does not require specifying
> > individual virtqueue indices is available:
> > 
> >   --device '{"driver":"virtio-blk-pci","iothread-vq-mapping":[{"iothread":"iothread0"},{"iothread":"iothread1"},...]},...'
> > 
> > There is no way to reassign virtqueues at runtime and I expect that to be a
> > very rare requirement.
> > 
> > Note that JSON --device syntax is required for the iothread-vq-mapping
> > parameter because it's non-scalar.
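> > 
> > (Scalar properties can still use the traditional shorthand syntax, e.g.
> > -device virtio-blk-pci,drive=disk0,iothread=iothread0 with the existing
> > single-IOThread property, but a list-of-objects property like
> > iothread-vq-mapping cannot be expressed that way.)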
> > 
> > Based-on: 20230912231037.826804-1-stefanha@redhat.com ("[PATCH v3 0/5] 
> > block-backend: process I/O in the current AioContext")
> 
> Does this strictly depend on patch 5/5 of that series, or would it just
> be a missed opportunity for optimisation by unnecessarily running some
> requests from a different thread?

I looked at the issue with PATCH 5/5 more and didn't find a minimal
solution that I can implement today for soft freeze. There is too much
inconsistency among blk_remove_bs() callers in whether or not the
AioContext is acquired:

block/block-backend.c:        blk_remove_bs(blk); <- blk_unref (can't tell if AioContext is acquired)
block/block-backend.c:            blk_remove_bs(blk); (acquired)
block/monitor/block-hmp-cmds.c:        blk_remove_bs(blk); (acquired)
block/qapi-sysemu.c:    blk_remove_bs(blk); (acquired)
block/qapi-sysemu.c:            blk_remove_bs(blk); (not acquired)
qemu-nbd.c:        blk_remove_bs(blk); (not acquired)
tests/unit/test-block-iothread.c:    blk_remove_bs(blk); (acquired)
tests/unit/test-blockjob.c:    blk_remove_bs(blk); (sometimes acquired, sometimes not)
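
For illustration, the two call-site patterns that coexist today look
roughly like this (the wrapper functions are made up for this example,
they are not proposed code):

  #include "qemu/osdep.h"
  #include "block/aio.h"
  #include "sysemu/block-backend.h"

  /* Pattern used by the "(acquired)" call sites above: take the
   * AioContext lock around the removal. */
  static void remove_bs_locked(BlockBackend *blk)
  {
      AioContext *ctx = blk_get_aio_context(blk);

      aio_context_acquire(ctx);
      blk_remove_bs(blk);
      aio_context_release(ctx);
  }

  /* Pattern used by the "(not acquired)" call sites: no lock is taken,
   * which only works out when the BDS lives in the main loop AioContext
   * (see below). */
  static void remove_bs_unlocked(BlockBackend *blk)
  {
      blk_remove_bs(blk);
  }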

They usually get away with it because BDRV_POLL_WHILE() only unlocks the
AioContext when the BlockDriverState's AioContext is not the current
thread's home context. This means main loop code works when the
AioContext is not acquired as long as the BDS AioContext is the main
loop AioContext.

The solution I have confidence in is to stop using the AioContext lock,
but it will take more time to refactor the SCSI layer (the last real
user of the AioContext lock).

I'm afraid iothread-vq-mapping can't make it into QEMU 8.2.

Stefan
