qemu-block
From: Kevin Wolf
Subject: Re: [RFC v3 1/8] blkio: add io_uring block driver using libblkio
Date: Wed, 3 Aug 2022 15:30:38 +0200

Am 03.08.2022 um 14:25 hat Peter Krempa geschrieben:
> On Wed, Jul 27, 2022 at 21:33:40 +0200, Kevin Wolf wrote:
> > Am 08.07.2022 um 06:17 hat Stefan Hajnoczi geschrieben:
> > > libblkio (https://gitlab.com/libblkio/libblkio/) is a library for
> > > high-performance disk I/O. It currently supports io_uring and
> > > virtio-blk-vhost-vdpa with additional drivers under development.
> > > 
> > > One of the reasons for developing libblkio is that other applications
> > > besides QEMU can use it. This will be particularly useful for
> > > vhost-user-blk which applications may wish to use for connecting to
> > > qemu-storage-daemon.
> > > 
> > > libblkio also gives us an opportunity to develop in Rust behind a C API
> > > that is easy to consume from QEMU.
> > > 
> > > This commit adds io_uring and virtio-blk-vhost-vdpa BlockDrivers to QEMU
> > > using libblkio. It will be easy to add other libblkio drivers since they
> > > will share the majority of code.
> > > 
> > > For now I/O buffers are copied through bounce buffers if the libblkio
> > > driver requires it. Later commits add an optimization for
> > > pre-registering guest RAM to avoid bounce buffers.
> > > 
> > > The syntax is:
> > > 
> > >   --blockdev io_uring,node-name=drive0,filename=test.img,readonly=on|off,cache.direct=on|off
> > > 
> > > and:
> > > 
> > >   --blockdev virtio-blk-vhost-vdpa,node-name=drive0,path=/dev/vdpa...,readonly=on|off
> > > 
> > > Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> > 
> > The subject line implies only io_uring, but you actually add vhost-vdpa
> > support, too. I think the subject line should be changed.
> > 
> > I think it would also make sense to already implement support for
> > vhost-user-blk on the QEMU side even if support isn't compiled in
> > libblkio by default and opening vhost-user-blk images would therefore
> > always fail with a default build.
> > 
> > But then you could run QEMU with a custom build of libblkio to make use
> > of it without patching QEMU. This is probably useful for getting libvirt
> > support for using a storage daemon implemented without having to wait
> > for another QEMU release. (Peter, do you have any opinion on this?)
> 
> How will this work in terms of detecting whether that feature is
> present?
> 
> The issue is that libvirt caches the capabilities of qemu, and the cache
> is invalidated based on the timestamp of the qemu binary (and a few
> other properties, mostly of the host kernel and CPU). When a backend
> library is updated or changed, this probably means that libvirt will
> not be able to detect that qemu gained support.

How is this done with other libraries? We use a few other storage
libraries, and depending on their version, we may or may not be able to
provide some features. I assume we have always just ignored this: if you
don't have the right version, you get runtime errors.
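The cache staleness problem Peter describes can be sketched roughly like
this (a minimal illustration, not libvirt's actual implementation; the
function name is hypothetical):

```python
import os

def cache_is_valid(qemu_binary: str, cached_mtime: float) -> bool:
    """Sketch of a timestamp-keyed capability cache check: the cached
    capabilities are considered fresh as long as the QEMU binary itself
    has not changed on disk."""
    return os.stat(qemu_binary).st_mtime == cached_mtime

# The blind spot: replacing a backend library such as libblkio does not
# touch the QEMU binary, so a cache keyed on the binary's mtime still
# looks valid and any drivers gained through the library update go
# undetected until the cache is invalidated for some other reason.
```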

> In case qemu lies about the support even though the backend library
> doesn't support it, then we have a problem in not even being able to
> see whether we can use it.

I'm not sure if I would call it "lying", it's just that we have a static
QAPI schema that can only represent what the QEMU binary could
theoretically handle, but not dynamically what is actually available at
runtime.

Another option would be to either add an API to libblkio that returns a
list of supported drivers or probe it with a pair of blkio_create() and
blkio_destroy() before registering the QEMU drivers. QEMU and qemu-img
can print a list of registered read-write and read-only block drivers
and I think libvirt has been using that?

Of course, it doesn't change anything about the fact that this list
can change between two QEMU runs if you replace the library, but don't
touch QEMU.
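The probe-and-destroy idea above could look roughly like the following
sketch, assuming libblkio's C API (int blkio_create(const char *, struct
blkio **) and void blkio_destroy(struct blkio **)) loaded via ctypes; the
driver names are the ones mentioned in this thread, and the function
degrades to an empty list when the library isn't installed:

```python
import ctypes
import ctypes.util

def probe_blkio_drivers(candidates=("io_uring",
                                    "virtio-blk-vhost-vdpa",
                                    "virtio-blk-vhost-user")):
    """Return the subset of candidate driver names that the installed
    libblkio can instantiate, or an empty list if the library is
    unavailable."""
    libname = ctypes.util.find_library("blkio")
    if libname is None:
        return []  # libblkio not installed: no drivers available
    lib = ctypes.CDLL(libname)
    lib.blkio_create.argtypes = [ctypes.c_char_p,
                                 ctypes.POINTER(ctypes.c_void_p)]
    lib.blkio_create.restype = ctypes.c_int
    lib.blkio_destroy.argtypes = [ctypes.POINTER(ctypes.c_void_p)]
    lib.blkio_destroy.restype = None

    available = []
    for name in candidates:
        b = ctypes.c_void_p()
        # blkio_create() fails for drivers that were not compiled into
        # the library, which is exactly the signal being probed for.
        if lib.blkio_create(name.encode(), ctypes.byref(b)) == 0:
            available.append(name)
            lib.blkio_destroy(ctypes.byref(b))
    return available
```

This only illustrates the probing pattern from the paragraph above; in
QEMU itself the equivalent check would happen in C at driver
registration time.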

Kevin
