qemu-block

Re: [RFC patch 0/1] block: vhost-blk backend


From: Andrey Zhadchenko
Subject: Re: [RFC patch 0/1] block: vhost-blk backend
Date: Wed, 5 Oct 2022 12:14:18 +0300
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Thunderbird/102.1.0

On 10/4/22 21:13, Stefan Hajnoczi wrote:
> On Mon, Jul 25, 2022 at 11:55:26PM +0300, Andrey Zhadchenko wrote:
>> Although QEMU virtio-blk is quite fast, there is still some room for
>> improvement. Disk latency can be reduced if we handle virtio-blk requests
>> in the host kernel, so we avoid a lot of syscalls and context switches.
>>
>> The biggest disadvantage of this vhost-blk flavor is that it only supports
>> the raw format. Luckily, Kirill Thai has proposed a device-mapper driver for
>> the QCOW2 format to attach files as block devices:
>> https://www.spinics.net/lists/kernel/msg4292965.html
>>
>> Also, by using kernel modules we can bypass the iothread limitation and
>> finally scale block request handling across CPUs for high-performance
>> devices. This is planned for the next version.
>
> Hi Andrey,
> Do you have a new version of this patch series that uses multiple
> threads?
>
> I have been playing with vq-IOThread mapping in QEMU and would like to
> benchmark vhost-blk vs QEMU virtio-blk mq IOThreads:
> https://gitlab.com/stefanha/qemu/-/tree/virtio-blk-mq-iothread-prototype
>
> Thanks,
> Stefan

Hi Stefan,

For now my multi-threaded version is only available for the Red Hat 9 5.14.0
kernel. If you really want, you can grab the kernel part from here:
https://lists.openvz.org/pipermail/devel/2022-September/079951.html

For the QEMU part, all you need is to add something like this to
vhost_blk_start():

#define VHOST_SET_NWORKERS _IOW(VHOST_VIRTIO, 0x1F, int)
ioctl(s->vhostfd, VHOST_SET_NWORKERS, &nworkers);
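
Roughly, the idea is something like the following (a minimal sketch only: the
helper name, the vhostfd/nworkers plumbing and the error_setg_errno() error
handling are illustrative assumptions, not the actual patch):

/*
 * Hypothetical sketch: wiring the proposed VHOST_SET_NWORKERS ioctl into
 * vhost_blk_start(). Names and error handling are assumptions, not the
 * real vhost-blk patch.
 */
#include "qemu/osdep.h"
#include "qapi/error.h"
#include <sys/ioctl.h>
#include <linux/vhost.h>

#ifndef VHOST_SET_NWORKERS
#define VHOST_SET_NWORKERS _IOW(VHOST_VIRTIO, 0x1F, int)
#endif

/* Ask the vhost-blk kernel module to spawn 'nworkers' worker threads. */
static int vhost_blk_set_nworkers(int vhostfd, int nworkers, Error **errp)
{
    if (ioctl(vhostfd, VHOST_SET_NWORKERS, &nworkers) < 0) {
        int err = errno;
        error_setg_errno(errp, err, "vhost-blk: VHOST_SET_NWORKERS failed");
        return -err;
    }
    return 0;
}

Presumably this would be called once per device, after the vhost fd is opened
and before the virtqueues are started, with nworkers chosen to match the
number of queues or vCPUs.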

Or you can wait a bit. I should be able to send the second version by the end
of the week (Monday in the worst case).

Thanks,
Andrey


