Re: [Qemu-block] [PATCH v3 2/6] block: Add VFIO based NVMe driver


From: Fam Zheng
Subject: Re: [Qemu-block] [PATCH v3 2/6] block: Add VFIO based NVMe driver
Date: Fri, 7 Jul 2017 07:27:27 +0800
User-agent: Mutt/1.8.0 (2017-02-23)

On Thu, 07/06 13:38, Keith Busch wrote:
> On Wed, Jul 05, 2017 at 09:36:31PM +0800, Fam Zheng wrote:
> > This is a new protocol driver that exclusively opens a host NVMe
> > controller through VFIO. It achieves better latency than linux-aio by
> > completely bypassing the host kernel vfs/block layer.
> > 
> >     $rw-$bs-$iodepth  linux-aio     nvme://
> >     ----------------------------------------
> >     randread-4k-1     8269          8851
> >     randread-512k-1   584           610
> >     randwrite-4k-1    28601         34649
> >     randwrite-512k-1  1809          1975
> > 
> > The driver also integrates with the polling mechanism of iothread.
> > 
> > This patch is co-authored by Paolo and me.
> > 
> > Signed-off-by: Fam Zheng <address@hidden>
> 
> I haven't had much time to do a thorough review, but from the brief look
> so far the implementation looks fine to me.

Thanks for taking a look!
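
For readers unfamiliar with what "opens a host NVMe controller through VFIO"
involves, here is a minimal sketch of the generic VFIO userspace flow. This
illustrates only the kernel's VFIO type1 API, not the code in the patch; the
IOMMU group number and PCI address below are placeholders.

    #include <fcntl.h>
    #include <sys/ioctl.h>
    #include <linux/vfio.h>

    /* Sketch: claim a host PCI device (here, an NVMe controller) through
     * VFIO.  "/dev/vfio/26" and "0000:01:00.0" are placeholder values. */
    static int open_nvme_via_vfio(void)
    {
        int container = open("/dev/vfio/vfio", O_RDWR);
        int group = open("/dev/vfio/26", O_RDWR); /* device's IOMMU group */

        if (container < 0 || group < 0) {
            return -1;
        }

        /* Attach the group to the container, then pick the type1 IOMMU model. */
        ioctl(group, VFIO_GROUP_SET_CONTAINER, &container);
        ioctl(container, VFIO_SET_IOMMU, VFIO_TYPE1_IOMMU);

        /* The returned device fd gives access to the controller's PCI
         * regions; BAR0 (NVMe registers and queue doorbells) can then be
         * mmap()ed and driven directly from userspace, bypassing the host
         * kernel block layer. */
        return ioctl(group, VFIO_GROUP_GET_DEVICE_FD, "0000:01:00.0");
    }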

> 
> I am wondering, though, if an NVMe vfio driver can be done as its own
> program that qemu can link to. The SPDK driver comes to mind as such an
> example, but it may create undesirable dependencies.

Yes, good question. I will take a look at the current SPDK driver codebase to
see if it can be linked this way. When I started this work, SPDK didn't work
with guest memory, because it required applications to use its own
hugepage-backed allocators. That may have changed since then; I know it has
gained a vhost-user-scsi implementation (but that is a different story,
together with vhost-user-blk).
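
To make the memory point concrete: with VFIO, any userspace buffer (including
guest RAM) can be registered for device DMA at a chosen IOVA, which is why the
driver can hand guest memory straight to the controller. The sketch below
shows only the generic VFIO type1 ioctl, not SPDK or the patch's code;
'container' is assumed to be the VFIO container fd from the setup step.

    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <linux/vfio.h>

    /* Sketch: register an arbitrary host buffer (e.g. a piece of guest RAM)
     * for DMA at the given IOVA.  Nothing here cares how the buffer was
     * allocated, unlike SPDK's hugepage-allocator requirement at the time. */
    static int vfio_dma_map(int container, void *host_addr,
                            uint64_t iova, uint64_t size)
    {
        struct vfio_iommu_type1_dma_map map = {
            .argsz = sizeof(map),
            .flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
            .vaddr = (uintptr_t)host_addr,
            .iova  = iova,
            .size  = size,
        };
        return ioctl(container, VFIO_IOMMU_MAP_DMA, &map);
    }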

Fam


