From: Eric Blake
Subject: Re: [Qemu-block] [PATCH v6 3/9] block: Add VFIO based NVMe driver
Date: Tue, 13 Feb 2018 13:18:48 -0600
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Thunderbird/52.6.0

On 01/16/2018 12:08 AM, Fam Zheng wrote:
> This is a new protocol driver that exclusively opens a host NVMe
> controller through VFIO. It achieves better latency than linux-aio by
> completely bypassing the host kernel's vfs/block layer.

>      $rw-$bs-$iodepth  linux-aio     nvme://
>      ----------------------------------------
>      randread-4k-1     10.5k         21.6k
>      randread-512k-1   745           1591
>      randwrite-4k-1    30.7k         37.0k
>      randwrite-512k-1  1945          1980
>
>      (unit: IOPS)
>
> The driver also integrates with the iothread polling mechanism.
>
> This patch is co-authored by Paolo and me.
>
> Signed-off-by: Paolo Bonzini <address@hidden>
> Signed-off-by: Fam Zheng <address@hidden>
> Message-Id: <address@hidden>
> ---
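
As an aside, the iothread polling integration mentioned in the commit
message boils down to registering the driver's IRQ notifier together with
a poll callback. A rough sketch of what that looks like (function and
field names as in this patch's block/nvme.c; untested):

static void nvme_attach_aio_context(BlockDriverState *bs,
                                    AioContext *new_context)
{
    BDRVNVMeState *s = bs->opaque;

    /* The poll callback lets the AioContext poll the NVMe completion
     * queue directly instead of waiting for the interrupt path, which
     * is where the latency win over linux-aio comes from. */
    aio_set_event_notifier(new_context, &s->irq_notifier,
                           false /* is_external */,
                           nvme_handle_event, nvme_poll_cb);
}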

Sorry for not noticing sooner, but

> +static int64_t coroutine_fn nvme_co_get_block_status(BlockDriverState *bs,
> +                                                     int64_t sector_num,
> +                                                     int nb_sectors, int *pnum,
> +                                                     BlockDriverState **file)
> +{
> +    *pnum = nb_sectors;
> +    *file = bs;
> +
> +    return BDRV_BLOCK_ALLOCATED | BDRV_BLOCK_OFFSET_VALID |
> +           (sector_num << BDRV_SECTOR_BITS);

This is wrong. Drivers should only ever return BDRV_BLOCK_DATA (which io.c then _adds_ BDRV_BLOCK_ALLOCATED to, as needed). I'll fix it up as part of my byte-based block status series (v8 coming up soon).
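
For concreteness, the corrected function would look something like this
(untested sketch; the byte-based series reshapes the callback anyway):

static int64_t coroutine_fn nvme_co_get_block_status(BlockDriverState *bs,
                                                     int64_t sector_num,
                                                     int nb_sectors, int *pnum,
                                                     BlockDriverState **file)
{
    *pnum = nb_sectors;
    *file = bs;

    /* Report data at a valid offset; io.c adds BDRV_BLOCK_ALLOCATED
     * on top of this as needed. */
    return BDRV_BLOCK_DATA | BDRV_BLOCK_OFFSET_VALID |
           (sector_num << BDRV_SECTOR_BITS);
}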

--
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org


