qemu-block

Re: [PATCH] nvme: fix bit buckets


From: Klaus Jensen
Subject: Re: [PATCH] nvme: fix bit buckets
Date: Mon, 25 Apr 2022 11:59:30 +0200

+qemu-devel

On Apr 22 09:37, Keith Busch wrote:
> We can't just ignore the bit buckets since the data offsets read from
> disk need to be accounted for. We could either split into multiple reads
> for the actual user data requested and skip the buckets, or we could
> have a placeholder for bucket data. Splitting is too much overhead, so
> just allocate a memory region to dump unwanted data.
> 
> Signed-off-by: Keith Busch <kbusch@kernel.org>
> ---
> This came out easier than I thought, so we can ignore my previous
> feature removal patch:
>   https://lists.nongnu.org/archive/html/qemu-block/2022-04/msg00398.html
> 
>  hw/nvme/ctrl.c | 9 +++++----
>  hw/nvme/nvme.h | 1 +
>  2 files changed, 6 insertions(+), 4 deletions(-)
> 
> diff --git a/hw/nvme/ctrl.c b/hw/nvme/ctrl.c
> index 03760ddeae..711c6fac29 100644
> --- a/hw/nvme/ctrl.c
> +++ b/hw/nvme/ctrl.c
> @@ -845,11 +845,11 @@ static uint16_t nvme_map_sgl_data(NvmeCtrl *n, NvmeSg *sg,
>          trans_len = MIN(*len, dlen);
>  
>          if (type == NVME_SGL_DESCR_TYPE_BIT_BUCKET) {
> -            goto next;
> +            addr = n->bitBucket.addr;
> +        } else {
> +            addr = le64_to_cpu(segment[i].addr);
>          }
>  
> -        addr = le64_to_cpu(segment[i].addr);
> -
>          if (UINT64_MAX - addr < dlen) {
>              return NVME_DATA_SGL_LEN_INVALID | NVME_DNR;
>          }
> @@ -859,7 +859,6 @@ static uint16_t nvme_map_sgl_data(NvmeCtrl *n, NvmeSg *sg,
>              return status;
>          }
>  
> -next:
>          *len -= trans_len;
>      }
>  
> @@ -6686,6 +6685,8 @@ static int nvme_init_pci(NvmeCtrl *n, PCIDevice *pci_dev, Error **errp)
>          nvme_init_pmr(n, pci_dev);
>      }
>  
> +    memory_region_init(&n->bitBucket, OBJECT(n), NULL, 0x100000);
> +
>      return 0;
>  }
>  
> diff --git a/hw/nvme/nvme.h b/hw/nvme/nvme.h
> index 739c8b8f79..d59eadc69d 100644
> --- a/hw/nvme/nvme.h
> +++ b/hw/nvme/nvme.h
> @@ -411,6 +411,7 @@ typedef struct NvmeCtrl {
>      PCIDevice    parent_obj;
>      MemoryRegion bar0;
>      MemoryRegion iomem;
> +    MemoryRegion bitBucket;
>      NvmeBar      bar;
>      NvmeParams   params;
>      NvmeBus      bus;
> -- 
> 2.30.2
> 

The approach is neat and simple, but I don't think it has the intended
effect. The memory region addr is just left at 0x0, so we end up mapping
that directly into the qsg, and in my test setup this basically does DMA
to the admin completion queue, which happens to sit at 0x0.
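
To illustrate (rough sketch, not tested): memory_region_init() only creates
the container, it is never memory_region_add_subregion()'d into system
memory, so the region's addr field stays 0, and the mapping path then
effectively queues guest physical address 0 into the qsg:

    /* nvme_init_pci(): region created, but never mapped anywhere,
     * so n->bitBucket.addr remains 0x0 */
    memory_region_init(&n->bitBucket, OBJECT(n), NULL, 0x100000);

    /* nvme_map_sgl_data(): a bit bucket descriptor now resolves to
     * addr = n->bitBucket.addr = 0x0, and nvme_map_addr() effectively does */
    qemu_sglist_add(&sg->qsg, 0x0, trans_len);
    /* ... which is DMA to whatever the guest has at physical address 0 */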

I would have liked to handle it like we do for CMB addresses, and
reserve some address known to the device (i.e. remapping to a locally
allocated buffer), but we can't easily do that because of the iov/qsg
duality thingie. The dma helpers won't work with it.
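
For comparison, something like the following is roughly the shape I mean
(purely hypothetical sketch; n->bitbucket_buf is made up, while
nvme_addr_is_cmb(), nvme_addr_to_cmb(), qemu_iovec_add() and
qemu_sglist_add() are the existing helpers):

    if (type == NVME_SGL_DESCR_TYPE_BIT_BUCKET) {
        /* dump the span into a locally allocated scratch buffer */
        qemu_iovec_add(&sg->iov, n->bitbucket_buf, trans_len);
    } else if (nvme_addr_is_cmb(n, addr)) {
        /* CMB: host pointer into the device's own buffer -> iov path */
        qemu_iovec_add(&sg->iov, nvme_addr_to_cmb(n, addr), trans_len);
    } else {
        /* ordinary host memory: guest physical address -> qsg/DMA path */
        qemu_sglist_add(&sg->qsg, addr, trans_len);
    }

The problem is that an NvmeSg is either a qsg or an iov (it is a union),
and dma_blk_read()/blk_aio_preadv() each consume only one of the two, so
bucket spans can't be mixed into a request that is otherwise doing DMA.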

For now, I think we need to just rip out the bit bucket support.
