From: Dmitry Fomichev
Subject: Re: [PATCH v11 00/13] hw/block/nvme: Support Namespace Types and Zoned Namespace Command Set
Date: Sun, 7 Feb 2021 23:41:45 +0000
User-agent: Evolution 3.38.2 (3.38.2-1.fc33)

On Fri, 2021-02-05 at 11:39 +0100, Klaus Jensen wrote:
> On Feb  5 11:10, info@dantalion.nl wrote:
> > Hello,
> > 
> > Thanks for this, I got everything working including the new device types
> > (nvme-ns, nvme-subsys). I think I have found a small bug and do not know
> > where to report this.
> > 
> 
> This is a good way to report it ;)
> 
> > The value of the nvme device property zoned.append_size_limit is not
> > sanity checked; you can set it to invalid values such as 128.
> > 
> > This will later result in errors when trying to initialize the device:
> > Device not ready; aborting initialisation, CSTS=0x2
> > Removing after probe failure status: -19
> > 
> 
> Yeah. We can at least check that append_size_limit is at least 4k. That
> might still be too small if we run on configurations with larger page
> sizes, and then we can't figure that out until the device is enabled by
> the host anyway. But we can make it a bit more user-friendly in the
> common case.

The current code from nvme-next does validate the ZASL value. I tried to set
it to 128 and this results in an error and the namespace doesn't appear in
the guest. The hard minimum is currently the page size.
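
For reference, a rough sketch of the kind of invocation used here (the file
name, ids and sizes are placeholders; the property names are the ones from
this series / current nvme-next):

  qemu-system-x86_64 ... \
    -drive id=nvm-1,file=zns.raw,format=raw,if=none \
    -device nvme,id=nvme0,serial=deadbeef,zoned.append_size_limit=4096 \
    -device nvme-ns,drive=nvm-1,bus=nvme0,nsid=1,zoned=true,zoned.zone_size=64M,zoned.zone_capacity=62M

With zoned.append_size_limit=128, as in the report above, the controller
fails that check and the namespace is not usable from the guest.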

> 
> > Additionally, `cat /sys/block/nvmeXnX/queue/nr_zones` reports 0 while
> > `blkzone report /dev/nvmeXnX` clearly shows > 0 zones. Not sure if this is
> > user error or a bug; it could also be a kernel bug rather than a QEMU one.
> > 
> 
> I can't reproduce that. Can you share your qemu configuration and kernel
> version?
> 
> > Let me know if sharing this information is helpful or rather just
> > annoying, don't want to bother anyone.
> > 
> 
> It is super helpful and super appreciated! Thanks!
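
To double-check the nr_zones discrepancy above, the two views can be compared
directly (the device name is just an example):

  # zone count according to the block layer sysfs attribute
  cat /sys/block/nvme0n1/queue/nr_zones

  # zone count according to a zone report (blkzone prints one line per zone)
  blkzone report /dev/nvme0n1 | wc -l

If the sysfs value stays 0 while the report lists zones, that would be
consistent with the kernel-side suspicion above.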

