From: Keith Busch
Subject: Re: [PATCH v11 00/13] hw/block/nvme: Support Namespace Types and Zoned Namespace Command Set
Date: Sat, 6 Feb 2021 01:13:26 +0900
User-agent: Mutt/1.12.1 (2019-06-15)

On Sat, Feb 06, 2021 at 01:07:57AM +0900, Minwoo Im wrote:
> On 21-02-05 08:02:10, Keith Busch wrote:
> > On Fri, Feb 05, 2021 at 09:33:54PM +0900, Minwoo Im wrote:
> > > On 21-02-05 12:42:30, Klaus Jensen wrote:
> > > > On Feb  5 12:25, info@dantalion.nl wrote:
> > > > > On 05-02-2021 11:39, Klaus Jensen wrote:
> > > > > > This is a good way to report it ;)
> > > > > > It is super helpful and super appreciated! Thanks!
> > > > > 
> > > > > Good to know :)
> > > > > 
> > > > > > I can't reproduce that. Can you share your qemu configuration and
> > > > > > kernel version?
> > > > > 
> > > > > I create the image and launch QEMU with:
> > > > > qemu-img create -f raw znsssd.img 16777216
> > > > > 
> > > > > qemu-system-x86_64 -name qemuzns -m 4G -cpu Haswell -smp 2 -hda \
> > > > > ./arch-qemu.qcow2 -net user,hostfwd=tcp::7777-:22,\
> > > > > hostfwd=tcp::2222-:2000 -net nic \
> > > > > -drive file=./znsssd.img,id=mynvme,format=raw,if=none \
> > > > > -device nvme-subsys,id=subsys0 \
> > > > > -device nvme,serial=baz,id=nvme2,zoned.append_size_limit=131072,\
> > > > > subsys=subsys0 \
> > > > > -device nvme-ns,id=ns2,drive=mynvme,nsid=2,logical_block_size=4096,\
> > > > > physical_block_size=4096,zoned=true,zoned.zone_size=131072,\
> > > > > zoned.zone_capacity=131072,zoned.max_open=0,zoned.max_active=0,bus=nvme2
> > > > > 
> > > > > This should create 128 zones, as 16777216 / 131072 = 128. My qemu
> > > > > version is on d79d797b0dd02c33dc9428123c18ae97127e967b of nvme-next.
> > > > > 
> > > > > I don't actually think the subsys is needed when you use bus=; that
> > > > > is just something left over from trying to identify why the nvme
> > > > > device was not initializing.
> > > > > 
> > > > > I use an Arch qcow image with kernel version 5.10.12
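
For reference, the resulting zone count can be checked from inside the
guest; a minimal sketch, assuming the namespace shows up as nvme0n1 (the
name depends on enumeration; with nsid=2 it may be nvme0n2):

  cat /sys/block/nvme0n1/queue/nr_zones   # expect 128 (16777216 / 131072)
  blkzone report /dev/nvme0n1 | wc -l     # blkzone prints one line per zone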
> > > > 
> > > > Thanks - I can reproduce it now.
> > > > 
> > > > Happens only when the subsystem is involved. Looks like a kernel issue
> > > > to me since the zones are definitely there when using nvme-cli.
> > > 
> > > Yes, it looks like it happens when CONFIG_NVME_MULTIPATH=y and subsys is
> > > given for namespace sharing.  In that case, the actual hidden namespace
> > > for nvme0n1 might be nvme0c0n1.
> > > 
> > > lrwxrwxrwx 1 root root 0 Feb  5 12:30 /sys/block/nvme0c0n1 -> 
> > > ../devices/pci0000:00/0000:00:06.0/nvme/nvme0/nvme0c0n1/
> > > lrwxrwxrwx 1 root root 0 Feb  5 12:30 /sys/block/nvme0n1 -> 
> > > ../devices/virtual/nvme-subsystem/nvme-subsys0/nvme0n1/   
> > > 
> > > cat /sys/block/nvme0c0n1/queue/nr_zones returns proper value.
> > > 
> > > > 
> > > > Stuff also seems to be initialized in the kernel since blkzone report
> > > > works.
> > > > 
> > > > Keith, this might be some fun for you? :)
> > > 
> > > I also really want to ask about the head namespace policy in the
> > > kernel. :)
> > 
> > What's the question? It looks like I'm missing some part of the context.
> 
> If multipath is enabled, a namespace head and a hidden namespace will be
> created.  In this case, /sys/block/nvme0n1/queue/nr_zones is not
> returning the proper value for the namespace itself.  By the way, the
> hidden namespace's /sys/block/nvme0c0n1/queue/nr_zones is returning it
> properly.
> 
> Is it okay for sysfs of the head namespace node (nvme0n1) not to manage
> the request queue attributes like nr_zones?
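
The mismatch shows up directly in sysfs; a sketch, assuming the
controller enumerates as nvme0 as in the listing above:

  cat /sys/block/nvme0c0n1/queue/nr_zones   # path device: 128, as expected
  cat /sys/block/nvme0n1/queue/nr_zones     # multipath head: not the real count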

Gotcha.

The q->nr_zones is not a stacking limit, so the virtual device that's
made visible does not inherit the setting from the path device that
contains it. I'll see about getting a kernel fix proposed.
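
A minimal sketch of the shape such a fix could take (the helper name
here is made up, and this is not the actual patch): since the stacking
limits machinery does not propagate q->nr_zones, the nvme multipath code
would have to copy the zone count from the path device's queue to the
head disk's queue explicitly, for example:

  /* Hypothetical sketch: run after a zoned path namespace is added to
   * its head, in the spirit of drivers/nvme/host/multipath.c. */
  static void nvme_mpath_copy_nr_zones(struct nvme_ns *ns)
  {
  #ifdef CONFIG_BLK_DEV_ZONED
  	if (blk_queue_is_zoned(ns->queue) && ns->head->disk)
  		ns->head->disk->queue->nr_zones = ns->queue->nr_zones;
  #endif
  }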


