
From: Stefan Hajnoczi
Subject: Re: [RFC PATCH 1/2] hw/nvme: add mi device
Date: Mon, 12 Jul 2021 12:03:27 +0100

On Fri, Jul 09, 2021 at 07:25:45PM +0530, Padmakar Kalghatgi wrote:
> The enclosed patch contains the implementation of certain commands of
> the NVMe-MI specification. The MI commands are useful to
> manage/configure/monitor the device. Even though the MI commands can be
> sent via the in-band NVMe-MI send/receive commands, the idea here is to
> emulate the sideband interface for MI.
> 
> Since the NVMe-MI specification deals with communicating with the NVMe
> subsystem via a sideband interface, this QEMU implementation uses
> virtio-vsock for the sideband communication; the guest VM needs to
> connect to the specific CID of the vsock on the QEMU host.
> 
> One needs to pass the following options at launch to specify the
> nvme-mi device and the CID, and to set up the vsock:
> -device nvme-mi,bus=<nvme bus number>
> -device vhost-vsock-pci,guest-cid=<vsock cid>
> 
> The commands were tested with nvme-cli by hooking into the CID of the
> vsock as shown above and using socket send/receive calls to issue the
> commands and get the responses.
> 
> We are planning to push the changes for nvme-cli as well to test the
> MI functionality.

Is the purpose of this feature (-device nvme-mi) testing MI with QEMU's
NVMe implementation?

My understanding is that instead of inventing an out-of-band interface
in the form of a new paravirtualized device, you decided to use vsock to
send MI commands from the guest to QEMU?
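
As an aside, for readers unfamiliar with that flow: a guest-side test
client would be an ordinary AF_VSOCK socket program, roughly like the
sketch below. This is purely illustrative and not taken from the patch;
the port number, the host-CID assumption and the request bytes are
placeholders, and real requests have to be encoded per the NVMe-MI
specification.

/*
 * Illustrative only, not part of the patch: connect from the guest to the
 * emulated MI endpoint over vsock and exchange one raw message.  The port
 * and the request contents are made-up placeholders.
 */
#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/vm_sockets.h>

#define NVME_MI_VSOCK_PORT 1234   /* placeholder: whatever port the device uses */

int main(void)
{
    struct sockaddr_vm addr = {
        .svm_family = AF_VSOCK,
        .svm_cid    = VMADDR_CID_HOST,   /* assumes QEMU listens on the host CID */
        .svm_port   = NVME_MI_VSOCK_PORT,
    };
    unsigned char req[16] = { 0 };       /* placeholder NVMe-MI request bytes */
    unsigned char rsp[4096];
    int fd = socket(AF_VSOCK, SOCK_STREAM, 0);

    if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("vsock connect");
        return 1;
    }
    if (send(fd, req, sizeof(req), 0) < 0) {
        perror("send");
        return 1;
    }
    ssize_t n = recv(fd, rsp, sizeof(rsp), 0);
    printf("received %zd bytes\n", n);
    close(fd);
    return 0;
}

Built and run inside the guest, this only demonstrates the transport; the
actual command encoding would come from the nvme-cli changes mentioned in
the cover letter.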

> As the connection can be established by the guest VM at any point,
> we have created a thread that listens for a connection request.
> Please suggest if there is a native/better way to handle this.

QEMU has an event-driven architecture and uses threads sparingly. When
it does use threads it uses qemu_thread_create() instead of
pthread_create(), but I suggest using qemu_set_fd_handler() or a
coroutine with QIOChannel to integrate into the QEMU event loop instead.

I didn't see any thread synchronization, so I'm not sure if accessing
NVMe state from the MI thread is safe. Changing the code to use QEMU's
event loop can solve that problem since there's no separate thread.
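
For illustration, a minimal sketch of the fd-handler approach could look
like the following. NvmeMiCtrl, its fields and the handler names are
invented here, not the patch's actual structures, and a QIOChannel plus
coroutine version would achieve the same event-loop integration.

/*
 * Minimal sketch: drive the MI listening socket from QEMU's main loop
 * with qemu_set_fd_handler() instead of a dedicated thread.  Assumes the
 * device keeps a non-blocking AF_VSOCK listening socket around.
 */
#include "qemu/osdep.h"
#include "qemu/main-loop.h"
#include <sys/socket.h>
#include <fcntl.h>

typedef struct NvmeMiCtrl {
    int listen_fd;   /* non-blocking listening vsock socket */
    int conn_fd;     /* currently accepted connection, -1 if none */
} NvmeMiCtrl;

static void nvme_mi_conn_read(void *opaque)
{
    NvmeMiCtrl *mi = opaque;
    /*
     * Runs in the main loop, so NVMe controller state can be read without
     * additional locking.  Read the MI request from mi->conn_fd, dispatch
     * it and write the response back; on EOF, unregister the handler and
     * close the fd.
     */
}

static void nvme_mi_accept(void *opaque)
{
    NvmeMiCtrl *mi = opaque;
    int fd = accept(mi->listen_fd, NULL, NULL);

    if (fd < 0) {
        return;   /* spurious wakeup or transient error */
    }
    fcntl(fd, F_SETFL, fcntl(fd, F_GETFL) | O_NONBLOCK);
    mi->conn_fd = fd;
    qemu_set_fd_handler(fd, nvme_mi_conn_read, NULL, mi);
}

/* Called once from the device's realize function instead of spawning a thread. */
static void nvme_mi_start(NvmeMiCtrl *mi)
{
    qemu_set_fd_handler(mi->listen_fd, nvme_mi_accept, NULL, mi);
}

When the connection closes or the device is unrealized, the handlers are
removed again with qemu_set_fd_handler(fd, NULL, NULL, NULL).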

> This module makes use of the NvmeCtrl structure of the nvme module to
> fetch relevant information about the NVMe device, which is used in some
> of the MI commands. Even though certain commands might require
> modifications to the nvme module, we have currently refrained from
> making changes to it.

Why did you decide to implement -device nvme-mi as a device on
TYPE_NVME_BUS? If the NVMe spec somehow requires this then I'm surprised
that there's no NVMe bus interface (callbacks). It seems like this could
just as easily be a property of an NVMe controller (-device
nvme,mi=on|off) or of the subsystem (-device nvme-subsys,mi=on|off)? I'm
probably just not familiar enough with MI and NVMe architecture...
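
To make that concrete, the controller-level variant could be declared
roughly as below; the params.mi field (and a matching bool in NvmeParams)
is an assumption for illustration, not existing hw/nvme code.

/*
 * Rough sketch of an mi=on|off controller property; assumes a new
 * 'bool mi' field added to NvmeParams.
 */
#include "qemu/osdep.h"
#include "hw/qdev-properties.h"
#include "nvme.h"   /* local hw/nvme header providing NvmeCtrl/NvmeParams */

static Property nvme_props[] = {
    /* ... existing nvme controller properties ... */
    DEFINE_PROP_BOOL("mi", NvmeCtrl, params.mi, false),
    DEFINE_PROP_END_OF_LIST(),
};

The controller's realize function could then start or skip the MI
endpoint based on n->params.mi, selected on the command line with
-device nvme,mi=on.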

Stefan


