From: Eugenio Perez Martin
Subject: Re: Emulating device configuration / max_virtqueue_pairs in vhost-vdpa and vhost-user
Date: Tue, 31 Jan 2023 20:11:06 +0100

On Tue, Jan 31, 2023 at 8:10 PM Eugenio Perez Martin
<eperezma@redhat.com> wrote:
>
> Hi,
>
> The current approach of offering an emulated CVQ to the guest and
> mapping the commands to vhost-user is not scaling well:
> * Some devices already offer it, so the transformation is redundant.
> * There is no support for commands with variable length (RSS?)
>
> We can solve both of them by offering it through vhost-user the same
> way as vhost-vdpa does. With this approach qemu needs to track the
> commands for the same reason as vhost-vdpa: qemu needs to track the
> device status for live migration. vhost-user should use the same SVQ
> code for this, so we avoid duplication.
>
> One of the challenges here is to know what virtqueue to shadow /
> isolate. The vhost-user device may not have the same queues as the
> device frontend:
> * The former depends on the actual vhost-user device; qemu fetches it
> with VHOST_USER_GET_QUEUE_NUM at the moment.
> * The qemu device frontend's queue count is set by the netdev queues=
> cmdline parameter.
>
> For the device, the CVQ is the last virtqueue it offers, but for the
> guest it is the last one advertised in config space (index
> 2 * max_virtqueue_pairs).
>
> Creating a new vhost-user command to decrease that maximum number of
> queues may be an option, but we can do it without adding more
> commands by remapping the CVQ index at virtqueue setup. I think it
> should be doable using (struct vhost_dev).vq_index plus a few
> adjustments here and there.
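>
> A minimal sketch of that remapping (illustrative only: the function
> and parameter names below are made up, not existing qemu code):
>
>     /* Data virtqueues map 1:1; only the CVQ index differs, because
>      * the guest places it right after its own data queues while the
>      * backend places it after all the queues it supports. */
>     static unsigned remap_cvq_index(unsigned guest_idx,
>                                     unsigned guest_queue_pairs,
>                                     unsigned backend_queue_pairs)
>     {
>         if (guest_idx == 2 * guest_queue_pairs) {
>             return 2 * backend_queue_pairs;
>         }
>         return guest_idx;
>     }
>
> E.g. with queues=4 on the cmdline and a backend exposing 8 pairs, the
> guest CVQ at index 8 would map to backend index 16.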
>
> Thoughts?
>
> Thanks!


(Starting a separate thread for the vhost-vdpa related use case)

This could also work for vhost-vdpa if we ever decide to honor the
netdev queues= argument. It is totally ignored now, as opposed to the
rest of the backends:
* vhost-kernel, whose tap device gets the requested number of queues.
* vhost-user, which errors out ("you are asking more queues than
supported") if the vhost-user parent device has fewer queues than
requested (as reported by the vhost-user msg VHOST_USER_GET_QUEUE_NUM);
a sketch of the equivalent check for vhost-vdpa follows this list.
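
A minimal sketch of the check vhost-vdpa could perform, mirroring the
vhost-user behavior (hypothetical code, none of these names exist in
qemu today):

    #include <stdio.h>

    /* Hypothetical: validate the cmdline queues= value against what
     * the vdpa parent device reports, as vhost-user already does. */
    static int vdpa_validate_queues(unsigned requested, unsigned supported)
    {
        if (requested > supported) {
            fprintf(stderr,
                    "you are asking more queues than supported: %u\n",
                    supported);
            return -1;
        }
        return 0;
    }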

One of the reasons for this is that the device configuration space is
totally passthrough, with the values for mtu, rss conditions, etc.
coming straight from the device. This is not ideal, as qemu cannot
check source and destination equivalence, and those values can change
under the feet of the guest in the event of a migration. External
tools are needed for this, duplicating part of the effort.

Starting to intercept config space accesses and offering an emulated
config space to the guest, with this kind of adjustment, would be
beneficial, as it makes vhost-vdpa more similar to the rest of the
backends and greatly lowers the surprise when something changes.
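
As a rough illustration of the adjustment (simplified config layout
and invented function name; real code would hook into the existing
virtio-net config path):

    #include <stdint.h>

    /* Simplified virtio-net config space layout, per the virtio spec. */
    struct virtio_net_config {
        uint8_t  mac[6];
        uint16_t status;
        uint16_t max_virtqueue_pairs;
        uint16_t mtu;
    };

    /* Hypothetical: build the guest-visible config from the device's
     * real one, clamping max_virtqueue_pairs to the netdev queues=
     * value so source and destination stay equivalent across a
     * migration. */
    static void emulate_net_config(const struct virtio_net_config *dev,
                                   uint16_t cmdline_queues,
                                   struct virtio_net_config *guest)
    {
        *guest = *dev;
        if (guest->max_virtqueue_pairs > cmdline_queues) {
            guest->max_virtqueue_pairs = cmdline_queues;
        }
    }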

Thoughts?

Thanks!



