From: Jason Wang
Subject: Re: [PATCH 17/31] vdpa: adapt vhost_ops callbacks to svq
Date: Tue, 8 Feb 2022 11:57:54 +0800
User-agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0) Gecko/20100101 Thunderbird/91.5.1


On 2022/2/1 at 2:58 AM, Eugenio Perez Martin wrote:
On Sun, Jan 30, 2022 at 5:03 AM Jason Wang <jasowang@redhat.com> wrote:

On 2022/1/22 at 4:27 AM, Eugenio Pérez wrote:
First half of the buffers forwarding part, preparing vhost-vdpa
callbacks to SVQ to offer it. QEMU cannot enable it at this moment, so
this is effectively dead code for now, but it helps to reduce the patch
size.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
   hw/virtio/vhost-shadow-virtqueue.h |   2 +-
   hw/virtio/vhost-shadow-virtqueue.c |  21 ++++-
   hw/virtio/vhost-vdpa.c             | 133 ++++++++++++++++++++++++++---
   3 files changed, 143 insertions(+), 13 deletions(-)

diff --git a/hw/virtio/vhost-shadow-virtqueue.h b/hw/virtio/vhost-shadow-virtqueue.h
index 035207a469..39aef5ffdf 100644
--- a/hw/virtio/vhost-shadow-virtqueue.h
+++ b/hw/virtio/vhost-shadow-virtqueue.h
@@ -35,7 +35,7 @@ size_t vhost_svq_device_area_size(const VhostShadowVirtqueue *svq);

   void vhost_svq_stop(VhostShadowVirtqueue *svq);

-VhostShadowVirtqueue *vhost_svq_new(void);
+VhostShadowVirtqueue *vhost_svq_new(uint16_t qsize);

   void vhost_svq_free(VhostShadowVirtqueue *vq);

diff --git a/hw/virtio/vhost-shadow-virtqueue.c b/hw/virtio/vhost-shadow-virtqueue.c
index f129ec8395..7c168075d7 100644
--- a/hw/virtio/vhost-shadow-virtqueue.c
+++ b/hw/virtio/vhost-shadow-virtqueue.c
@@ -277,9 +277,17 @@ void vhost_svq_stop(VhostShadowVirtqueue *svq)
   /**
    * Creates vhost shadow virtqueue, and instruct vhost device to use the shadow
    * methods and file descriptors.
+ *
+ * @qsize Shadow VirtQueue size
+ *
+ * Returns the new virtqueue or NULL.
+ *
+ * In case of error, reason is reported through error_report.
    */
-VhostShadowVirtqueue *vhost_svq_new(void)
+VhostShadowVirtqueue *vhost_svq_new(uint16_t qsize)
   {
+    size_t desc_size = sizeof(vring_desc_t) * qsize;
+    size_t device_size, driver_size;
       g_autofree VhostShadowVirtqueue *svq = g_new0(VhostShadowVirtqueue, 1);
       int r;

@@ -300,6 +308,15 @@ VhostShadowVirtqueue *vhost_svq_new(void)
       /* Placeholder descriptor, it should be deleted at set_kick_fd */
       event_notifier_init_fd(&svq->svq_kick, INVALID_SVQ_KICK_FD);

+    svq->vring.num = qsize;
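(An aside to make the size question below concrete; this is not part of the
patch. Per the virtio 1.x split-ring layout, every ring area grows linearly
with qsize. A self-contained sketch with made-up function names:)

#include <stddef.h>
#include <stdint.h>

/* Illustration only, not code from this series: byte sizes of the three
 * split-ring areas for a queue of qsize entries (without event idx). */
static size_t example_desc_table_bytes(uint16_t qsize)
{
    return 16u * (size_t)qsize;          /* sizeof(struct vring_desc) == 16 */
}

static size_t example_avail_ring_bytes(uint16_t qsize)
{
    return 2u + 2u + 2u * (size_t)qsize; /* flags + idx + ring[qsize] */
}

static size_t example_used_ring_bytes(uint16_t qsize)
{
    return 2u + 2u + 8u * (size_t)qsize; /* flags + idx + ring[qsize] */
}

/* E.g. a 256-entry ring needs roughly 6.5 KiB in total, a 32K one roughly
 * 832 KiB. */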

I wonder if this is the best. E.g. some hardware can support up to a 32K
queue size. So this will probably end up with:

1) SVQ uses a 32K queue size
2) hardware queue uses 256

In that case the SVQ vring queue size will be 32K and the guest's vring can
negotiate any number with SVQ equal to or less than 32K,


Sorry for being unclear; what I meant is actually:

1) SVQ uses 32K queue size

2) guest vq uses 256

This looks like a burden that needs extra logic and may hurt performance.

And this can lead to another interesting situation:

1) SVQ uses 256

2) guest vq uses 1024

Where a lot more SVQ logic is needed.


including 256.
Is that what you mean?


I mean, it looks to me that the logic would be much simpler if we just allocate the shadow virtqueue with the size the guest can see (the guest vring).

Then we don't need to think about whether the difference in queue sizes can have any side effects.
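
For illustration, a minimal sketch of that idea (made-up names, not code from
this series): the shadow ring simply takes the guest-negotiated size, clamped
by the device maximum.

#include <stdint.h>

/* Hypothetical helper, only to illustrate the suggestion above: size the
 * shadow ring with the guest-visible vring size so both rings always have
 * the same number of entries, never exceeding the device maximum. */
static inline uint16_t example_svq_ring_size(uint16_t guest_num,
                                             uint16_t device_max)
{
    return guest_num <= device_max ? guest_num : device_max;
}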



If by hardware queues you mean the guest's vring, I'm not sure why it is
"probably 256". I'd say that in that case, with the virtio-net kernel
driver for example, the ring size will be the same as the one the device
exports, isn't it?

The implementation should support any combination of sizes, but the
ring size exposed to the guest is never bigger than the hardware one.

? Or SVQ can stick to 256, but will this cause trouble if we want
to add event index support?

I think we should not have any problem with event idx. If you mean
that the guest could mark more buffers available than the SVQ vring's
size, that should not happen because there must be fewer entries in the
guest vring than in SVQ.

But if I understood you correctly, a similar situation could happen if
a guest's contiguous buffer is scattered across many QEMU VA chunks.
Even if that happened, the situation should be OK too: SVQ knows
the guest's avail idx and, if SVQ is full, it will continue forwarding
avail buffers when the device uses more buffers.
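
A hand-wavy, self-contained sketch of that bookkeeping (all types and names
are made up, not the series' API):

#include <stdint.h>

/* Simplified state, for illustration only: how far the guest's avail ring
 * has been consumed vs. how many shadow descriptors are still free. */
typedef struct ExampleSvq {
    uint16_t guest_avail_idx;   /* guest's published avail index */
    uint16_t next_guest_head;   /* next guest avail entry to forward */
    uint16_t free_slots;        /* free shadow ring descriptors */
} ExampleSvq;

/* Forward as many guest avail buffers as currently fit in the shadow ring. */
static void example_forward_avail(ExampleSvq *svq)
{
    while (svq->next_guest_head != svq->guest_avail_idx && svq->free_slots) {
        svq->next_guest_head++;   /* take one guest buffer... */
        svq->free_slots--;        /* ...and one shadow descriptor */
    }
}

/* When the device marks n buffers as used, slots come back, so any guest
 * buffers that did not fit before can be forwarded now. */
static void example_handle_used(ExampleSvq *svq, uint16_t n)
{
    svq->free_slots += n;
    example_forward_avail(svq);
}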

Does that make sense to you?


Yes.

Thanks



