Re: [Qemu-devel] [PATCH v5] vhost-user: add multi queue support


From: Marcel Apfelbaum
Subject: Re: [Qemu-devel] [PATCH v5] vhost-user: add multi queue support
Date: Mon, 31 Aug 2015 11:55:01 +0300
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Thunderbird/38.1.0

On 08/31/2015 08:42 AM, Xu, Qian Q wrote:
+ Jingguo, who will be working on the vhost multi-queue support functional tests.

Marcel,
What you are trying to do is run the dpdk-vhost sample together with the
dpdk-virtio (virtio-pmd) case, right? I have attached a step-by-step guide for
running it with one queue; you can update the commands in it for multiple queues.
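For the multi-queue case, the QEMU side of the guide would need something
roughly along these lines (an illustrative sketch only; the socket path, ids
and queue count are placeholders, and with MSI-X the virtio-net device needs
vectors=2*queues+2):

  qemu-system-x86_64 ... \
    -object memory-backend-file,id=mem,size=1G,mem-path=/dev/hugepages,share=on \
    -numa node,memdev=mem \
    -chardev socket,id=char0,path=/tmp/vhost-user.sock \
    -netdev type=vhost-user,id=net0,chardev=char0,vhostforce=on,queues=2 \
    -device virtio-net-pci,netdev=net0,mq=on,vectors=6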


Yes. Thanks a lot for the information!
I'll give it a try,
Marcel

Thanks
Qian

-----Original Message-----
From: Ouyang, Changchun
Sent: Monday, August 31, 2015 1:29 PM
To: address@hidden; address@hidden; address@hidden; Xu, Qian Q
Cc: address@hidden; address@hidden; address@hidden; Long, Thomas; Liu, Yuanhan; 
Ouyang, Changchun
Subject: RE: [Qemu-devel] [PATCH v5] vhost-user: add multi queue support


-----Original Message-----
From: Marcel Apfelbaum [mailto:address@hidden
Sent: Sunday, August 30, 2015 11:28 PM
To: Ouyang, Changchun; address@hidden; address@hidden
Cc: address@hidden; address@hidden;
address@hidden; Long, Thomas
Subject: Re: [Qemu-devel] [PATCH v5] vhost-user: add multi queue
support

On 05/28/2015 04:23 AM, Ouyang Changchun wrote:
Based on patch by Nikolay Nikolaev:
Vhost-user will implement multi-queue support in a similar way
to what vhost already has: a separate thread for each queue.
To enable the multi-queue functionality, a new command line
parameter "queues" is introduced for the vhost-user netdev.

Signed-off-by: Nikolay Nikolaev <address@hidden>
Signed-off-by: Changchun Ouyang <address@hidden>
Hi,

Can you please share how you tested it?
Was it the DPDK vhost-user example? (after the MQ patches, of course)
I ask because I tried to test it with no success so far.
Any pointers would be appreciated.


Yes, I used the DPDK vhost-user example to test it.
Hope the following link helps you a bit.
http://article.gmane.org/gmane.comp.networking.dpdk.devel/22758/match=vhost+multiple
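On the guest side, when testing with virtio-pmd, testpmd can be brought up
with more than one queue pair, roughly like this (a sketch only; the EAL core
mask and the --rxq/--txq options should be checked against the DPDK version
you are using):

  testpmd -c 0x7 -n 4 -- -i --rxq=2 --txq=2 --nb-cores=2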

Qian and Yuanhan, maybe you can try it further and share more information here.

Thanks
Changchun

Thanks,
Marcel


---
Changes since v4:
   - remove the unnecessary trailing '\n'

Changes since v3:
   - fix one typo and wrap one long line

Changes since v2:
   - fix vq index issue for set_vring_call
     In the VHOST_SET_VRING_CALL case, vq_index is not initialized before it
     is used, so it can hold a random value. That random value leads to a
     crash in vhost after being passed down, because vhost uses it as an
     array index.
   - fix the typo in the doc and description
   - address vq index for reset_owner

Changes since v1:
   - use s->nc.info_str when bringing up/down the backend

   docs/specs/vhost-user.txt |  5 +++++
   hw/net/vhost_net.c        |  3 ++-
   hw/virtio/vhost-user.c    | 11 ++++++++++-
   net/vhost-user.c          | 37 ++++++++++++++++++++++++-------------
   qapi-schema.json          |  6 +++++-
   qemu-options.hx           |  5 +++--
   6 files changed, 49 insertions(+), 18 deletions(-)

diff --git a/docs/specs/vhost-user.txt b/docs/specs/vhost-user.txt
index 650bb18..2c8e934 100644
--- a/docs/specs/vhost-user.txt
+++ b/docs/specs/vhost-user.txt
@@ -127,6 +127,11 @@ in the ancillary data:
  If Master is unable to send the full message or receives a wrong reply it will
  close the connection. An optional reconnection mechanism can be implemented.

+Multi queue support
+-------------------
+The protocol supports multiple queues by setting all index fields
+in the sent messages to a properly calculated value.
+
   Message types
   -------------

diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c
index 47f8b89..426b23e 100644
--- a/hw/net/vhost_net.c
+++ b/hw/net/vhost_net.c
@@ -157,6 +157,7 @@ struct vhost_net *vhost_net_init(VhostNetOptions *options)

       net->dev.nvqs = 2;
       net->dev.vqs = net->vqs;
+    net->dev.vq_index = net->nc->queue_index;

       r = vhost_dev_init(&net->dev, options->opaque,
                         options->backend_type, options->force);
@@ -267,7 +268,7 @@ static void vhost_net_stop_one(struct vhost_net *net,
           for (file.index = 0; file.index < net->dev.nvqs; ++file.index) {
               const VhostOps *vhost_ops = net->dev.vhost_ops;
               int r = vhost_ops->vhost_call(&net->dev, VHOST_RESET_OWNER,
-                                          NULL);
+                                          &file);
               assert(r >= 0);
           }
       }
diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c
index e7ab829..d6f2163 100644
--- a/hw/virtio/vhost-user.c
+++ b/hw/virtio/vhost-user.c
@@ -210,7 +210,12 @@ static int vhost_user_call(struct vhost_dev *dev, unsigned long int request,
           break;

       case VHOST_SET_OWNER:
+        break;
+
       case VHOST_RESET_OWNER:
+        memcpy(&msg.state, arg, sizeof(struct vhost_vring_state));
+        msg.state.index += dev->vq_index;
+        msg.size = sizeof(m.state);
           break;

       case VHOST_SET_MEM_TABLE:
@@ -253,17 +258,20 @@ static int vhost_user_call(struct vhost_dev *dev, unsigned long int request,
       case VHOST_SET_VRING_NUM:
       case VHOST_SET_VRING_BASE:
           memcpy(&msg.state, arg, sizeof(struct vhost_vring_state));
+        msg.state.index += dev->vq_index;
           msg.size = sizeof(m.state);
           break;

       case VHOST_GET_VRING_BASE:
           memcpy(&msg.state, arg, sizeof(struct vhost_vring_state));
+        msg.state.index += dev->vq_index;
           msg.size = sizeof(m.state);
           need_reply = 1;
           break;

       case VHOST_SET_VRING_ADDR:
           memcpy(&msg.addr, arg, sizeof(struct vhost_vring_addr));
+        msg.addr.index += dev->vq_index;
           msg.size = sizeof(m.addr);
           break;

@@ -271,7 +279,7 @@ static int vhost_user_call(struct vhost_dev *dev, unsigned long int request,
       case VHOST_SET_VRING_CALL:
       case VHOST_SET_VRING_ERR:
           file = arg;
-        msg.u64 = file->index & VHOST_USER_VRING_IDX_MASK;
+        msg.u64 = (file->index + dev->vq_index) &
+            VHOST_USER_VRING_IDX_MASK;
           msg.size = sizeof(m.u64);
           if (ioeventfd_enabled() && file->fd > 0) {
              fds[fd_num++] = file->fd;
@@ -313,6 +321,7 @@ static int vhost_user_call(struct vhost_dev *dev, unsigned long int request,
                   error_report("Received bad msg size.");
                   return -1;
               }
+            msg.state.index -= dev->vq_index;
               memcpy(arg, &msg.state, sizeof(struct vhost_vring_state));
               break;
           default:
diff --git a/net/vhost-user.c b/net/vhost-user.c
index 1d86a2b..904d8af 100644
--- a/net/vhost-user.c
+++ b/net/vhost-user.c
@@ -121,35 +121,39 @@ static void net_vhost_user_event(void *opaque, int event)
       case CHR_EVENT_OPENED:
           vhost_user_start(s);
           net_vhost_link_down(s, false);
-        error_report("chardev \"%s\" went up", s->chr->label);
+        error_report("chardev \"%s\" went up", s->nc.info_str);
           break;
       case CHR_EVENT_CLOSED:
           net_vhost_link_down(s, true);
           vhost_user_stop(s);
-        error_report("chardev \"%s\" went down", s->chr->label);
+        error_report("chardev \"%s\" went down", s->nc.info_str);
           break;
       }
   }

   static int net_vhost_user_init(NetClientState *peer, const char *device,
-                               const char *name, CharDriverState *chr)
+                               const char *name, CharDriverState *chr,
+                               uint32_t queues)
   {
       NetClientState *nc;
       VhostUserState *s;
+    int i;

-    nc = qemu_new_net_client(&net_vhost_user_info, peer, device, name);
+    for (i = 0; i < queues; i++) {
+        nc = qemu_new_net_client(&net_vhost_user_info, peer, device, name);

-    snprintf(nc->info_str, sizeof(nc->info_str), "vhost-user to %s",
-             chr->label);
+        snprintf(nc->info_str, sizeof(nc->info_str), "vhost-user%d to %s",
+                 i, chr->label);

-    s = DO_UPCAST(VhostUserState, nc, nc);
+        s = DO_UPCAST(VhostUserState, nc, nc);

-    /* We don't provide a receive callback */
-    s->nc.receive_disabled = 1;
-    s->chr = chr;
-
-    qemu_chr_add_handlers(s->chr, NULL, NULL, net_vhost_user_event, s);
+        /* We don't provide a receive callback */
+        s->nc.receive_disabled = 1;
+        s->chr = chr;
+        s->nc.queue_index = i;

+        qemu_chr_add_handlers(s->chr, NULL, NULL, net_vhost_user_event, s);
+    }
       return 0;
   }

@@ -225,6 +229,7 @@ static int net_vhost_check_net(QemuOpts *opts, void *opaque)
   int net_init_vhost_user(const NetClientOptions *opts, const char *name,
                           NetClientState *peer)
   {
+    uint32_t queues;
       const NetdevVhostUserOptions *vhost_user_opts;
       CharDriverState *chr;

@@ -243,6 +248,12 @@ int net_init_vhost_user(const NetClientOptions *opts, const char *name,
           return -1;
       }

+    /* number of queues for multiqueue */
+    if (vhost_user_opts->has_queues) {
+        queues = vhost_user_opts->queues;
+    } else {
+        queues = 1;
+    }

-    return net_vhost_user_init(peer, "vhost_user", name, chr);
+    return net_vhost_user_init(peer, "vhost_user", name, chr, queues);
   }
diff --git a/qapi-schema.json b/qapi-schema.json
index f97ffa1..00791dd 100644
--- a/qapi-schema.json
+++ b/qapi-schema.json
@@ -2444,12 +2444,16 @@
   #
  # @vhostforce: #optional vhost on for non-MSIX virtio guests (default: false).
  #
+# @queues: #optional number of queues to be created for multiqueue vhost-user
+#          (default: 1) (Since 2.4)
+#
   # Since 2.1
   ##
   { 'struct': 'NetdevVhostUserOptions',
     'data': {
       'chardev':        'str',
-    '*vhostforce':    'bool' } }
+    '*vhostforce':    'bool',
+    '*queues':        'uint32' } }

   ##
   # @NetClientOptions
diff --git a/qemu-options.hx b/qemu-options.hx
index ec356f6..dad035e 100644
--- a/qemu-options.hx
+++ b/qemu-options.hx
@@ -1942,13 +1942,14 @@ The hubport netdev lets you connect a NIC to a QEMU "vlan" instead of a single
  netdev.  @code{-net} and @code{-device} with parameter @option{vlan} create the
  required hub automatically.

-@item -netdev vhost-user,chardev=@var{id}[,vhostforce=on|off]
+@item -netdev vhost-user,chardev=@var{id}[,vhostforce=on|off][,queues=n]

  Establish a vhost-user netdev, backed by a chardev @var{id}. The chardev should
  be a unix domain socket backed one. The vhost-user uses a specifically defined
  protocol to pass vhost ioctl replacement messages to an application on the other
  end of the socket. On non-MSIX guests, the feature can be forced with
-@var{vhostforce}.
+@var{vhostforce}. Use 'queues=@var{n}' to specify the number of queues to
+be created for multiqueue vhost-user.

   Example:
   @example

