qemu-devel

Re: [PATCH 1/1] virtio-blk-ccw: tweak the default for num_queues


From: Michael Mueller
Subject: Re: [PATCH 1/1] virtio-blk-ccw: tweak the default for num_queues
Date: Wed, 11 Nov 2020 13:49:08 +0100
User-agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0) Gecko/20100101 Thunderbird/68.12.1



On 11.11.20 13:38, Cornelia Huck wrote:
On Wed, 11 Nov 2020 13:26:11 +0100
Michael Mueller <mimu@linux.ibm.com> wrote:

On 10.11.20 15:16, Michael Mueller wrote:


On 09.11.20 19:53, Halil Pasic wrote:
On Mon, 9 Nov 2020 17:06:16 +0100
Cornelia Huck <cohuck@redhat.com> wrote:
@@ -20,6 +21,11 @@ static void virtio_ccw_blk_realize(VirtioCcwDevice *ccw_dev, Error **errp)
 {
     VirtIOBlkCcw *dev = VIRTIO_BLK_CCW(ccw_dev);
     DeviceState *vdev = DEVICE(&dev->vdev);
+    VirtIOBlkConf *conf = &dev->vdev.conf;
+
+    if (conf->num_queues == VIRTIO_BLK_AUTO_NUM_QUEUES) {
+        conf->num_queues = MIN(4, current_machine->smp.cpus);
+    }

I would like to have a comment explaining the numbers here, however.

virtio-pci has a pretty good explanation (use 1:1 for vqs:vcpus if
possible, apply some other capping). 4 seems to be a bit arbitrary
without explanation, although I'm sure you did some measurements :)
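
[For reference, a sketch from the editor rather than the thread: the virtio-pci default mentioned above boils down to one request queue per vCPU, capped so that any fixed queues still fit under the transport's queue limit. Modeled loosely on QEMU's virtio_pci_optimal_num_queues(); an approximation, not the exact upstream code.]

/*
 * Illustrative sketch: virtio-pci-style num_queues default.
 * One queue per vCPU, capped by the transport-wide queue limit
 * minus any fixed (non-request) queues.
 */
#include <stdio.h>

#define VIRTIO_QUEUE_MAX 1024                /* QEMU's transport-wide limit */
#define MIN(a, b) ((a) < (b) ? (a) : (b))

static unsigned optimal_num_queues(unsigned smp_cpus, unsigned fixed_queues)
{
    return MIN(smp_cpus, VIRTIO_QUEUE_MAX - fixed_queues);
}

int main(void)
{
    printf("%u\n", optimal_num_queues(8, 0)); /* 8-vCPU guest -> 8 queues */
    return 0;
}

[The ccw patch above layers an additional cap of 4 on top of the vCPU count; that 4 is exactly the number being asked about.]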

Frankly, I don't have any measurements yet. For the secure case,
I think Mimu has assessed the impact of multiqueue, hence adding Mimu to
the cc list. @Mimu, can you help us out?

Regarding normal, non-protected VMs, I'm in the middle of producing some
measurement data. This was admittedly a bit rushed because of where we
are in the cycle. Sorry to disappoint you.

I'm talking with the perf team tomorrow. They have done some
measurements with multiqueue for PV guests and I asked for a comparison
to non PV guests as well.

The perf team has performed measurements for us showing that a *PV
KVM guest* benefits from a multiqueue setup in terms of throughput for
random read, random write, and sequential read (no difference for
sequential write). CPU cost is reduced as well, due to reduced spinlock
contention.

Just to be clear, that was with 4 queues?

Yes, we have seen it with 4 and also with 9 queues.

Halil,

still I would like to know the exact memory consumption per queue
that you are talking about. Have you made a calculation? Thanks.
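
[As a rough reference for that calculation, from the editor rather than the thread: the virtio 1.1 split-virtqueue layout fixes the guest-memory cost of one queue at 16 bytes per descriptor plus the avail and used rings. A minimal sketch of the arithmetic, counting guest memory only; alignment padding and host-side tracking are left out, and virtio-blk's default queue size of 256 is an assumption that depends on the QEMU version.]

/*
 * Guest-memory cost of one split virtqueue per the virtio 1.1
 * layout (alignment padding and host-side state not counted).
 */
#include <stdio.h>

static size_t vring_guest_bytes(size_t qsize)
{
    size_t desc  = 16 * qsize;    /* descriptor table: 16 bytes per entry */
    size_t avail = 6 + 2 * qsize; /* avail ring: flags, idx, ring[], used_event */
    size_t used  = 6 + 8 * qsize; /* used ring: flags, idx, ring[], avail_event */
    return desc + avail + used;
}

int main(void)
{
    /* assuming virtio-blk's default queue size of 256 */
    printf("%zu bytes per queue\n", vring_guest_bytes(256)); /* 6668 */
    return 0;
}

[At a queue size of 256 that is about 6.5 KiB of guest RAM per queue, i.e. on the order of 26 KiB for the proposed 4-queue default.]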



For a *standard KVM guest* it currently has no throughput effect. No
benefit and no harm. I have asked them to finalize their measurements
by comparing the CPU cost as well. I will receive that information on
Friday.

Thank you for checking!


Michael



Michael

The number 4 was suggested by Christian; maybe Christian has some
readily available measurement data for the normal VM case. @Christian:
can you help me out?

Regards,
Halil






