

From: Chris Friesen
Subject: Re: [Qemu-devel] is there a limit on the number of in-flight I/O operations?
Date: Fri, 22 Aug 2014 18:59:38 -0600
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:24.0) Gecko/20100101 Thunderbird/24.5.0

On 07/21/2014 10:10 AM, Benoît Canet wrote:
> On Monday 21 Jul 2014 at 09:35:29 (-0600), Chris Friesen wrote:
>> On 07/21/2014 09:15 AM, Benoît Canet wrote:
>>> On Monday 21 Jul 2014 at 08:59:45 (-0600), Chris Friesen wrote:
>>>> On 07/19/2014 02:45 AM, Benoît Canet wrote:

>>> I think in the throttling case the number of in-flight operations is limited by
>>> the emulated hardware queue. Otherwise requests would pile up and throttling would be
>>> ineffective.
>>>
>>> So this number should be around: #define VIRTIO_PCI_QUEUE_MAX 64, or something
>>> like that.

>> Okay, that makes sense.  Do you know how much data can be written as part of
>> a single operation?  We're using 2MB hugepages for the guest memory, and we
>> saw the qemu RSS numbers jump from 25-30MB during normal operation up to
>> 120-180MB when running dbench.  I'd like to know what the worst-case would

> Sorry, I didn't understand this part at first read.
>
> In the Linux guest, can you monitor:
> $ cat /sys/class/block/xyz/inflight ?
>
> This would give us a fairly precise count of the requests actually in flight
> between the guest and qemu.


After a bit of a break I'm looking at this again.

While doing "dd if=/dev/zero of=testfile bs=1M count=700" in the guest, I got a max "inflight" value of 181. This seems quite a bit higher than VIRTIO_PCI_QUEUE_MAX.

I've seen throughput as high as ~210 MB/sec, which also kicked the RSS numbers up above 200MB.

I tried dropping VIRTIO_PCI_QUEUE_MAX down to 32 (it didn't seem to work at all for values much lower than that, though I didn't bother finding the exact cutoff) and it didn't really make any difference; I still saw inflight values as high as 177.

Chris


