Re: [Qemu-block] Why qemu processes can bypass cgroup's blkio.weight ?

From: Bob Chen
Subject: Re: [Qemu-block] Why qemu processes can bypass cgroup's blkio.weight ?
Date: Tue, 16 Feb 2016 18:16:47 +0800

I used dd if=/dev/zero of=/dev/vdc1 oflag=direct bs=1M to fill the qcow2 file before starting the read test.

I have also watched the benchmark with iotop on the host; the two qemu processes show the same throughput.

Besides, I tried the write test as well, with the same results. But according to some blogs I found on the Internet, qemu writes might be affected by the write cache, so my tests were mainly focused on reads.
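The fill pass matters because unwritten regions occupy no host storage. A minimal local demonstration of the same effect, using a sparse temp file in place of the guest device (the guest-side fill command itself is the dd shown above):

```shell
# A sparse file behaves like an unallocated qcow2 cluster: it occupies
# no data blocks until something is actually written to it.
f=$(mktemp)
truncate -s 16M "$f"                        # sparse: ~0 KiB of data blocks
s1=$(du -k "$f" | cut -f1)
dd if=/dev/zero of="$f" bs=1M count=16 conv=notrunc 2>/dev/null
s2=$(du -k "$f" | cut -f1)                  # now ~16384 KiB of data blocks
rm -f "$f"
echo "allocated before=${s1}K after=${s2}K"
```

Only after such a fill do guest reads have to hit real host storage, which is what the blkio controller can arbitrate.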

2016-02-16 0:04 GMT+08:00 Stefan Hajnoczi <address@hidden>:
On Mon, Feb 15, 2016 at 04:57:02PM +0800, Bob Chen wrote:
> > On Fri, Jan 22, 2016 at 10:57:29AM +0800, Bob Chen wrote:
> > > I want to achieve proportional IO sharing by using cgroup.
> > >
> > > My qemu config is:     -drive
> > > file=$DISKFILe,if=none,format=qcow2,cache=none,aio=native -device
> > > virtio-blk-pci...
> > >
> > > Test command inside vm is:     dd if=/dev/vdc of=/dev/null iflag=direct
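The proportional-sharing setup being described can be sketched as follows (cgroup v1 blkio controller; the group names, weights, and mount path are illustrative, not from the thread):

```shell
# Hedged sketch of cgroup-v1 proportional I/O weights. Requires root
# and the blkio controller mounted at /sys/fs/cgroup/blkio.
CG=/sys/fs/cgroup/blkio
if [ -w "$CG" ]; then
    mkdir -p "$CG/vm1" "$CG/vm2"
    echo 800 > "$CG/vm1/blkio.weight"   # ~4x the disk time of vm2
    echo 200 > "$CG/vm2/blkio.weight"
    # Move each QEMU process into its group (PIDs are placeholders):
    # echo "$QEMU_PID_1" > "$CG/vm1/tasks"
    # echo "$QEMU_PID_2" > "$CG/vm2/tasks"
else
    echo "blkio cgroup (v1) not mounted writable here; sketch only"
fi
cg_sketch=done
```

Note that these weights only take effect when both groups actually submit competing I/O to the host block layer, which is the crux of the reply below.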

The host blkio controller does not "see" I/O requests that QEMU satisfies
internally without submitting host I/O requests.

Is it possible that your dd benchmark is reading lots of unallocated
zero regions from the qcow2 file?

In that case no host disk I/O is taking place so the blkio controller
doesn't come into play even though the guest thinks a lot of I/O is
taking place.  You may notice that the reads are very fast.  That is
because QEMU just checks the qcow2 L1/L2 table and decides the blocks
are filled with zeroes so no I/O is necessary.
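One way to check whether this is happening is `qemu-img map`, which prints the guest ranges that are backed by host data; unallocated ranges read as zeroes with no host I/O. A sketch (the demo creates a fresh empty image rather than assuming a path):

```shell
# For an empty qcow2 image, "qemu-img map" reports no mapped ranges:
# every guest read is a zero read answered from the L1/L2 tables.
if command -v qemu-img >/dev/null 2>&1; then
    img=$(mktemp -u /tmp/demo-XXXXXX.qcow2)
    qemu-img create -f qcow2 "$img" 64M >/dev/null
    qemu-img map "$img"          # empty output: nothing allocated yet
    rm -f "$img"
else
    echo "qemu-img not installed; see the qemu-img(1) map subcommand"
fi
map_sketch=done
```

Running the same command against the actual disk image would show whether the regions dd is reading are allocated at all.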

When comparing blkio controller behavior, don't trust the guest benchmark
stats.  Use iostat(1) on the host to measure throughput and iops.
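For example, extended per-device stats sampled while the guest benchmark runs (iostat ships in the sysstat package; device names on the host will differ from the guest's vdc):

```shell
# Extended device stats, 1-second interval, 3 samples. If the guest
# "benchmark" is only reading unallocated zeroes, the host device
# columns (r/s, rkB/s) stay near zero.
if command -v iostat >/dev/null 2>&1; then
    iostat -dx 1 3
else
    echo "iostat not installed (package: sysstat)"
fi
iostat_sketch=done
```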

