Re: [Qemu-devel] [PATCH v6 5/5] qemu-iotests: Add 093 for IO throttling


From: Fam Zheng
Subject: Re: [Qemu-devel] [PATCH v6 5/5] qemu-iotests: Add 093 for IO throttling
Date: Thu, 29 Jan 2015 10:06:16 +0800
User-agent: Mutt/1.5.23 (2014-03-12)

On Thu, 01/29 08:53, Fam Zheng wrote:
> On Wed, 01/28 11:22, Max Reitz wrote:
> > On 2015-01-27 at 21:28, Fam Zheng wrote:
> > >This case utilizes qemu-io command "aio_{read,write} -q" to verify the
> > >effectiveness of IO throttling options.
> > >
> > >It's implemented by driving the vm timer via the qtest protocol, so the
> > >throttling timers fire after a determined duration of virtual time. Then we
> > >verify that the completed IO requests are within 10% of the bps and iops
> > >limits.
> > >
> > >"null" protocol is used as the disk backend so that no actual disk IO is
> > >performed on host, this will make the blockstats much more
> > >deterministic. Both "null-aio" and "null-co" are covered, which is also
> > >a simple cross validation test for the driver code.
> > >
> > >Signed-off-by: Fam Zheng <address@hidden>
> > >---
> > >  tests/qemu-iotests/093     | 120 +++++++++++++++++++++++++++++++++++++++++++++
> > >  tests/qemu-iotests/093.out |   5 ++
> > >  tests/qemu-iotests/group   |   1 +
> > >  3 files changed, 126 insertions(+)
> > >  create mode 100755 tests/qemu-iotests/093
> > >  create mode 100644 tests/qemu-iotests/093.out
> > >
> > >diff --git a/tests/qemu-iotests/093 b/tests/qemu-iotests/093
> > >new file mode 100755
> > >index 0000000..2866536
> > >--- /dev/null
> > >+++ b/tests/qemu-iotests/093
> > >@@ -0,0 +1,120 @@
> > >+#!/usr/bin/env python
> > >+#
> > >+# Tests for IO throttling
> > >+#
> > >+# Copyright (C) 2015 Red Hat, Inc.
> > >+#
> > >+# This program is free software; you can redistribute it and/or modify
> > >+# it under the terms of the GNU General Public License as published by
> > >+# the Free Software Foundation; either version 2 of the License, or
> > >+# (at your option) any later version.
> > >+#
> > >+# This program is distributed in the hope that it will be useful,
> > >+# but WITHOUT ANY WARRANTY; without even the implied warranty of
> > >+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> > >+# GNU General Public License for more details.
> > >+#
> > >+# You should have received a copy of the GNU General Public License
> > >+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
> > >+#
> > >+
> > >+import iotests
> > >+
> > >+class ThrottleTestCase(iotests.QMPTestCase):
> > >+    test_img = "null-aio://"
> > >+
> > >+    def blockstats(self, device):
> > >+        result = self.vm.qmp("query-blockstats")
> > >+        for r in result['return']:
> > >+            if r['device'] == device:
> > >+                stat = r['stats']
> > >+                return stat['rd_bytes'], stat['rd_operations'], stat['wr_bytes'], stat['wr_operations']
> > >+        raise Exception("Device not found for blockstats: %s" % device)
> > >+
> > >+    def setUp(self):
> > >+        self.vm = iotests.VM().add_drive(self.test_img)
> > >+        self.vm.launch()
> > >+
> > >+    def tearDown(self):
> > >+        self.vm.shutdown()
> > >+
> > >+    def do_test_throttle(self, seconds, params):
> > >+        def check_limit(limit, num):
> > >+            # IO throttling algorithm is discrete, allow 10% error so the test
> > >+            # is more robust
> > >+            return limit == 0 or \
> > >+                   (num < seconds * limit * 1.1
> > >+                   and num > seconds * limit * 0.9)
> > >+
> > >+        nsec_per_sec = 1000000000
> > >+
> > >+        params['device'] = 'drive0'
> > >+
> > >+        result = self.vm.qmp("block_set_io_throttle", conv_keys=False, **params)
> > >+        self.assert_qmp(result, 'return', {})
> > >+
> > >+        # Set vm clock to a known value
> > >+        ns = seconds * nsec_per_sec
> > >+        self.vm.qtest("clock_step %d" % ns)
> > >+
> > >+        # Submit enough requests. They will drain bps_max and iops_max, but the
> > >+        # rest requests won't get executed until we advance the virtual clock
> > >+        # with qtest interface
> > >+        rq_size = 512
> > >+        rd_nr = max(params['bps'] / rq_size / 2,
> > >+                    params['bps_rd'] / rq_size,
> > >+                    params['iops'] / 2,
> > >+                    params['iops_rd']) + \
> > >+                params['bps_max'] / rq_size / 2 + \
> > >+                params['iops_max']
> > 
> > I guess the divisions by two are because those values represent read and
> > write operations combined. Shouldn't iops_max be divided by two, too, then?
> > 
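
For illustration, the change Max is suggesting would look roughly like this
(a sketch only, not part of the patch), assuming iops_max, like iops, covers
reads and writes combined:

    # hypothetical variant of the patch's rd_nr formula with iops_max halved
    # the same way as iops, since both limits would count read and write
    # requests together
    def read_request_count(params, rq_size=512):
        return max(params['bps'] / rq_size / 2,
                   params['bps_rd'] / rq_size,
                   params['iops'] / 2,
                   params['iops_rd']) + \
               params['bps_max'] / rq_size / 2 + \
               params['iops_max'] / 2
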
> > >+        rd_nr *= seconds * 2
> > >+        wr_nr = max(params['bps'] / rq_size / 2,
> > >+                    params['bps_wr'] / rq_size,
> > >+                    params['iops'] / 2,
> > >+                    params['iops_wr']) + \
> > >+                params['bps_max'] / rq_size / 2 + \
> > >+                params['iops_max']
> > >+        wr_nr *= seconds * 2
> > >+        for i in range(rd_nr):
> > >+            self.vm.hmp_qemu_io("drive0", "aio_read %d %d" % (i * rq_size, rq_size))
> > >+        for i in range(wr_nr):
> > >+            self.vm.hmp_qemu_io("drive0", "aio_write %d %d" % (i * rq_size, rq_size))
> > >+
> > >+        start_rd_bytes, start_rd_iops, start_wr_bytes, start_wr_iops = self.blockstats('drive0')
> > >+
> > >+        self.vm.qtest("clock_step %d" % ns)
> > >+        end_rd_bytes, end_rd_iops, end_wr_bytes, end_wr_iops = self.blockstats('drive0')
> > >+
> > >+        rd_bytes = end_rd_bytes - start_rd_bytes
> > >+        rd_iops = end_rd_iops - start_rd_iops
> > >+        wr_bytes = end_wr_bytes - start_wr_bytes
> > >+        wr_iops = end_wr_iops - start_wr_iops
> > >+
> > >+        self.assertTrue(check_limit(params['bps'], rd_bytes + wr_bytes))
> > >+        self.assertTrue(check_limit(params['bps_rd'], rd_bytes))
> > >+        self.assertTrue(check_limit(params['bps_wr'], wr_bytes))
> > >+        self.assertTrue(check_limit(params['iops'], rd_iops + wr_iops))
> > >+        self.assertTrue(check_limit(params['iops_rd'], rd_iops))
> > >+        self.assertTrue(check_limit(params['iops_wr'], wr_iops))
> > 
> > Hm, you're not checking bps_max and iops_max here. Should you be?
> 
> I never really liked these two parameters, but now that you asked, probably
> yes (to this question and above). :)
> 
> Fam
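
For context, a check for the burst limits could look roughly like this (a
sketch only, not what the patch does), assuming bps_max/iops_max describe a
burst allowance drained on top of one interval of the average rate:

    # hypothetical helper: right after enabling throttling, at most the burst
    # allowance plus one interval of the average rate should have completed
    def check_burst_limit(limit_max, limit_avg, num, interval):
        if limit_max == 0:
            return True
        allowance = limit_max + limit_avg * interval
        return num < allowance * 1.1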

OK, I messed around with *_max here for some time and I'm giving up:

/* fix bucket parameters */
static void throttle_fix_bucket(LeakyBucket *bkt)
{
    double min;

    /* zero bucket level */
    bkt->level = 0;

    /* The following is done to cope with the Linux CFQ block scheduler
     * which regroups reads and writes by blocks of 100ms in the guest.
     * When there are two processes, one making reads and one making writes,
     * cfq makes a pattern looking like the following:
     * WWWWWWWWWWWRRRRRRRRRRRRRRWWWWWWWWWWWWWwRRRRRRRRRRRRRRRRR
     * Having a max burst value of 100ms of the average will help smooth the
     * throttling.
     */
    min = bkt->avg / 10;
    if (bkt->avg && !bkt->max) {
        bkt->max = min;
    }
}
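
In concrete numbers (a Python sketch of the same defaulting rule, for
illustration only): with bps=1000000 and bps_max left at 0, the bucket max
silently becomes 100000, i.e. a burst worth 100ms of the average rate:

    # illustrative re-statement of throttle_fix_bucket's defaulting rule
    def fix_bucket_max(avg, max_):
        # an average rate with no explicit burst value gets a default burst
        # of 100ms worth of the average
        if avg and not max_:
            max_ = avg / 10.0
        return max_

    print(fix_bucket_max(1000000, 0))   # -> 100000.0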

This is some magic that cannot be tested.  So are you happy with this version?

Fam


