
From: Kevin Wolf
Subject: Re: [Qemu-devel] [Qemu-block] [PATCH 6/7] qemu-iotests: 141: reduce likelihood of race condition on systems with fast IO
Date: Fri, 8 Apr 2016 14:31:15 +0200
User-agent: Mutt/1.5.21 (2010-09-15)

On 08.04.2016 at 14:01, Sascha Silbe wrote:
> Dear Max,
> Sascha Silbe <address@hidden> writes:
> > @Max: From a cursory glance at the code, maybe your 1 *byte* per second
> > rate limit is being rounded down to 0 *blocks* per second, with 0
> > meaning no limit? See e.g. mirror_set_speed(). Though I must admit I
> > don't understand how speed=0 translates to unlimited (like
> > qapi/block-core.json:block-job-set-speed says). My understanding of
> > ratelimit_calculate_delay() is that speed=0 means "1 quantum per time
> > slice", with time slice usually being 100ms; not sure about the
> > quantum.
> I think I've understood the issue now.
> The backup, commit, mirror and stream actions operate on full chunks,
> with the chunk size depending on the action and backing device. For
> e.g. commit that means it always bursts at least 0.5 MiB; that's where
> the value in the reference output comes from.
> ratelimit_calculate_delay() lets through at least one burst per time
> slice. This means the minimum rate is chunk size per time slice (always
> 100ms). So for commit and stream one will always get at least 5 MiB/s.
> That's a surprisingly large floor for something specified in bytes per
> second, BTW (i.e. it should probably be documented in qmp-commands.hx
> if it stays this way).
> On a busy or slow host, it may take the shell longer than the time slice
> of 100ms to send the cancel command to qemu. When that happens,
> additional chunks will get written before the job gets cancelled. That's
> why I sometimes see 1 or even 1.5 MiB as offset, especially when running
> CPU intensive workloads in parallel.
> The best approach probably would be to fix up the rate limit code to
> delay for multiple time slices if necessary. We should get rid of the
> artificial BDRV_SECTOR_SIZE granularity at the same time, always using
> bytes as the quantum unit.

In the 2.7 time frame we might actually be able to reuse the normal I/O
throttling code for block jobs as the jobs will be using their own
BlockBackend and can therefore set their own throttling limits.
