From: Max Reitz
Subject: Re: [Qemu-block] [PATCH 1/1] Improve block job rate limiting for small bandwidth values
Date: Tue, 5 Jul 2016 20:27:46 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Thunderbird/45.1.1
On 05.07.2016 20:06, Sascha Silbe wrote:
> Dear Max,
>
> Max Reitz <address@hidden> writes:
>
>> [ Good signature by key: 0x58B381CE2DC89CF99730EE643BB14202E838ACAD ]
>
> Feel free to drop by if you happen to be in the Stuttgart area some
> time. PGP key signing, a beverage of your choice and optionally some
> chatting about qemu and related topics. :)
Happens rarely, but does happen. The Red Hat office I'm associated with
actually is in Stuttgart, but most of the time I live (and work) 500 km
away from it.
>> On 04.07.2016 16:30, Sascha Silbe wrote:
>>> Max Reitz <address@hidden> writes:
> [...]
> [include/qemu/ratelimit.h]
>>>>>  static inline int64_t ratelimit_calculate_delay(RateLimit *limit,
>>>>>                                                  uint64_t n)
>>>>>  {
>>>>>      int64_t now = qemu_clock_get_ns(QEMU_CLOCK_REALTIME);
>>>>> +    uint64_t delay_slices;
>>>>>
>>>>> -    if (limit->next_slice_time < now) {
>>>>> -        limit->next_slice_time = now + limit->slice_ns;
>>>>> +    assert(limit->slice_quota && limit->slice_ns);
>>>>> +
>>>>> +    if (limit->slice_end_time < now) {
>>>>> +        /* Previous, possibly extended, time slice finished; reset the
>>>>> +         * accounting. */
>>>>> +        limit->slice_start_time = now;
>>>>> +        limit->slice_end_time = now + limit->slice_ns;
>>>>>          limit->dispatched = 0;
>>>>>      }
>>>>> -    if (limit->dispatched == 0 || limit->dispatched + n <= limit->slice_quota) {
>>>>> -        limit->dispatched += n;
>>>>> +
>>>>> +    limit->dispatched += n;
>>>>> +    if (limit->dispatched < limit->slice_quota) {
>>>>
>>>> Nitpick: This should probably stay <=.
>>>
>>> This is a subtle edge case. Previously, when limit->dispatched ==
>>> limit->slice_quota, we returned 0 so that the _current_ request (which
>>> is still within quota) wouldn't be delayed. Now, we return a delay so
>>> that the _next_ request (which would be over quota) will be delayed.
>>
>> Hm, but that depends on the size of the next request. Of course, if we
>> get limit->dispatched == limit->slice_quota we know for sure that we
>> need to delay the next request. But if we get limit->dispatched ==
>> limit->slice_quota - 1... Then we probably also have to delay it, but we
>> don't know for sure.
>
> No matter where exactly we draw the line, due to the way the block job
> rate limiting works (fixed-size time slices, fixed-size requests) there
> will always be cases where we're off the target rate quite a bit, in
> one direction or the other.
>
> For rate limits where we can send an integer number of chunks per time
> slice (i.e. some MiB/s sized value), the "<" condition is probably
> better. We'll send out a couple of chunks that exactly match the quota,
> then sleep for the rest of the time slice. If we'd use "<=", we'd send
> out one extra chunk before we start sleeping.
>
> But I don't care much either way, "<=" is fine with me, too.
Well, and above all, we'll hopefully replace all of this in 2.8 anyway.
Max