Re: [Qemu-devel] [PATCH v1 1/1] Submit the codes for QEMU disk I/O limits.


From: Zhi Yong Wu
Subject: Re: [Qemu-devel] [PATCH v1 1/1] Submit the codes for QEMU disk I/O limits.
Date: Mon, 25 Jul 2011 14:55:12 +0800

On Mon, Jul 25, 2011 at 1:40 PM, Stefan Hajnoczi <address@hidden> wrote:
> On Mon, Jul 25, 2011 at 5:25 AM, Zhi Yong Wu <address@hidden> wrote:
>> On Fri, Jul 22, 2011 at 6:54 PM, Stefan Hajnoczi <address@hidden> wrote:
>>> On Fri, Jul 22, 2011 at 10:20 AM, Zhi Yong Wu <address@hidden> wrote:
>>>> +static void bdrv_block_timer(void *opaque)
>>>> +{
>>>> +    BlockDriverState *bs = opaque;
>>>> +    BlockQueue *queue = bs->block_queue;
>>>> +    uint64_t intval = 1;
>>>> +
>>>> +    while (!QTAILQ_EMPTY(&queue->requests)) {
>>>> +        BlockIORequest *request;
>>>> +        int ret;
>>>> +
>>>> +        request = QTAILQ_FIRST(&queue->requests);
>>>> +        QTAILQ_REMOVE(&queue->requests, request, entry);
>>>> +
>>>> +        ret = queue->handler(request);
>>>
>>> Please remove the function pointer and call qemu_block_queue_handler()
>>> directly.  This indirection is not needed and makes it harder to
>>> follow the code.
>> Should it keep the same style as other queue implementations, such as
>> the network queue? As you know, the network queue has a queue handler.
>
> You mean net/queue.c:queue->deliver?  There are two deliver functions,
Yeah.
> qemu_deliver_packet() and qemu_vlan_deliver_packet(), which is why a
> function pointer is necessary.
OK, the handler has been removed and is now invoked directly.
>
> In this case there is only one handler function so the indirection is
> not necessary.
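
For reference, a minimal sketch of the dequeue loop with the indirection
removed; names are as in the patch, and the exact signature of
qemu_block_queue_handler() is assumed here:

    static void bdrv_block_timer(void *opaque)
    {
        BlockDriverState *bs = opaque;
        BlockQueue *queue = bs->block_queue;

        while (!QTAILQ_EMPTY(&queue->requests)) {
            BlockIORequest *request = QTAILQ_FIRST(&queue->requests);
            QTAILQ_REMOVE(&queue->requests, request, entry);

            /* Call the single handler directly instead of queue->handler() */
            if (qemu_block_queue_handler(request) == 0) {
                /* Still over the limit: requeue and wait for the timer */
                QTAILQ_INSERT_HEAD(&queue->requests, request, entry);
                break;
            }

            qemu_free(request);
        }
    }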
>
>>>
>>>> +        if (ret == 0) {
>>>> +            QTAILQ_INSERT_HEAD(&queue->requests, request, entry);
>>>> +            break;
>>>> +        }
>>>> +
>>>> +        qemu_free(request);
>>>> +    }
>>>> +
>>>> +        qemu_mod_timer(bs->block_timer, (intval * 1000000000) +
>>>> +                       qemu_get_clock_ns(vm_clock));
>>>
>>> intval is always 1.  The timer should be set to the next earliest deadline.
>> Please see bdrv_aio_readv/writev:
>> +    /* throttling disk read I/O */
>> +    if (bs->io_limits != NULL) {
>> +        if (bdrv_exceed_io_limits(bs, nb_sectors, false, &wait_time)) {
>> +            ret = qemu_block_queue_enqueue(bs->block_queue, bs,
>> +                                           bdrv_aio_readv, sector_num,
>> +                                           qiov, nb_sectors, cb, opaque);
>> +            qemu_mod_timer(bs->block_timer, wait_time +
>> +                           qemu_get_clock_ns(vm_clock));
>>
>> The timer is already re-armed when the block request is enqueued.
>
> The current algorithm seems to be:
> 1. Queue requests that exceed the limit and set the timer.
> 2. Dequeue all requests when the timer fires.
Yeah, but each dequeued request is still checked against the current I/O
rate limits; if it still exceeds them, it is enqueued into the block
queue again.
> 3. Set 1s periodic timer.
>
> Why is the timer set up as a periodic 1 second timer in bdrv_block_timer()?
I was also considering whether we need to set up this type of timer.
Since the timer is already re-armed after the qemu_block_queue_enqueue()
call, it should not be modified here. I will remove this line of code.
Thanks.
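
To illustrate the retry path: when the timer fires, each dequeued
request goes through the limit check again, and a request that still
exceeds the limit re-arms the timer for its new deadline and is
requeued. A rough sketch, with the BlockIORequest field names assumed:

    static int qemu_block_queue_handler(BlockIORequest *request)
    {
        int64_t wait_time;

        /* Re-check the limits for the retried request */
        if (bdrv_exceed_io_limits(request->bs, request->nb_sectors,
                                  request->is_write, &wait_time)) {
            /* Still over the limit: re-arm the timer for the new
             * deadline and tell bdrv_block_timer() to requeue */
            qemu_mod_timer(request->bs->block_timer,
                           wait_time + qemu_get_clock_ns(vm_clock));
            return 0;
        }

        /* Under the limit: reissue the original request through the
         * bdrv_aio_readv/writev pointer stored at enqueue time */
        request->handler(request->bs, request->sector_num, request->qiov,
                         request->nb_sectors, request->cb, request->opaque);
        return 1;
    }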

>
>>>> +        bs->slice_start[is_write] = real_time;
>>>> +
>>>> +        bs->io_disps->bytes[is_write] = 0;
>>>> +        if (bs->io_limits->bps[2]) {
>>>> +            bs->io_disps->bytes[!is_write] = 0;
>>>> +        }
>>>
>>> All previous data should be discarded when a new time slice starts:
>>> bs->io_disps->bytes[IO_LIMIT_READ] = 0;
>>> bs->io_disps->bytes[IO_LIMIT_WRITE] = 0;
>> If only bps_rd is specified, bs->io_disps->bytes[IO_LIMIT_WRITE] will
>> never be used; I think it should not be cleared here, right?
>
> I think there is no advantage in keeping slices separate for
> read/write.  The code would be simpler and work the same if there is
> only one slice and past history is cleared when a new slice starts.
Done
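
The resulting reset then looks roughly like this, with slice_start
collapsed to a single value as suggested; BLOCK_IO_SLICE_TIME stands in
for whatever slice length the patch ends up using:

    /* When the current slice has expired, start a new one and discard
     * all past history for both directions */
    if (real_time > bs->slice_start + BLOCK_IO_SLICE_TIME) {
        bs->slice_start = real_time;
        bs->io_disps->bytes[IO_LIMIT_READ]  = 0;
        bs->io_disps->bytes[IO_LIMIT_WRITE] = 0;
    }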
>
> Stefan
>



-- 
Regards,

Zhi Yong Wu


