Re: [Qemu-devel] [PATCH v8 1/4] block: add the block queue support


From: Kevin Wolf
Subject: Re: [Qemu-devel] [PATCH v8 1/4] block: add the block queue support
Date: Tue, 18 Oct 2011 11:56:01 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:7.0) Gecko/20110927 Thunderbird/7.0

On 18.10.2011 11:29, Zhi Yong Wu wrote:
>>>>>>> +void qemu_del_block_queue(BlockQueue *queue)
>>>>>>> +{
>>>>>>> +    BlockQueueAIOCB *request, *next;
>>>>>>> +
>>>>>>> +    QTAILQ_FOREACH_SAFE(request, &queue->requests, entry, next) {
>>>>>>> +        QTAILQ_REMOVE(&queue->requests, request, entry);
>>>>>>> +        qemu_aio_release(request);
>>>>>>> +    }
>>>>>>> +
>>>>>>> +    g_free(queue);
>>>>>>> +}
>>>>>>
>>>>>> Can we be sure that no AIO requests are in flight that still use the now
>>>>>> released AIOCB?
>>>>> Yeah. Since the qemu core code runs serially, I think that by the
>>>>> time qemu_del_block_queue() runs, no requests are in flight. Right?
>>>>
>>>> Patch 2 has this code:
>>>>
>>>> +void bdrv_io_limits_disable(BlockDriverState *bs)
>>>> +{
>>>> +    bs->io_limits_enabled = false;
>>>> +
>>>> +    if (bs->block_queue) {
>>>> +        qemu_block_queue_flush(bs->block_queue);
>>>> +        qemu_del_block_queue(bs->block_queue);
>>>> +        bs->block_queue = NULL;
>>>> +    }
>>>>
>>>> Does this mean that you can't disable I/O limits while the VM is running?
>>> No, you can disable them even while the VM is running.
>>
>> Okay, in general qemu_block_queue_flush() empties the queue so that
>> there are no requests left that qemu_del_block_queue() could drop from
>> the queue. So in the common case it doesn't even enter the FOREACH loop.
> I think we should use !QTAILQ_EMPTY(&queue->requests) instead of
> QTAILQ_FOREACH_SAFE in qemu_del_block_queue(), right?

I think QTAILQ_FOREACH_SAFE is fine.
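The reason QTAILQ_FOREACH_SAFE is fine is that it caches the next pointer before the loop body runs, so the body may remove and free the current element without breaking the iteration. A minimal self-contained sketch of that same pattern on a hand-rolled singly linked list (the real QTAILQ macros live in QEMU's qemu-queue.h; the names below are invented for illustration):

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

typedef struct Node {
    struct Node *next;
} Node;

/* Remove and free every element. Saving 'next' before freeing 'cur' is
 * exactly what the _SAFE variants of the FOREACH macros do for you. */
static int drain(Node **head)
{
    int freed = 0;
    Node *cur = *head;
    while (cur) {
        Node *next = cur->next;   /* saved before cur is freed */
        free(cur);
        freed++;
        cur = next;
    }
    *head = NULL;
    return freed;
}

/* Build a three-element list and drain it. */
static int demo_drain(void)
{
    Node *head = NULL;
    for (int i = 0; i < 3; i++) {
        Node *n = malloc(sizeof(*n));
        n->next = head;
        head = n;
    }
    return drain(&head);
}
```

A plain `!QTAILQ_EMPTY()` loop would work too, but the FOREACH_SAFE form expresses the remove-while-iterating intent directly.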

>>
>> I think problems start when requests have failed or exceeded the limit
>> again, then you have requests queued even after
>> qemu_block_queue_flush(). You must be aware of this, otherwise the code
>> in qemu_del_block_queue() wouldn't exist.
>>
>> But you can't release the ACBs without having called their callback,
>> otherwise the caller would still assume that its ACB pointer is valid.
>> Maybe calling the callback before releasing the ACB would be enough.
> Good, thanks.
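The suggestion above — complete each still-queued request through its callback before releasing the ACB — could look roughly like the following. This is a sketch only, using simplified stand-ins (MockAIOCB, MockQueue, mock_aio_release are all invented names) for the real structures in block/blk-queue.c:

```c
#include <assert.h>
#include <stddef.h>

typedef void BlockCompletionFunc(void *opaque, int ret);

/* Simplified stand-in for BlockQueueAIOCB. */
typedef struct MockAIOCB {
    BlockCompletionFunc *cb;     /* caller's completion callback */
    void *opaque;                /* caller's cookie */
    struct MockAIOCB *next;      /* singly linked queue for the sketch */
} MockAIOCB;

typedef struct {
    MockAIOCB *head;
} MockQueue;

static int released;             /* counts released ACBs for the sketch */

static void mock_aio_release(MockAIOCB *acb)
{
    (void)acb;
    released++;
}

/* On queue deletion, fail every still-queued request through its callback
 * (so the caller learns the request is gone and stops using its ACB
 * pointer) *before* releasing the ACB. */
static void mock_del_block_queue(MockQueue *queue)
{
    MockAIOCB *req = queue->head;
    while (req) {
        MockAIOCB *next = req->next;      /* save before releasing */
        req->cb(req->opaque, -1);         /* e.g. -EIO */
        mock_aio_release(req);
        req = next;
    }
    queue->head = NULL;
}

/* Tiny driver: queue one request, delete the queue, report what the
 * callback observed. */
static int cb_ret = 12345;
static void note_cb(void *opaque, int ret)
{
    (void)opaque;
    cb_ret = ret;
}

static int demo(void)
{
    MockAIOCB acb = { note_cb, NULL, NULL };
    MockQueue q = { &acb };
    mock_del_block_queue(&q);
    return cb_ret;
}
```

The key ordering is callback first, release second; the reverse would hand the caller a dangling ACB.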
>>>>
>>>>>>> +            }
>>>>>>> +        }
>>>>>>> +    }
>>>>>>> +
>>>>>>> +    queue->req_failed = true;
>>>>>>> +    queue->flushing   = false;
>>>>>>> +}
>>>>>>> +
>>>>>>> +bool qemu_block_queue_has_pending(BlockQueue *queue)
>>>>>>> +{
>>>>>>> +    return !queue->flushing && !QTAILQ_EMPTY(&queue->requests);
>>>>>>> +}
>>>>>>
>>>>>> Why doesn't the queue have pending requests in the middle of a flush
>>>>>> operation? (That is, the flush hasn't completed yet)
>>>>> It is possible for the queue to have pending requests. if yes, how about?
>>>>
>>>> Sorry, can't parse this.
>>>>
>>>> I don't understand why the !queue->flushing part is correct.
>>
>> What about this?
> When bdrv_aio_readv/writev handles a request, it checks whether the
> block queue is non-NULL and not currently being flushed; if so, it
> assumes the request is a new one from the upper layer, so it does not
> check whether the runtime I/O rate has exceeded the limits, but
> inserts the request into the block queue immediately.

Hm, I think I understand what you're saying, but only after looking at
patch 3. This is not really implementing a has_pending(), but
has_pending_and_caller_wasnt_called_during_flush(). I think it would be
better to handle the queue->flushing condition in the caller where its
use is more obvious.
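Moving the queue->flushing test into the caller might look something like this sketch. The types are simplified stand-ins (MockQueue and must_enqueue are invented names), not the real block/blk-queue.c structures:

```c
#include <assert.h>
#include <stdbool.h>

typedef struct {
    bool flushing;       /* stand-in for queue->flushing */
    int  nr_requests;    /* stand-in for !QTAILQ_EMPTY(&queue->requests) */
} MockQueue;

/* has_pending() now answers exactly what its name says. */
static bool mock_queue_has_pending(const MockQueue *q)
{
    return q->nr_requests > 0;
}

/* Caller-side decision (e.g. in bdrv_aio_readv/writev): a new request
 * must be queued behind earlier ones, unless the queue is currently
 * being flushed, in which case the request being handled is itself one
 * of the queued ones being re-submitted. */
static bool must_enqueue(const MockQueue *q)
{
    return !q->flushing && mock_queue_has_pending(q);
}
```

This keeps the surprising flushing special-case visible at the one call site that needs it, rather than hiding it inside a predicate whose name promises something simpler.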

Kevin
