Re: [Qemu-devel] [RFC PATCH 00/17] Support for multiple "AIO contexts"
From: Kevin Wolf
Subject: Re: [Qemu-devel] [RFC PATCH 00/17] Support for multiple "AIO contexts"
Date: Wed, 26 Sep 2012 16:31:03 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:13.0) Gecko/20120605 Thunderbird/13.0
On 26.09.2012 15:32, Paolo Bonzini wrote:
> On 26/09/2012 14:28, Kevin Wolf wrote:
>> Do you have a git tree where I could see what things would look like in
>> the end?
>
> I will push it to aio-context on git://github.com/bonzini/qemu.git as
> soon as github comes back.
>
>> I wonder how this relates to my plans of getting rid of qemu_aio_flush()
>> and friends in favour of BlockDriver.bdrv_drain().
>
> Mostly unrelated, I think. The introduction of the non-blocking
> aio_poll in this series might help implement bdrv_drain, like this:
>
> blocking = false;
> while (bs has requests) {
>     progress = aio_poll(aio context of bs, blocking);
>     if (progress) {
>         blocking = false;
>         continue;
>     }
>     if (bs has throttled requests) {
>         restart throttled requests
>         blocking = false;
>         continue;
>     }
>
>     /* No progress, must have been non-blocking. We must wait. */
>     assert(!blocking);
>     blocking = true;
> }
Yes, possibly.
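Spelled out in C, I'd imagine something roughly like the sketch below;
bdrv_get_aio_context() and the bs_has_*()/restart helpers are made-up
placeholders, the only real assumption being that aio_poll() returns
true iff it made progress:

    /* Sketch only: bdrv_get_aio_context() and the bs_has_*()/restart
     * helpers are placeholders here, not existing functions. */
    static void bdrv_drain(BlockDriverState *bs)
    {
        AioContext *ctx = bdrv_get_aio_context(bs);
        bool blocking = false;

        while (bs_has_requests(bs)) {
            if (aio_poll(ctx, blocking)) {   /* true iff progress */
                blocking = false;
                continue;
            }
            if (bs_has_throttled_requests(bs)) {
                bdrv_restart_throttled_requests(bs);
                blocking = false;
                continue;
            }

            /* No progress, must have been non-blocking. We must wait. */
            assert(!blocking);
            blocking = true;
        }
    }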
> BTW, is it true that "bs->file has requests || bs->backing_hd has
> requests" (or any other underlying file, like vmdk extents) implies "bs
> has requests"?
I think each block driver is responsible for draining the requests that
it sent. This means it will drain bs->file (because no one else should
directly go there) and in most cases also bs->backing_hd; but if, for
example, live commit has a request in flight that directly accesses the
backing file, I wouldn't expect a block driver to be required to wait
for the completion of that request.
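As a rough sketch of that split (the .bdrv_drain callback shape,
BDRVFooState and its in_flight counter are all made up; only
qemu_aio_wait() is real):

    /* Made-up format driver.  It only waits for requests it issued
     * itself: that covers bs->file (no one else goes there directly)
     * and usually bs->backing_hd, but not a request that live commit
     * sent to the backing file on its own. */
    static void foo_drain(BlockDriverState *bs)
    {
        BDRVFooState *s = bs->opaque;

        while (s->in_flight > 0) {
            qemu_aio_wait();    /* run AIO fd handlers and bottom halves */
        }
    }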
>> In fact, after removing io_flush, I don't really see what makes AIO
>> fd handlers special any more.
>
> Note that while the handlers aren't that special indeed, there is still
> some magic, because qemu_aio_wait() processes bottom halves.
Do you mean the qemu_bh_poll() call? But the normal main loop does the
same, so I don't see what would be special about it.
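To spell out the parallel I mean, both entry points have roughly this
shape (simplified from memory; the dispatch_*() helpers are made-up
stand-ins for the respective select() loops):

    static bool qemu_aio_wait_shape(void)
    {
        bool progress = qemu_bh_poll();         /* bottom halves first */
        progress |= dispatch_aio_fd_handlers(); /* only AIO fd handlers */
        return progress;
    }

    static void main_loop_iteration_shape(void)
    {
        qemu_bh_poll();                 /* the same bottom half polling */
        dispatch_all_fd_handlers();     /* but every registered fd */
    }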
>> qemu_aio_wait() only calls these handlers, but would it do any harm if
>> we called all fd handlers?
>
> Unfortunately yes. You could get re-entrant calls from the monitor
> while a monitor command drains the AIO queue, for example.
Hm, that's true... Who's special here - is it the block layer or the
monitor? I'm not quite sure. If it's the monitor, maybe we should plan
to change that sometime when we have some spare time... ;-)
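To spell out the re-entrancy hazard Paolo means with a made-up example
(only qemu_aio_wait() is real here):

    /* If the wait below dispatched *all* fd handlers instead of only
     * the AIO ones, input arriving on the monitor's fd could start a
     * second command while this one is still draining. */
    static void hmp_commit_sketch(Monitor *mon)
    {
        while (commit_has_requests_in_flight()) {
            qemu_aio_wait();
        }
        finish_commit(mon);
    }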
Kevin