Re: [Qemu-devel] [RFC] block: Removed coroutine ownership assumption

From: Peter Crosthwaite
Subject: Re: [Qemu-devel] [RFC] block: Removed coroutine ownership assumption
Date: Mon, 2 Jul 2012 19:42:49 +1000

On Mon, Jul 2, 2012 at 7:04 PM, Kevin Wolf <address@hidden> wrote:
> Am 02.07.2012 10:57, schrieb Peter Crosthwaite:
>> On Mon, Jul 2, 2012 at 6:50 PM, Stefan Hajnoczi <address@hidden> wrote:
>>> On Fri, Jun 29, 2012 at 12:51 PM, Peter Crosthwaite
>>> <address@hidden> wrote:
>>>> BTW yielding is one thing, but the elephant in the room here is
>>>> resumption of the coroutine. When AIO yields my coroutine, I need to
>>>> talk to AIO to get it unyielded (Stefan's proposed edit to my code).
>>>> What happens when, tomorrow, something in QOM or a device model is
>>>> implemented with coroutines too? How do I know who yielded my routines
>>>> and what API to call to re-enter them?
>>> Going back to what Kevin said, the qemu_aio_wait() isn't block layer
>>> specific and there will never be a requirement to call any other magic
>>> functions.
>>> QEMU is event-driven and you must pump events.  If you call into
>>> another subsystem, be prepared to pump events so that I/O can make
>>> progress.  It's an assumption that is so central to QEMU architecture
>>> that I don't see it as a problem.
>>> Once the main loop is running then the event loop is taken care of for
>>> you.  But during startup you're in a different environment and need to
>>> pump events yourself.
>>> If we drop bdrv_read()/bdrv_write() this won't change.  You'll have to
>>> call bdrv_co_readv()/bdrv_co_writev() and pump events.
>> Just tracing bdrv_aio_read(): it bypasses the fast-path logic, so
>> converting pflash to bdrv_aio_readv is a possible solution here.
>> bdrv_aio_read -> bdrv_co_aio_rw_vector:
>> static BlockDriverAIOCB *bdrv_co_aio_rw_vector (..) {
>>     Coroutine *co;
>>     BlockDriverAIOCBCoroutine *acb;
>>     ...
>>     co = qemu_coroutine_create(bdrv_co_do_rw);
>>     qemu_coroutine_enter(co, acb);
>>     return &acb->common;
>> }
>> No conditional on the qemu_coroutine_create. So it will always create
>> a new coroutine for its work, which will solve my problem. All I need
>> to do is pump events once at the end of machine model creation, and my
>> coroutines will never yield or get queued by block/AIO. Sound like a
>> solution?
> If you don't need the read data in your initialisation code,

Definitely not :) Just as long as the read data is there by the time
the machine goes live. What's the current policy on bdrv_read()ing
from init functions anyway? Several devices in QEMU have init
functions that read the entire storage into a buffer (the guest then
talks to the buffer rather than the backing store).

Pflash (pflash_cfi01.c) is the device causing me interference here,
and it works exactly like this. If we make the bdrv_read() AIO,
though, how do we ensure it has completed before the guest talks to
the device? Will this just happen at the end of machine_init anyway?
Could we put a one-liner in the machine init framework that pumps all
AIO events, then mass-convert all these bdrv_read()s (in init
functions) to bdrv_aio_read() with a nop completion callback?

then yes,
> that would work. bdrv_aio_* will always create a new coroutine. I just
> assumed that you wanted to use the data right away, and then using the
> AIO functions wouldn't have made much sense.

You'd get a small performance increase, no? Your machine init
continues on while the I/O happens, rather than being synchronous, so
there is motivation beyond my situation.


> Kevin
