Re: [Qemu-devel] [PATCH v2] block: Fix bdrv_drain in coroutine

From: Fam Zheng
Subject: Re: [Qemu-devel] [PATCH v2] block: Fix bdrv_drain in coroutine
Date: Tue, 5 Apr 2016 19:15:14 +0800
User-agent: Mutt/1.5.21 (2010-09-15)

On Tue, 04/05 10:39, Stefan Hajnoczi wrote:
> > > block/mirror.c should call bdrv_co_drain() explicitly and bdrv_drain()
> > > should assert(!qemu_in_coroutine()).
> > > 
> > > The reason why existing bdrv_read() and friends detect coroutine context
> > > at runtime is because it is meant for legacy code that runs in both
> > > coroutine and non-coroutine contexts.
> > 
> > blk_unref/bdrv_unref are called in coroutine context (.bdrv_create
> > implementations), and that doesn't work with the assertion. Should I clean
> > up this "legacy" code first, i.e. move the bdrv_unref calls to BHs in the
> > callers and assert(!qemu_in_coroutine()) there too? I avoided doing that
> > because it complicates the code somewhat.
> This is a messy problem.
> In general I don't like introducing yields into non-coroutine_fn
> functions because it can lead to bugs when the caller didn't expect a
> yield point.
> For example, I myself wouldn't have expected bdrv_unref() to be a yield
> point.  So maybe coroutine code I've written would be vulnerable to
> interference (I won't call it a race condition) from another coroutine
> across the bdrv_unref() call.  This could mean that another coroutine
> now sees intermediate state that would never be visible without the new
> yield point.
> I think attempting to invoke qemu_co_queue_run_restart() directly
> instead of scheduling a BH and yielding does not improve the situation.
> It's also a layering violation since qemu_co_queue_run_restart() is just
> meant for the core coroutine code and isn't a public interface.
> Anyway, let's treat bdrv_drain() as legacy code that may check if
> (qemu_in_coroutine()), but please make bdrv_co_drain() public so
> block/mirror.c can at least call it directly.

OK, will do.
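The agreed approach can be sketched as follows. This is a minimal illustration, not the real QEMU code: bdrv_co_drain() becomes a public entry point that coroutine callers such as block/mirror.c invoke directly, while bdrv_drain() remains legacy code that dispatches on qemu_in_coroutine(). Everything here except the two function names bdrv_drain/bdrv_co_drain and qemu_in_coroutine is a hypothetical stand-in (the real qemu_in_coroutine() inspects per-thread coroutine state, and the real drain paths operate on a BlockDriverState):

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Hypothetical stand-in for QEMU's qemu_in_coroutine(); the real one
 * checks thread-local coroutine state. */
static bool fake_in_coroutine;
static bool qemu_in_coroutine(void)
{
    return fake_in_coroutine;
}

/* Public coroutine version: callers already running in coroutine
 * context (e.g. block/mirror.c) call this directly. */
static const char *bdrv_co_drain(void)
{
    return "coroutine path";
}

/* Legacy entry point: detects context at runtime rather than
 * asserting, because existing callers run in both contexts. */
static const char *bdrv_drain(void)
{
    if (qemu_in_coroutine()) {
        return bdrv_co_drain();
    }
    return "synchronous path"; /* poll until in-flight requests finish */
}
```

The design point from the thread: runtime dispatch is tolerated only for legacy entry points; new coroutine code should call the _co_ variant explicitly so yield points stay visible to the caller.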

