From: Eric Blake
Subject: Re: [PATCH 05/21] block: Introduce bdrv_schedule_unref()
Date: Fri, 18 Aug 2023 11:26:28 -0500
User-agent: NeoMutt/20230517

On Fri, Aug 18, 2023 at 11:24:00AM -0500, Eric Blake wrote:
> > +++ b/block/graph-lock.c
> > @@ -163,17 +163,26 @@ void bdrv_graph_wrlock(BlockDriverState *bs)
> >  void bdrv_graph_wrunlock(void)
> >  {
> >      GLOBAL_STATE_CODE();
> > -    QEMU_LOCK_GUARD(&aio_context_list_lock);
> >      assert(qatomic_read(&has_writer));
> >  
> > +    WITH_QEMU_LOCK_GUARD(&aio_context_list_lock) {
> > +        /*
> > +         * No need for memory barriers, this works in pair with
> > +         * the slow path of rdlock() and both take the lock.
> > +         */
> > +        qatomic_store_release(&has_writer, 0);
> > +
> > +        /* Wake up all coroutine that are waiting to read the graph */
> > +        qemu_co_enter_all(&reader_queue, &aio_context_list_lock);
> 
> So if I understand coroutines correctly, this says all pending
> coroutines are now scheduled to run (or maybe they do try to run here,
> but then immediately return control back to this coroutine to await
> the right lock conditions since we are still in the block guarded by
> list_lock)...
> 
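To check my own understanding of what qemu_co_enter_all() does here, I
wrote the wait/wake pairing out in isolation.  This is only a sketch with
made-up names (lock/waiters/may_proceed stand in for
aio_context_list_lock/reader_queue/!has_writer), not the graph-lock code
itself, and the comments describe the behaviour as I understand it:

#include "qemu/osdep.h"
#include "qemu/thread.h"
#include "qemu/coroutine.h"

/* Set up once with qemu_mutex_init(&lock) and qemu_co_queue_init(&waiters). */
static QemuMutex lock;
static CoQueue waiters;
static bool may_proceed;

/* Waiter side, coroutine context (cf. the rdlock slow path): */
static void coroutine_fn waiter_co(void *opaque)
{
    qemu_mutex_lock(&lock);
    while (!may_proceed) {
        /* Drops 'lock' while asleep, re-takes it before returning. */
        qemu_co_queue_wait(&waiters, &lock);
    }
    qemu_mutex_unlock(&lock);
    /* ...and from here the coroutine keeps running until its next yield. */
}

/* Waker side, outside coroutine context (cf. bdrv_graph_wrunlock()): */
static void wake_everybody(void)
{
    qemu_mutex_lock(&lock);
    may_proceed = true;
    /*
     * Pops every queued waiter and wakes it via aio_co_wake(), dropping
     * 'lock' around each wakeup.  Waiters whose home AioContext is the
     * current one are entered on the spot; waiters belonging to other
     * contexts are only scheduled.
     */
    qemu_co_enter_all(&waiters, &lock);
    qemu_mutex_unlock(&lock);
}

So, if I have that right, it is a bit of both: readers living in this
thread do run right away (up to their next yield point), while readers
in other iothreads merely get scheduled.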
> > +    }
> > +
> >      /*
> > -     * No need for memory barriers, this works in pair with
> > -     * the slow path of rdlock() and both take the lock.
> > +     * Run any BHs that were scheduled during the wrlock section and that
> > +     * callers might expect to have finished (e.g. bdrv_unref() calls). Do this
> > +     * only after restarting coroutines so that nested event loops in BHs don't
> > +     * deadlock if their condition relies on the coroutine making progress.
> >       */
> > -    qatomic_store_release(&has_writer, 0);
> > -
> > -    /* Wake up all coroutine that are waiting to read the graph */
> > -    qemu_co_enter_all(&reader_queue, &aio_context_list_lock);
> > +    aio_bh_poll(qemu_get_aio_context());
> 
> ...and as long as the other coroutines sharing this thread don't
> actually get to make progress until the next point at which the
> current coroutine yields, and as long as our aio_bh_poll() doesn't
> also include a yield point, then we are assured that the BH has
> completed before the next yield point in our caller.
> 
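Concretely, the caller-visible contract I am assuming looks something
like this.  It is a hypothetical caller: I am guessing from the subject
line that bdrv_schedule_unref() takes the BDS whose reference should be
dropped, the NULL argument to bdrv_graph_wrlock() is just a placeholder,
and the header that declares bdrv_schedule_unref() is assumed:

#include "qemu/osdep.h"
#include "block/block_int.h"   /* assumed to declare bdrv_schedule_unref() */

static void example_graph_change(BlockDriverState *bs)
{
    bdrv_graph_wrlock(NULL);

    /*
     * Dropping the last reference directly could end up closing 'bs'
     * (and polling) while the graph is still write-locked, so ask for
     * the unref to happen in a BH instead.
     */
    bdrv_schedule_unref(bs);

    /*
     * Wakes the queued readers first, then runs aio_bh_poll(), so the
     * deferred bdrv_unref() has completed by the time this returns --
     * and we have not yielded in between.
     */
    bdrv_graph_wrunlock();
}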
> There are times (like today) when I'm still trying to wrap my mind
> around the subtle differences between true multi-threading and
> cooperative coroutines sharing a single thread via the use of yield
> points.  Coroutines are cool when they can get rid of some of the
> races that you have to worry about in true multi-threading.

That said, once we introduce multi-queue, can we ever have a scenario
where a different iothread might be trying to access the graph and
perform a reopen during the window where this thread has not yet
completed the BH-deferred close?  Or is that protected by some other
mutual exclusion (where the only one we have to worry about is a reopen
by a coroutine in the same thread, because all other threads are locked
out of graph modifications)?

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.
Virtualization:  qemu.org | libguestfs.org
