Re: [PATCH] util/async: make bh_aio_poll() O(1)

From: Paolo Bonzini
Subject: Re: [PATCH] util/async: make bh_aio_poll() O(1)
Date: Wed, 19 Feb 2020 20:05:12 +0100

On Wed, Feb 19, 2020 at 18:58, Stefan Hajnoczi <address@hidden> wrote:
On Wed, Feb 19, 2020 at 12:09:48PM +0100, Paolo Bonzini wrote:
> Really a great idea, though I have some remarks on the implementation below.
> On 19/02/20 11:00, Stefan Hajnoczi wrote:
> > + * Each aio_bh_poll() call carves off a slice of the BH list.  This way newly
> > + * scheduled BHs are not processed until the next aio_bh_poll() call.  This
> > + * concept extends to nested aio_bh_poll() calls because slices are chained
> > + * together.
> This is the tricky part so I would expand a bit on why it's needed:
> /*
>  * Each aio_bh_poll() call carves off a slice of the BH list, so that
>  * newly scheduled BHs are not processed until the next aio_bh_poll()
>  * call.  All active aio_bh_poll() calls chain their slices together
>  * in a list, so that nested aio_bh_poll() calls process all scheduled
>  * bottom halves.
>  */

Thanks, will fix in v2.

> > +struct BHListSlice {
> > +    QEMUBH *first_bh;
> > +    BHListSlice *next;
> > +};
> > +
> Using QLIST and QSLIST removes the need to create your own lists:
> struct BHListSlice {
>     QSLIST_HEAD(, QEMUBH) first_bh;
>     QLIST_ENTRY(BHListSlice) next;
> };
> ...
>     QSLIST_HEAD(, QEMUBH) active_bh;
>     QLIST_HEAD(, BHListSlice) bh_list;

I thought about this but chose the explicit tail pointer approach
because it lets aio_compute_timeout() and aio_ctx_check() iterate over
both the active BH list and slices in a single for loop :).  But
thinking about it more, maybe it can still be done by replacing
active_bh with a permanently present first BHListSlice element.

Probably not so easy since the idea was to empty the slices list entirely via the FIFO order.

But since you are rewriting everything anyway, can you add a flag for whether there are any non-idle bottom halves anywhere in the list? It need not be computed perfectly, because any non-idle bh causes the idle bottom halves to be triggered as well; you can just set it in qemu_bh_schedule and clear it at the end of aio_bh_poll.

Then, if there is any active bh or slice, you consult the flag and use it to return the timeout, which will be either 0 or 10 ms depending on the flag. That's truly O(1). (More precisely, this patch goes from O(#created-bh) to O(#scheduled-bh), which is optimal for aio_bh_poll but not for aio_compute_timeout; making the timeout computation O(1) can lower latency a bit by decreasing the constant factor.)


> Related to this, in the aio_bh_poll() loop:
>     for (s = ctx->bh_list.next; s; s = s->next) {
>     }
> You can actually do the removal inside the loop.  This is slightly more
> efficient since you can remove slices early from the nested
> aio_bh_poll().  Not that it's relevant for performance, but I think the
> FIFO order for slices is also more intuitive than LIFO.
> Putting this idea together with the QLIST one, you would get:
>     /*
>      * If a bottom half causes a recursive call, this slice will be
>      * removed by the nested aio_bh_poll().
>      */
>     QSLIST_MOVE_ATOMIC(&slice.first_bh, &ctx->active_bh);
>     QLIST_INSERT_TAIL(&ctx->bh_list, &slice, next);
>     while ((s = QLIST_FIRST(&ctx->bh_list))) {
>         while ((bh = aio_bh_dequeue(&s->first_bh, &flags))) {
>         }
>         QLIST_REMOVE(s, next);
>     }

Cool, reusing "queue.h" is nice.

> >  /* Multiple occurrences of aio_bh_poll cannot be called concurrently.
> >   * The count in ctx->list_lock is incremented before the call, and is
> >   * not affected by the call.
> The second sentence is now stale.

Thanks, will fix in v2.

