
Re: [Qemu-devel] [PATCH] [RFC] aio/async: Add timed bottom-halves


From: Alex Bligh
Subject: Re: [Qemu-devel] [PATCH] [RFC] aio/async: Add timed bottom-halves
Date: Tue, 16 Jul 2013 16:29:53 +0100

Paolo,

--On 16 July 2013 09:34:20 +0200 Paolo Bonzini <address@hidden> wrote:

> You did.  But aio_wait() ignores the timeout.  It is only used by the
> main loop.

>> OK well that seems worth fixing in any case, as even without timed bh's
>> that means no bh can be executed for an indeterminate time. I'll have
>> a look at that.

> No, BHs work because they do aio_notify().  Idle BHs can be skipped for
> an indeterminate time, but that's fine because idle BHs are a hack that
> we should not need at all.

OK, so a bit more code reading later, I think I now understand.

1. non-idle bh's call aio_notify at schedule time, which will cause
  any poll to exit immediately because at least one FD will be ready.

2. idle bh's do not do aio_notify() because we don't care whether
  they get stuck in aio_poll and they are a hack [actually I think
  we could do better here]

3. aio_poll calls aio_bh_poll. If this returns true, this indicates
  at least one non-idle bh exists, which causes aio_poll not to
  block.

  Question 1: aio_poll then calls aio_dispatch; if that itself
  generates non-idle bh's, the blocking flag would seem not to be
  cleared for them. Does this rely on aio_notify instead?

  Question 2: if we're already telling aio_poll not to block
  when non-idle bh's are detected by aio_bh_poll, why do we
  need aio_notify too? I.e. why do we have both the blocking=
  logic AND the aio_notify logic?

4. aio_poll then calls g_poll (POSIX) or WaitForMultipleObjects
  (Windows). However, the timeout is either 0 or infinite.
  Both functions take a milliseconds (yuck) timeout, but that
  is not used.

So, the first thing I don't understand is why aio_poll needs the
return value of aio_bh_poll at all. Firstly, after sampling it,
aio_poll calls aio_dispatch, and that can presumably schedule its
own bottom-half callbacks; if this happens, 'int blocking' won't
be cleared, and aio_poll will still enter g_poll with an infinite
timeout. Secondly, there seems to be an entirely separate mechanism
(aio_notify) in any case: if a non-idle bh has been scheduled,
g_poll will exit immediately because a read will be ready on the
notifier. I believe that notification is cleared when the bh runs.

The second thing I don't understand is why we aren't using
the timeout argument of g_poll / WaitForMultipleObjects. It would
seem reasonably easy to make aio_poll call aio_ctx_prepare, or
something that does the same calculation, to derive a finite
timeout. This would also make idle bh's more reliable (we know
it's safe to call them within aio_poll anyway; it's just a
question of whether we turn an infinite wait into a 10ms wait).

Perhaps these two are related.

I /think/ fixing the second (and removing the aio_notify
from qemu_bh_schedule_at) is sufficient, provided aio_poll checks
for scheduled bh's immediately prior to the poll. This assumes
other threads cannot schedule bh's. It would seem to be less
intrusive than a TimedEventNotifier approach, which (as far as I
can see) requires another thread.

--
Alex Bligh


