
Re: [PATCH 2/2] thread-pool: use ThreadPool from the running thread


From: Stefan Hajnoczi
Subject: Re: [PATCH 2/2] thread-pool: use ThreadPool from the running thread
Date: Mon, 24 Oct 2022 14:49:47 -0400

On Thu, Oct 20, 2022 at 05:22:17PM +0100, Dr. David Alan Gilbert wrote:
> * Stefan Hajnoczi (stefanha@redhat.com) wrote:
> > On Mon, Oct 03, 2022 at 10:52:33AM +0200, Emanuele Giuseppe Esposito wrote:
> > > 
> > > 
> > > Am 30/09/2022 um 17:45 schrieb Kevin Wolf:
> > > > Am 30.09.2022 um 14:17 hat Emanuele Giuseppe Esposito geschrieben:
> > > >> Am 29/09/2022 um 17:30 schrieb Kevin Wolf:
> > > >>> Am 09.06.2022 um 15:44 hat Emanuele Giuseppe Esposito geschrieben:
> > > >>>> Remove usage of aio_context_acquire by always submitting work items
> > > >>>> to the current thread's ThreadPool.
> > > >>>>
> > > >>>> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> > > >>>> Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
> > > >>>
> > > >>> The thread pool is used by things outside of the file-* block drivers,
> > > >>> too. Even outside the block layer. Not all of these seem to submit
> > > >>> work in the same thread.
> > > >>>
> > > >>>
> > > >>> For example:
> > > >>>
> > > >>> postcopy_ram_listen_thread() -> qemu_loadvm_state_main() ->
> > > >>> qemu_loadvm_section_start_full() -> vmstate_load() ->
> > > >>> vmstate_load_state() -> spapr_nvdimm_flush_post_load(), which has:
> > > >>>
> > > >>> ThreadPool *pool = aio_get_thread_pool(qemu_get_aio_context());
> >                          ^^^^^^^^^^^^^^^^^^^
> > 
> > aio_get_thread_pool() isn't thread safe either:
> > 
> >   ThreadPool *aio_get_thread_pool(AioContext *ctx)
> >   {
> >       if (!ctx->thread_pool) {
> >           ctx->thread_pool = thread_pool_new(ctx);
> >       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> > 
> > Two threads could race in aio_get_thread_pool().
> > 
> > I think post-copy is broken here: it's calling code that was only
> > designed to be called from the main loop thread.
> > 
> > I have CCed Juan and David.
> 
> In theory the path that you describe there shouldn't happen - although
> there is perhaps not enough protection on the load side to stop it
> happening if presented with a bad stream.
> This is documented in docs/devel/migration.rst under 'Destination
> behaviour'; but to recap: during postcopy load we need to be able to
> load incoming iterative (i.e. RAM) pages while the normal devices are
> being loaded, because loading a device may access RAM that hasn't been
> transferred yet.
> 
> To do that, the device state of all the non-iterative devices (which I
> think includes your spapr_nvdimm) is serialised into a separate
> migration stream and sent as a 'package'.
> 
> We read the package off the stream on the main thread, but don't
> process it until we fire off the 'listen' thread - whose creation you
> spotted above; the listen thread then takes over reading the migration
> stream to process RAM pages, and since that stream is in the same
> format, it calls qemu_loadvm_state_main() - but it doesn't expect
> anything in there other than the RAM devices; it's just expecting RAM.
> 
> In parallel with that, the main thread carries on loading the contents
> of the 'package' - and that contains your spapr_nvdimm device (and any
> other 'normal' devices); but that's OK because that's the main thread.
> 
> Now if something was very broken and sent a header for the spapr-nvdimm
> down the main migration stream (which the listen thread reads) rather
> than inside the package then, yes, we'd trigger your case, but that
> shouldn't happen.

Thanks for explaining that. A way to restrict the listen thread to
processing only RAM pages would be good, both as documentation and to
prevent invalid migration streams from causing problems.
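
One way to do that would be a guard in the listen thread's section
loop. A minimal sketch, assuming a hypothetical is_ram_section() helper
and a hypothetical hook in qemu_loadvm_state_main() - none of this is
in the patch, the names are illustrative only:

  /* Hypothetical guard for sections processed by the listen thread. */
  static int loadvm_listen_check_section(SaveStateEntry *se)
  {
      /*
       * The listen thread should only ever see iterative (RAM)
       * sections; fail the migration instead of loading device state
       * from a non-main-loop thread.
       */
      if (!is_ram_section(se)) {          /* hypothetical helper */
          error_report("postcopy listen thread got non-RAM section '%s'",
                       se->idstr);
          return -EINVAL;
      }
      return 0;
  }

That would turn a malformed stream into a clean migration failure
instead of a thread-safety bug.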

As for Emanuele and Kevin's original question about this code, it seems
the thread pool won't be called from the listen thread.
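
For reference, my reading of what this series moves callers to (a
sketch against the thread-pool API as of this series; worker_fn, arg,
done_cb and opaque are placeholder names, not from the patch):

  /*
   * Submit to the pool of the thread we are running in instead of
   * acquiring another context's pool. If each thread only ever touches
   * its own context's pool, the lazy creation in aio_get_thread_pool()
   * quoted above no longer races.
   */
  ThreadPool *pool = aio_get_thread_pool(qemu_get_current_aio_context());
  thread_pool_submit_aio(pool, worker_fn, arg, done_cb, opaque);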

Stefan
