From: Stefan Hajnoczi
Subject: Re: [Qemu-devel] [PATCH RFC 0/3] iothread: release iothread around aio_poll
Date: Tue, 21 Apr 2015 16:40:34 +0100

On Tue, Mar 31, 2015 at 11:35 AM, Paolo Bonzini <address@hidden> wrote:
> On 20/02/2015 17:26, Paolo Bonzini wrote:
>> Right now, iothreads are relying on a "contention callback" to release
>> the AioContext (e.g. for block device operations or to do bdrv_drain_all).
>> This is necessary because aio_poll needs to be called within an
>> aio_context_acquire.
>>
>> This series drops this requirement for aio_poll, with two effects:
>>
>> 1) it makes it possible to remove the "contention callback" in RFifoLock
>> (and possibly to convert it to a normal GRecMutex, which is why I am not
>> including a patch to remove callbacks from RFifoLock).
>>
>> 2) it makes it possible to start work around making critical sections
>> for the block layer fine-grained.
>>
>> In order to do this, some data is moved from AioContext to local storage.
>> Stack allocation has size limitations, so thread-local storage is used
>> instead.  There are no reentrancy problems because the data is only live
>> throughout a small part of aio_poll, and in particular not during the
>> invocation of callbacks.
>>
>> Comments?
>
> Stefan, can you put this on track for 2.4 or do you need a repost?
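
(For context, the thread-local approach described above amounts to roughly the
following pattern; this is a sketch reconstructed from the description and the
commit message quoted below, not the exact patch.)

    /* Sketch: poll arrays moved from AioContext to thread-local storage.
     * Each thread running aio_poll() fills its own arrays, so no
     * AioContext lock is needed around them; they are only live inside
     * a single aio_poll() call, never across callback invocations.
     */
    static __thread GPollFD *pollfds;
    static __thread AioHandler **nodes;
    static __thread unsigned npfd, nalloc;

    static void add_pollfd(AioHandler *node)
    {
        if (npfd == nalloc) {
            /* grow both arrays in step */
            nalloc = nalloc ? nalloc * 2 : 8;
            pollfds = g_renew(GPollFD, pollfds, nalloc);
            nodes = g_renew(AioHandler *, nodes, nalloc);
        }
        nodes[npfd] = node;
        pollfds[npfd] = (GPollFD) {
            .fd = node->pfd.fd,
            .events = node->pfd.events,
        };
        npfd++;
    }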

This series causes qemu-iotests -qcow2 091 to fail:

9f83aea22314d928bb272153ff37d2d7f5adbf06 is the first bad commit
commit 9f83aea22314d928bb272153ff37d2d7f5adbf06
Author: Paolo Bonzini <address@hidden>
Date:   Fri Feb 20 17:26:50 2015 +0100

    aio-posix: move pollfds to thread-local storage

I think the following assertion failure is hit in pollfds_cleanup():
g_assert(npfd == 0);
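
If so, the failing path would be the per-thread exit hook that frees those
arrays, something like the sketch below (assuming the series registers the
notifier with qemu_thread_atexit_add(), which matches qemu_thread_atexit_run
in the stack trace further down):

    static __thread Notifier pollfds_cleanup_notifier;

    static void pollfds_cleanup(Notifier *n, void *unused)
    {
        /* npfd drops back to 0 once aio_poll() is done with the arrays,
         * so a non-zero count at thread exit means the thread is dying
         * while its arrays are still in use.
         */
        g_assert(npfd == 0);
        g_free(pollfds);
        g_free(nodes);
        nalloc = 0;
    }

    /* registered once per thread, e.g. on first allocation */
    static void pollfds_cleanup_notify(void)
    {
        pollfds_cleanup_notifier.notify = pollfds_cleanup;
        qemu_thread_atexit_add(&pollfds_cleanup_notifier);
    }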

$ (make -j4 && cd tests/qemu-iotests && ./check -qcow2 091)
QEMU          -- ./qemu
QEMU_IMG      -- ./qemu-img
QEMU_IO       -- ./qemu-io
QEMU_NBD      -- ./qemu-nbd
IMGFMT        -- qcow2 (compat=1.1)
IMGPROTO      -- file
PLATFORM      -- Linux/x86_64 stefanha-thinkpad 4.0.0-rc5.bz1006536+
TEST_DIR      -- /home/stefanha/qemu/tests/qemu-iotests/scratch
SOCKET_SCM_HELPER -- /home/stefanha/qemu/tests/qemu-iotests/socket_scm_helper

091 1s ... [failed, exit status 141] - output mismatch (see 091.out.bad)
--- /home/stefanha/qemu/tests/qemu-iotests/091.out    2015-03-05 20:42:23.227070978 +0000
+++ 091.out.bad    2015-04-21 16:33:02.769945594 +0100
@@ -11,18 +11,4 @@

 vm1: qemu-io disk write complete
 vm1: live migration started
-vm1: live migration completed
-
-=== VM 2: Post-migration, write to disk, verify running ===
-
-vm2: qemu-io disk write complete
-vm2: qemu process running successfully
-vm2: flush io, and quit
-Check image pattern
-read 4194304/4194304 bytes at offset 0
-4 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
-Running 'qemu-img check -r all $TEST_IMG'
-No errors were found on the image.
-80/16384 = 0.49% allocated, 0.00% fragmented, 0.00% compressed clusters
-Image end offset: 5570560
-*** done
+./common.qemu: line 1: 26535 Aborted                 (core dumped)

More info:
Command Line: ./qemu -nographic -serial none -monitor stdio -machine accel=qtest -drive file=/home/stefanha/qemu/tests/qemu-iotests/scratch/t.qcow2,cache=none,id=disk

                Stack trace of thread 26556:
                #0  0x00007f1ed8dd98d7 __GI_raise (libc.so.6)
                #1  0x00007f1ed8ddb53a __GI_abort (libc.so.6)
                #2  0x00007f1ee13075d5 g_assertion_message (libglib-2.0.so.0)
                #3  0x00007f1ee130766a g_assertion_message_expr (libglib-2.0.so.0)
                #4  0x00007f1ee3100001 pollfds_cleanup (qemu-system-x86_64)
                #5  0x00007f1ee3177374 notifier_list_notify (qemu-system-x86_64)
                #6  0x00007f1ee316c812 qemu_thread_atexit_run (qemu-system-x86_64)
                #7  0x00007f1ee19dd1d9 __nptl_deallocate_tsd (libpthread.so.0)
                #8  0x00007f1ee19de5e5 __nptl_deallocate_tsd (libpthread.so.0)
                #9  0x00007f1ed8ea522d __clone (libc.so.6)

                Stack trace of thread 26535:
                #0  0x00007f1ee19df5e5 pthread_join (libpthread.so.0)
                #1  0x00007f1ee316ce9f qemu_thread_join (qemu-system-x86_64)
                #2  0x00007f1ee30adbfa migrate_fd_cleanup (qemu-system-x86_64)
                #3  0x00007f1ee30f1614 aio_bh_poll (qemu-system-x86_64)
                #4  0x00007f1ee3100260 aio_dispatch (qemu-system-x86_64)
                #5  0x00007f1ee30f149e aio_ctx_dispatch (qemu-system-x86_64)
                #6  0x00007f1ee12e17fb g_main_dispatch (libglib-2.0.so.0)
                #7  0x00007f1ee30fee38 glib_pollfds_poll (qemu-system-x86_64)
                #8  0x00007f1ee2ec024e main_loop (qemu-system-x86_64)
                #9  0x00007f1ed8dc4fe0 __libc_start_main (libc.so.6)
                #10 0x00007f1ee2ec566c _start (qemu-system-x86_64)

                Stack trace of thread 26540:
                #0  0x00007f1ed8e9f939 syscall (libc.so.6)
                #1  0x00007f1ee316cc71 futex_wait (qemu-system-x86_64)
                #2  0x00007f1ee317af96 call_rcu_thread (qemu-system-x86_64)
                #3  0x00007f1ee19de52a start_thread (libpthread.so.0)
                #4  0x00007f1ed8ea522d __clone (libc.so.6)

                Stack trace of thread 26541:
                #0  0x00007f1ee19e57f0 sem_timedwait (libpthread.so.0)
                #1  0x00007f1ee316cac7 qemu_sem_timedwait (qemu-system-x86_64)
                #2  0x00007f1ee30f1b1c worker_thread (qemu-system-x86_64)
                #3  0x00007f1ee19de52a start_thread (libpthread.so.0)
                #4  0x00007f1ed8ea522d __clone (libc.so.6)

                Stack trace of thread 26544:
                #0  0x00007f1ee19e6e50 do_sigwait (libpthread.so.0)
                #1  0x00007f1ee2eec543 qemu_dummy_cpu_thread_fn (qemu-system-x86_64)
                #2  0x00007f1ee19de52a start_thread (libpthread.so.0)
                #3  0x00007f1ed8ea522d __clone (libc.so.6)

Stefan


