

From: Farhan Ali
Subject: Re: [Qemu-devel] [PATCH] block/file-posix: add bdrv_attach_aio_context callback for host dev and cdrom
Date: Mon, 23 Jul 2018 09:34:06 -0400
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Thunderbird/52.4.0



On 07/20/2018 03:11 PM, Farhan Ali wrote:
I am seeing another issue pop up in a different test. Even though it's a different assertion, it might be related based on the call trace.

Stack trace of thread 276199:
#0  0x000003ff8473e274 raise (libc.so.6)
#1  0x000003ff847239a8 abort (libc.so.6)
#2  0x000003ff847362ce __assert_fail_base (libc.so.6)
#3  0x000003ff8473634c __assert_fail (libc.so.6)
#4  0x000002aa30aba0c4 iov_memset (qemu-system-s390x)
#5  0x000002aa30aba9a6 qemu_iovec_memset (qemu-system-s390x)
#6  0x000002aa30a23e88 qemu_laio_process_completion (qemu-system-s390x)
#7  0x000002aa30a23f68 qemu_laio_process_completions (qemu-system-s390x)
#8  0x000002aa30a2418e qemu_laio_process_completions_and_submit (qemu-system-s390x)
#9  0x000002aa30a24220 qemu_laio_poll_cb (qemu-system-s390x)
#10 0x000002aa30ab22c4 run_poll_handlers_once (qemu-system-s390x)
#11 0x000002aa30ab2e78 aio_poll (qemu-system-s390x)
#12 0x000002aa30a29f4e bdrv_do_drained_begin (qemu-system-s390x)
#13 0x000002aa30a2a276 bdrv_drain (qemu-system-s390x)
#14 0x000002aa309d45aa bdrv_set_aio_context (qemu-system-s390x)
#15 0x000002aa3085acfe virtio_blk_data_plane_stop (qemu-system-s390x)
#16 0x000002aa3096994c virtio_bus_stop_ioeventfd.part.1 (qemu-system-s390x)
#17 0x000002aa3087d1d6 virtio_vmstate_change (qemu-system-s390x)
#18 0x000002aa308e8a12 vm_state_notify (qemu-system-s390x)
#19 0x000002aa3080ed54 do_vm_stop (qemu-system-s390x)
#20 0x000002aa307bea04 main (qemu-system-s390x)
#21 0x000003ff84723dd2 __libc_start_main (libc.so.6)
#22 0x000002aa307c0414 _start (qemu-system-s390x)


The failing assertion is:

qemu-kvm: util/iov.c:78: iov_memset: Assertion `offset == 0' failed.
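
For reference, the assertion is at the end of iov_memset() in util/iov.c; the code is roughly the following (paraphrased from the same source tree, so treat it as a sketch rather than an exact quote):

size_t iov_memset(const struct iovec *iov, const unsigned int iov_cnt,
                  size_t offset, int fillc, size_t bytes)
{
    size_t done;
    unsigned int i;
    for (i = 0, done = 0; (offset || done < bytes) && i < iov_cnt; i++) {
        if (offset < iov[i].iov_len) {
            /* fill up to 'bytes' from this element, starting at 'offset' */
            size_t len = MIN(iov[i].iov_len - offset, bytes - done);
            memset(iov[i].iov_base + offset, fillc, len);
            done += len;
            offset = 0;
        } else {
            /* 'offset' starts past this element, skip it entirely */
            offset -= iov[i].iov_len;
        }
    }
    /* line 78: only fails if 'offset' was beyond the end of the whole iovec */
    assert(offset == 0);
    return done;
}

So the assertion can only fire when the requested offset lies past the end of the iovec. Note also that bytes=18446744073709547520 in the trace is (size_t)-4096, which suggests the qiov->size - ret length computation in qemu_laio_process_completion's short-read padding underflowed, i.e. the completion's ret was 4096 bytes larger than the request it was matched against.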


Just to give some context, this is a guest with 2 disks, each assigned an iothread. The guest was running a memory-intensive workload.
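
For illustration, the iothread part of such a configuration looks roughly like the following (the ids and disk paths here are made up; aio=native matches the linux-aio frames in the traces below):

-object iothread,id=iothread1 \
-drive if=none,id=drive1,file=/dev/disk1,format=raw,cache=none,aio=native \
-device virtio-blk-ccw,drive=drive1,iothread=iothread1 \
-object iothread,id=iothread2 \
-drive if=none,id=drive2,file=/dev/disk2,format=raw,cache=none,aio=native \
-device virtio-blk-ccw,drive=drive2,iothread=iothread2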

From the coredump of the QEMU process, I see there were 2 threads trying to call aio_poll with the same AioContext on the same BlockDriverState.

Thread 1:

#0  0x000003ff8473e274 in raise () from /lib64/libc.so.6
#1  0x000003ff847239a8 in abort () from /lib64/libc.so.6
#2  0x000003ff847362ce in __assert_fail_base () from /lib64/libc.so.6
#3  0x000003ff8473634c in __assert_fail () from /lib64/libc.so.6
#4  0x000002aa30aba0c4 in iov_memset (iov=<optimized out>, iov_cnt=<optimized out>, offset=<optimized out>, fillc=<optimized out>, bytes=18446744073709547520) at /usr/src/debug/qemu-2.12.91-20180720.0.677af45304.fc28.s390x/util/iov.c:78
#5  0x000002aa30aba9a6 in qemu_iovec_memset (qiov=<optimized out>, address@hidden, address@hidden, bytes=18446744073709547520) at /usr/src/debug/qemu-2.12.91-20180720.0.677af45304.fc28.s390x/util/iov.c:410
#6  0x000002aa30a23e88 in qemu_laio_process_completion (laiocb=0x3fe36a6a3f0) at /usr/src/debug/qemu-2.12.91-20180720.0.677af45304.fc28.s390x/block/linux-aio.c:88
#7  0x000002aa30a23f68 in qemu_laio_process_completions (address@hidden) at /usr/src/debug/qemu-2.12.91-20180720.0.677af45304.fc28.s390x/block/linux-aio.c:222
#8  0x000002aa30a2418e in qemu_laio_process_completions_and_submit (s=0x3fe60001910) at /usr/src/debug/qemu-2.12.91-20180720.0.677af45304.fc28.s390x/block/linux-aio.c:237
#9  0x000002aa30a24220 in qemu_laio_poll_cb (opaque=<optimized out>) at /usr/src/debug/qemu-2.12.91-20180720.0.677af45304.fc28.s390x/block/linux-aio.c:272
#10 0x000002aa30ab22c4 in run_poll_handlers_once (address@hidden) at /usr/src/debug/qemu-2.12.91-20180720.0.677af45304.fc28.s390x/util/aio-posix.c:494
#11 0x000002aa30ab2e78 in try_poll_mode (blocking=<optimized out>, ctx=<optimized out>) at /usr/src/debug/qemu-2.12.91-20180720.0.677af45304.fc28.s390x/util/aio-posix.c:573
====> #12 aio_poll (ctx=0x2aa4f35df50, address@hidden) at /usr/src/debug/qemu-2.12.91-20180720.0.677af45304.fc28.s390x/util/aio-posix.c:602
#13 0x000002aa30a29f4e in bdrv_drain_poll_top_level (ignore_parent=<optimized out>, recursive=<optimized out>, bs=<optimized out>) at /usr/src/debug/qemu-2.12.91-20180720.0.677af45304.fc28.s390x/block/io.c:390
#14 bdrv_do_drained_begin (bs=0x2aa4f392510, recursive=<optimized out>, parent=0x0, ignore_bds_parents=<optimized out>, poll=<optimized out>) at /usr/src/debug/qemu-2.12.91-20180720.0.677af45304.fc28.s390x/block/io.c:390
#15 0x000002aa30a2a276 in bdrv_drained_begin (bs=0x2aa4f392510) at /usr/src/debug/qemu-2.12.91-20180720.0.677af45304.fc28.s390x/block/io.c:396
#16 bdrv_drain (address@hidden) at /usr/src/debug/qemu-2.12.91-20180720.0.677af45304.fc28.s390x/block/io.c:478
#17 0x000002aa309d45aa in bdrv_set_aio_context (bs=0x2aa4f392510, new_context=0x2aa4f3594f0) at /usr/src/debug/qemu-2.12.91-20180720.0.677af45304.fc28.s390x/block.c:4954
#18 0x000002aa30a1c228 in blk_set_aio_context (address@hidden, new_context=<optimized out>) at /usr/src/debug/qemu-2.12.91-20180720.0.677af45304.fc28.s390x/block/block-backend.c:1894
#19 0x000002aa3085acfe in virtio_blk_data_plane_stop (vdev=<optimized out>) at /usr/src/debug/qemu-2.12.91-20180720.0.677af45304.fc28.s390x/hw/block/dataplane/virtio-blk.c:285
#20 0x000002aa3096994c in virtio_bus_stop_ioeventfd (bus=0x2aa4f4f61f0) at /usr/src/debug/qemu-2.12.91-20180720.0.677af45304.fc28.s390x/hw/virtio/virtio-bus.c:246
#21 0x000002aa3087d1d6 in virtio_vmstate_change (opaque=0x2aa4f4f72b8, running=<optimized out>, state=<optimized out>) at /usr/src/debug/qemu-2.12.91-20180720.0.677af45304.fc28.s390x/hw/virtio/virtio.c:2222
#22 0x000002aa308e8a12 in vm_state_notify (running=<optimized out>, state=<optimized out>) at /usr/src/debug/qemu-2.12.91-20180720.0.677af45304.fc28.s390x/vl.c:1532
#23 0x000002aa3080ed54 in do_vm_stop (state=<optimized out>, send_stop=<optimized out>) at /usr/src/debug/qemu-2.12.91-20180720.0.677af45304.fc28.s390x/cpus.c:1012
#24 0x000002aa307bea04 in main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at /usr/src/debug/qemu-2.12.91-20180720.0.677af45304.fc28.s390x/vl.c:4649


Thread 2 which is an IOThread:

#0  0x000003ff84910f9e in __lll_lock_wait () from /lib64/libpthread.so.0
#1  0x000003ff8490a1a2 in pthread_mutex_lock () from /lib64/libpthread.so.0
#2  0x000002aa30ab4cea in qemu_mutex_lock_impl (mutex=0x2aa4f35dfb0, address@hidden "/builddir/build/BUILD/qemu-2.12.91/util/async.c", address@hidden) at /usr/src/debug/qemu-2.12.91-20180720.0.677af45304.fc28.s390x/util/qemu-thread-posix.c:66
#3  0x000002aa30aafff4 in aio_context_acquire (ctx=<optimized out>) at /usr/src/debug/qemu-2.12.91-20180720.0.677af45304.fc28.s390x/util/async.c:511
#4  0x000002aa30a2419a in qemu_laio_process_completions_and_submit (s=0x3fe60001910) at /usr/src/debug/qemu-2.12.91-20180720.0.677af45304.fc28.s390x/block/linux-aio.c:239
#5  0x000002aa30ab23ee in aio_dispatch_handlers (address@hidden) at /usr/src/debug/qemu-2.12.91-20180720.0.677af45304.fc28.s390x/util/aio-posix.c:406
=====> #6  0x000002aa30ab30b4 in aio_poll (ctx=0x2aa4f35df50, address@hidden) at /usr/src/debug/qemu-2.12.91-20180720.0.677af45304.fc28.s390x/util/aio-posix.c:692
#7  0x000002aa308e2322 in iothread_run (opaque=0x2aa4f35d5c0) at /usr/src/debug/qemu-2.12.91-20180720.0.677af45304.fc28.s390x/iothread.c:63
#8  0x000003ff849079a8 in start_thread () from /lib64/libpthread.so.0
#9  0x000003ff847f97ee in thread_start () from /lib64/libc.so.6


This looked a little suspicious to me; I don't know if this is the expected behavior or if there is a race condition here. Any help debugging this would be greatly appreciated.
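
For reference, qemu_laio_process_completions_and_submit() in block/linux-aio.c looks roughly like this at the commit I am running (paraphrased, so treat it as a sketch):

static void qemu_laio_process_completions_and_submit(LinuxAioState *s)
{
    /* walks the io_getevents ring and completes requests;
     * note this runs *before* the AioContext lock is taken */
    qemu_laio_process_completions(s);

    aio_context_acquire(s->aio_context);   /* line 239, where thread 2 blocks */
    if (!s->io_q.plugged && !QSIMPLEQ_EMPTY(&s->io_q.pending)) {
        ioq_submit(s);
    }
    aio_context_release(s->aio_context);
}

If I am reading the line numbers right, thread 1 (via qemu_laio_poll_cb) is inside qemu_laio_process_completions() at linux-aio.c:237, while thread 2 has already finished its own pass over the completions and is blocking on aio_context_acquire() at linux-aio.c:239. In other words, the completion processing itself is not serialized by the AioContext lock, so both threads could be consuming events from the same LinuxAioState at the same time, which is where I would suspect the race.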

Thanks
Farhan



