qemu-devel

Re: Question on memory commit during MR finalize()


From: Peter Xu
Subject: Re: Question on memory commit during MR finalize()
Date: Mon, 19 Jul 2021 11:56:04 -0400

Hi, Thanos,

On Mon, Jul 19, 2021 at 02:38:52PM +0000, Thanos Makatos wrote:
> I can trivially trigger an assertion with a build where I merged the recent
> vfio-user patches
> (https://patchew.org/QEMU/cover.1626675354.git.elena.ufimtseva@oracle.com/)
> into master and then merged the result into your xzpeter/memory-sanity
> branch. I've pushed the branch here:
> https://github.com/tmakatos/qemu/tree/memory-sanity. I explain the repro
> steps below in case you want to take a look:
> 
> Build as follows:
> 
> ./configure --prefix=/opt/qemu-xzpeter --target-list=x86_64-softmmu 
> --enable-kvm  --enable-debug --enable-multiprocess && make -j `nproc` && make 
> install
> 
> Then build and run the GPIO sample from libvfio-user 
> (https://github.com/nutanix/libvfio-user):
> 
> libvfio-user/build/dbg/samples/gpio-pci-idio-16 -v /var/run/vfio-user.sock
> 
> And then run QEMU as follows:
> 
> gdb --args /opt/qemu-xzpeter/bin/qemu-system-x86_64 -cpu host -enable-kvm 
> -smp 4 -m 2G -object 
> memory-backend-file,id=mem0,size=2G,mem-path=/dev/hugepages,share=on,prealloc=yes
>  -numa node,memdev=mem0 -kernel bionic-server-cloudimg-amd64-vmlinuz-generic 
> -initrd bionic-server-cloudimg-amd64-initrd-generic -append 'console=ttyS0 
> root=/dev/sda1 single' -hda bionic-server-cloudimg-amd64-0.raw -nic 
> user,model=virtio-net-pci -machine pc-q35-3.1 -device 
> vfio-user-pci,socket=/var/run/vfio-user.sock -nographic
> 
> I immediately get the following stack trace:
> 
> Thread 5 "qemu-system-x86" received signal SIGUSR1, User defined signal 1.

This is SIGUSR1.  QEMU uses it for general vCPU IPIs, so gdb stopping here is
expected; it's not the assertion failure itself.

> [Switching to Thread 0x7fffe6e82700 (LWP 151973)]
> __lll_lock_wait () at ../sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:103
> 103     ../sysdeps/unix/sysv/linux/x86_64/lowlevellock.S: No such file or 
> directory.
> (gdb) bt
> #0  0x00007ffff655d29c in __lll_lock_wait () at 
> ../sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:103
> #1  0x00007ffff6558642 in __pthread_mutex_cond_lock 
> (mutex=mutex@entry=0x5555568bb280 <qemu_global_mutex>) at 
> ../nptl/pthread_mutex_lock.c:80
> #2  0x00007ffff6559ef8 in __pthread_cond_wait_common (abstime=0x0, 
> mutex=0x5555568bb280 <qemu_global_mutex>, cond=0x555556cecc30) at 
> pthread_cond_wait.c:645
> #3  0x00007ffff6559ef8 in __pthread_cond_wait (cond=0x555556cecc30, 
> mutex=0x5555568bb280 <qemu_global_mutex>) at pthread_cond_wait.c:655
> #4  0x000055555604f717 in qemu_cond_wait_impl (cond=0x555556cecc30, 
> mutex=0x5555568bb280 <qemu_global_mutex>, file=0x5555561ca869 
> "../softmmu/cpus.c", line=514) at ../util/qemu-thread-posix.c:194
> #5  0x0000555555d28a4a in qemu_cond_wait_iothread (cond=0x555556cecc30) at 
> ../softmmu/cpus.c:514
> #6  0x0000555555d28781 in qemu_wait_io_event (cpu=0x555556ce02c0) at 
> ../softmmu/cpus.c:425
> #7  0x0000555555e5da75 in kvm_vcpu_thread_fn (arg=0x555556ce02c0) at 
> ../accel/kvm/kvm-accel-ops.c:54
> #8  0x000055555604feed in qemu_thread_start (args=0x555556cecc70) at 
> ../util/qemu-thread-posix.c:541
> #9  0x00007ffff6553fa3 in start_thread (arg=<optimized out>) at 
> pthread_create.c:486
> #10 0x00007ffff64824cf in clone () at 
> ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Would you please add the line below to your ~/.gdbinit script?

  handle SIGUSR1 nostop noprint

Or just run without gdb and wait for it to crash with SIGABRT.
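
(Equivalently, a minimal sketch of applying the same setting per-session
instead of via ~/.gdbinit, using gdb's -ex option; the binary path and
arguments are just the ones from your repro command:)

  gdb -ex 'handle SIGUSR1 nostop noprint' \
      --args /opt/qemu-xzpeter/bin/qemu-system-x86_64 \
      -cpu host -enable-kvm -smp 4 -m 2G ...

gdb still passes SIGUSR1 through to QEMU (the "pass" setting stays enabled by
default); it just no longer stops or prints each time a vCPU IPI is delivered.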

Thanks,

-- 
Peter Xu

