From: John Eckersberg
Subject: [Qemu-devel] [Bug 1308542] Re: hang in qemu_gluster_init
Date: Mon, 21 Apr 2014 13:03:18 -0000

Also, I should mention a related bug I filed against gluster:
https://bugzilla.redhat.com/show_bug.cgi?id=1088589

That gluster bug is what causes the glfs_set_logging call to fail and
fall into the hang case.

** Bug watch added: Red Hat Bugzilla #1088589
   https://bugzilla.redhat.com/show_bug.cgi?id=1088589

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1308542

Title:
  hang in qemu_gluster_init

Status in QEMU:
  New

Bug description:
  In qemu_gluster_init, if the call to either glfs_set_volfile_server or
  glfs_set_logging fails into the "out" case, glfs_fini is called
  without glfs_init ever having been called first.  Because fs->init is
  only set once glfs_init completes, glfs_lock blocks forever on this
  wait:

        while (!fs->init)
                pthread_cond_wait (&fs->cond, &fs->mutex);

  And here's the bottom part of the backtrace when hung:

  #0  pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:183
  #1  0x00007feceebf58c3 in glfs_lock (fs=0x7fecf15660b0) at glfs-internal.h:156
  #2  glfs_active_subvol (fs=0x7fecf15660b0) at glfs-resolve.c:799
  #3  0x00007feceebeb5b4 in glfs_fini (fs=0x7fecf15660b0) at glfs.c:652
  #4  0x00007fecf0043c73 in qemu_gluster_init (gconf=<value optimized out>, filename=<value optimized out>) at /usr/src/debug/qemu-kvm-0.12.1.2/block/gluster.c:229
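
  For context, the error path behind frame #4 has roughly the following
  shape.  This is a sketch pieced together from this report and the
  upstream block/gluster.c naming, not the verbatim qemu-kvm 0.12
  source; the GlusterConf fields and the log level are assumptions:

  /* Sketch only: error handling condensed, names assumed. */
  static struct glfs *qemu_gluster_init(GlusterConf *gconf,
                                        const char *filename)
  {
      struct glfs *glfs = NULL;
      int ret;

      /* ... parse filename into gconf ... */

      glfs = glfs_new(gconf->volname);
      if (!glfs) {
          goto out;
      }

      ret = glfs_set_volfile_server(glfs, gconf->transport,
                                    gconf->server, gconf->port);
      if (ret < 0) {
          goto out;            /* reaches out: without glfs_init ever run */
      }

      ret = glfs_set_logging(glfs, "-", 4);
      if (ret < 0) {
          goto out;            /* the failure caused by RH bug 1088589 */
      }

      ret = glfs_init(glfs);   /* fs->init only becomes true in here */
      if (ret) {
          goto out;
      }
      return glfs;

  out:
      if (glfs) {
          glfs_fini(glfs);     /* hangs when glfs_init was never called */
      }
      return NULL;
  }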

  I believe this can be fixed by simply moving the call to glfs_init
  after the call to glfs_new but before the calls to
  glfs_set_volfile_server or glfs_set_logging.
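
  Sketched as a fragment of the same skeleton as above, that reordering
  would look like this.  Again, this is not verbatim source, and
  whether glfs_init can succeed before a volfile server is configured
  is a gluster-side question; the fragment only illustrates the
  intended call order:

      glfs = glfs_new(gconf->volname);
      if (!glfs) {
          goto out;
      }

      ret = glfs_init(glfs);   /* moved up, per the suggestion above */
      if (ret) {
          goto out;
      }

      ret = glfs_set_volfile_server(glfs, gconf->transport,
                                    gconf->server, gconf->port);
      if (ret < 0) {
          goto out;            /* out: now sees an initialized object */
      }

      ret = glfs_set_logging(glfs, "-", 4);
      if (ret < 0) {
          goto out;
      }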

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1308542/+subscriptions


