
[Gluster-devel] Behaviour of glfs_fini() affecting QEMU


From: Bharata B Rao
Subject: [Gluster-devel] Behaviour of glfs_fini() affecting QEMU
Date: Thu, 17 Apr 2014 18:58:44 +0530

Hi,

In QEMU, we initialize gfapi in the following manner:

********************
glfs = glfs_new();
if (!glfs)
   goto out;
if (glfs_set_volfile_server() < 0)
   goto out;
if (glfs_set_logging() < 0)
   goto out;
if (glfs_init(glfs))
   goto out;

...

out:
if (glfs)
   glfs_fini(glfs);
*********************

Now if either glfs_set_volfile_server() or glfs_set_logging() fails, we end up calling glfs_fini(), which eventually hangs in glfs_lock():

#0  0x00007ffff554a595 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007ffff79d312e in glfs_lock (fs=0x555556331310) at glfs-internal.h:176
#2  0x00007ffff79d5291 in glfs_active_subvol (fs=0x555556331310) at glfs-resolve.c:811
#3  0x00007ffff79c9f23 in glfs_fini (fs=0x555556331310) at glfs.c:753

Note that we haven't called glfs_init() in this failure case, so presumably glfs_lock() ends up waiting for an initialization that never happened.
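
One workaround would be to track whether glfs_init() succeeded and call glfs_fini() only in that case. A rough sketch (the volume name, server, port and log level below are just placeholders, not our real configuration):

********************
glfs_t *glfs = NULL;
bool initialized = false;

glfs = glfs_new("testvol");
if (!glfs)
   goto out;
if (glfs_set_volfile_server(glfs, "tcp", "server.example.com", 24007) < 0)
   goto out;
if (glfs_set_logging(glfs, "-", 4) < 0)
   goto out;
if (glfs_init(glfs))
   goto out;
initialized = true;

...

out:
/* Tear down via glfs_fini() only once glfs_init() has succeeded; a glfs
 * object that failed before glfs_init() is simply leaked here, which is
 * the part we are unsure about. */
if (glfs && initialized)
   glfs_fini(glfs);
*********************

This avoids the hang, but it leaks the glfs_new() allocation on the early failure paths, hence the questions below.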

- Is this failure expected? If so, what is the recommended way of releasing the glfs object?
- Does glfs_fini() depend on glfs_init() having completed successfully?
- The QEMU-GlusterFS driver was developed when libgfapi was very new; given how much libgfapi has evolved since, could the Gluster developers take a look at the order of the glfs_* calls we make in QEMU and suggest any changes, improvements or additions?

Regards,
Bharata.
