|Subject:||[Gluster-devel] glusterfsd core dumps|
|Date:||Tue, 6 May 2008 17:39:44 +0100 (BST)|
|User-agent:||Alpine 1.10 (LRH 962 2008-03-14)|
Hi,

I've just observed what seems like a problem related to remotely starting glusterfsd over ssh. If I ssh into one of my glusterfs servers, su to root and start glusterfsd, it starts fine and everything works. However, as soon as I log out, glusterfsd seems to die and core dump.
If I start it with nohup it doesn't seem to happen, so I suspect it's the session teardown at logout that causes the problem. I'm guessing this isn't the expected behaviour. An uncaught signal (SIGHUP?) somewhere, perhaps?
This only seems to have started happening since I added the posix locks brick and re-ordered the storage volume bricks so they are listed in the same order on all servers.