
Re: [Qemu-devel] [PATCH] block: Fix race in gluster_finish_aiocb


From: Bharata B Rao
Subject: Re: [Qemu-devel] [PATCH] block: Fix race in gluster_finish_aiocb
Date: Thu, 22 Aug 2013 15:25:42 +0530
User-agent: Mutt/1.5.21 (2010-09-15)

On Thu, Aug 22, 2013 at 11:06:47AM +0200, Paolo Bonzini wrote:
> Il 22/08/2013 09:48, Stefan Hajnoczi ha scritto:
> >> > gluster_finish_aiocb gets called from the gluster thread; is it safe
> >> > to create and schedule a BH from such a thread?
> >> >
> >> > In my first implementation
> >> > (http://lists.gnu.org/archive/html/qemu-devel/2012-06/msg01748.html),
> >> > I was using a BH from the qemu read-side thread (the thread that
> >> > would respond to the pipe write from the gluster callback thread).
> >> > That implementation was based on rbd, and I later dropped the BH part
> >> > since it looked like a roundabout way of completing the aio when we
> >> > were already using the pipe mechanism for aio completion.
> > Recent patches made creating and scheduling a BH thread-safe.
> 
> I thought scheduling BHs was always thread safe?
> 
> > I think Paolo's idea is better than mine.
> 
> Both are fine, they are just different.  Mine is simpler because it
> leaves list management to the BH code.
> 
> But since we are at it, we should simplify the code and uniformly use a
> bottom half for both successful and erroneous completions.  This applies
> to both ideas.
> 
> Maybe an even simpler patch would be to just abort if the
> GLUSTER_FD_WRITE write fails.

This was the first approach I took: abort when the write to the pipe
fails. But concerns were raised about handling that failure gracefully,
and hence we ended up doing all the error handling of completing the aio
with -EIO, closing the pipe and making the disk inaccessible.

> Under what circumstances could it happen?

I'm not really sure; I haven't seen it happen in practice. I had to
inject faults manually to exercise this error path and verify the
graceful recovery.

Regards,
Bharata.



