qemu-devel

Re: [PATCH v6 4/5] 9pfs: T_readdir latency optimization


From: Christian Schoenebeck
Subject: Re: [PATCH v6 4/5] 9pfs: T_readdir latency optimization
Date: Tue, 30 Jun 2020 17:16:40 +0200

On Monday, 29 June 2020 18:39:02 CEST Greg Kurz wrote:
> On Wed, 03 Jun 2020 19:16:08 +0200, Christian Schoenebeck
> <qemu_oss@crudebyte.com> wrote:
> > On Sunday, 19 April 2020 17:06:17 CEST Christian Schoenebeck wrote:
> > > Make top half really top half and bottom half really bottom half:
> > > 
> > > Handling of each T_readdir request currently hops between threads
> > > (main I/O thread and background I/O driver threads) several times
> > > for every individual directory entry, which adds up to huge
> > > latencies for handling just a single T_readdir request.
> > > 
> > > Instead of doing that, now collect all required directory entries
> > > (including all potentially required stat buffers for each entry) in
> > > one rush on a background I/O thread from the fs driver by calling
> > > the previously added function v9fs_co_readdir_many() instead of
> > > v9fs_co_readdir(), then assemble the entire resulting network
> > > response message for the readdir request on the main I/O thread.
> > > The fs driver still aborts the directory entry retrieval loop
> > > (on the background I/O thread inside v9fs_co_readdir_many())
> > > as soon as it would exceed the client's requested maximum R_readdir
> > > response size, so this does not introduce a performance penalty on
> > > the other end.
> > > 
> > > Signed-off-by: Christian Schoenebeck <qemu_oss@crudebyte.com>
> > > ---
> > > 
> > >  hw/9pfs/9p.c | 122 +++++++++++++++++++++++----------------------------
> > >  1 file changed, 55 insertions(+), 67 deletions(-)
> > 
> > Ping. Anybody?
> > 
> > I would like to roll out this patch set soon, and this is the only patch
> > in the series still missing a review.
> 
> Hi Christian,

Hi Greg,

> Sorry for getting back to this only now :-\
> 
> So I still have some concerns about the locking of the directory stream
> pointer of a fid. It was initially introduced to avoid concurrent accesses
> by multiple threads to the corresponding internal glibc object, as
> recommended in the readdir(3) manual page. Now, this patch considerably
> extends the critical section to also contain calls to telldir() and all
> the _many_ readdir()... so I'm not sure exactly what the purpose of
> that mutex is right now. Please provide more details.

The intention of this lock is to prevent concurrent read/write (i.e.
telldir()/readdir()) of the dir stream position by two or more active
readdir requests handled in parallel, provided that they use the same FID.
Because of that latter requirement, we already agreed that this is not
something that would usually happen with a Linux 9p client, but nothing in
the protocol prohibits a client from doing it, so the resulting undefined
behaviour should be prevented, which is what this lock does.
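
For illustration, here is a minimal C sketch of what the lock is meant to
protect. This is not the actual QEMU code: the struct and function names
below (SketchFid, SketchDirEnt, readdir_many, handle_t_readdir) are made-up
stand-ins, and a plain pthread mutex stands in for the real per-fid lock.
The point is only that both readdir() and telldir() on the fid's shared DIR
stream run under one per-fid mutex, taken once per T_readdir request around
the whole collection loop:

/* Illustrative sketch only: simplified stand-ins for the QEMU 9pfs types,
 * not the actual implementation. */
#include <dirent.h>
#include <pthread.h>
#include <stdint.h>
#include <stdlib.h>
#include <sys/types.h>

struct SketchFid {                   /* hypothetical, reduced "fid" state */
    DIR *dir;                        /* directory stream shared by all
                                        requests that use this fid */
    pthread_mutex_t readdir_mutex;   /* serializes readdir()/telldir() */
};

struct SketchDirEnt {                /* one collected directory entry */
    struct dirent dent;
    off_t off;                       /* telldir() position after the entry */
    struct SketchDirEnt *next;
};

/* "Bottom half": runs on a background I/O thread and collects as many
 * entries as fit into the client's maximum R_readdir response size in
 * one rush (the role of v9fs_co_readdir_many() in the patch). */
static int readdir_many(struct SketchFid *fid, struct SketchDirEnt **list,
                        int32_t max_size)
{
    struct SketchDirEnt **tail = list;
    struct dirent *dent;
    int32_t size = 0;
    int count = 0;

    *list = NULL;
    while (size < max_size && (dent = readdir(fid->dir)) != NULL) {
        struct SketchDirEnt *e = calloc(1, sizeof(*e));
        if (!e) {
            return -1;
        }
        e->dent = *dent;
        e->off = telldir(fid->dir);  /* needs the same lock as readdir() */
        *tail = e;
        tail = &e->next;
        size += sizeof(*dent);       /* crude per-entry size estimate */
        count++;
    }
    return count;
}

/* "Top half": the request handler takes the per-fid lock exactly once,
 * so two T_readdir requests on the same fid cannot interleave their
 * readdir()/telldir() calls on the shared DIR stream. */
static int handle_t_readdir(struct SketchFid *fid, int32_t max_size)
{
    struct SketchDirEnt *entries = NULL;
    int count;

    pthread_mutex_lock(&fid->readdir_mutex);
    count = readdir_many(fid, &entries, max_size);
    pthread_mutex_unlock(&fid->readdir_mutex);

    /* ... assemble the R_readdir response from 'entries' on the main
     * I/O thread and free the list afterwards ... */
    return count;
}

Without that single lock spanning the whole loop, a second T_readdir on the
same FID could seek or advance the stream between two iterations, which is
exactly the undefined behaviour described above.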

What's your precise concern?

Best regards,
Christian Schoenebeck




