Re: [PATCH 1/2] hw/9pfs: avoid 'path' copy in v9fs_walk()


From: Christian Schoenebeck
Subject: Re: [PATCH 1/2] hw/9pfs: avoid 'path' copy in v9fs_walk()
Date: Fri, 20 Aug 2021 14:19:21 +0200

On Friday, 20 August 2021 12:35:49 CEST Greg Kurz wrote:
> On Tue, 17 Aug 2021 14:38:24 +0200
> Christian Schoenebeck <qemu_oss@crudebyte.com> wrote:
> > The v9fs_walk() function resolves all client-submitted path nodes into
> > the local 'pathes' array. Using a separate string scalar variable
> > 'path' inside the background worker thread loop, and then copying that
> > local 'path' variable into the 'pathes' array at the end of each loop
> > iteration, is not necessary.
> > 
> > Instead, simply resolve each path directly into the 'pathes' array and
> > don't use the string scalar variable 'path' inside the fs worker thread
> > loop at all.
> > 
> > The only advantage of the 'path' scalar was that in case of an error
> > the respective 'pathes' element would not be filled. Right now this is
> > not an issue as the v9fs_walk() function returns as soon as any error
> > occurs.
> > 
> > Suggested-by: Greg Kurz <groug@kaod.org>
> > Signed-off-by: Christian Schoenebeck <qemu_oss@crudebyte.com>
> > ---
> 
> Reviewed-by: Greg Kurz <groug@kaod.org>
> 
> With this change, the path variable is no longer used at all in the
> first loop. 

Correct.
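
Just to illustrate the pattern, here is a simplified, standalone sketch of
what the loop ends up doing — this is deliberately not the actual
v9fs_walk() code; resolve_one() is a made-up stand-in for
s->ops->name_to_path() and plain char buffers stand in for V9fsPath. The
point is only that each name is resolved straight into pathes[i] and the
walk's current directory is advanced from the element just filled, so no
intermediate 'path' scratch variable is needed:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define SKETCH_PATH_MAX 256

/* Made-up stand-in for s->ops->name_to_path(): resolve one more name
 * relative to the current directory 'dpath', writing the result directly
 * into 'out' (one element of the pathes array). */
static int resolve_one(const char *dpath, const char *name, char *out)
{
    if (snprintf(out, SKETCH_PATH_MAX, "%s/%s", dpath, name)
        >= SKETCH_PATH_MAX) {
        return -1;
    }
    return 0;
}

int main(void)
{
    const char *wnames[] = { "usr", "share", "doc" };
    const int nwnames = 3;
    char (*pathes)[SKETCH_PATH_MAX] = calloc(nwnames, SKETCH_PATH_MAX);
    char dpath[SKETCH_PATH_MAX] = "";
    int i;

    if (!pathes) {
        return EXIT_FAILURE;
    }
    for (i = 0; i < nwnames; i++) {
        /* Resolve straight into pathes[i]; no scratch 'path' variable and
         * no copy at the end of the loop body. */
        if (resolve_one(dpath, wnames[i], pathes[i]) < 0) {
            free(pathes);
            return EXIT_FAILURE; /* bail out on the first error, like v9fs_walk() */
        }
        /* The next step walks relative to what was just resolved. */
        memcpy(dpath, pathes[i], SKETCH_PATH_MAX);
    }

    for (i = 0; i < nwnames; i++) {
        printf("pathes[%d] = %s\n", i, pathes[i]);
    }
    free(pathes);
    return EXIT_SUCCESS;
}

Running it prints the three resolved pathes; nothing is ever written to a
scratch variable first.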

> I see at least one more possible cleanup: don't set path before the
> first loop, since it gets reset before the second one.

Also correct.

> Maybe we can even get rid of path all the way? I'll have
> a look.

Yes, that's the plan.

There is still quite a bit that can be cleaned up in that function. I just 
didn't want to start with a two-digit patch set right after the long summer 
break. ;-)

If you want to send some cleanup patches, they are always appreciated.

Best regards,
Christian Schoenebeck
