From: Dr. David Alan Gilbert
Subject: Re: virtiofs vs 9p performance(Re: tools/virtiofs: Multi threading seems to hurt performance)
Date: Fri, 25 Sep 2020 19:51:47 +0100
User-agent: Mutt/1.14.6 (2020-07-11)

* Christian Schoenebeck (qemu_oss@crudebyte.com) wrote:
> On Friday, 25 September 2020 15:05:38 CEST Dr. David Alan Gilbert wrote:
> > > > 9p ( mount -t 9p -o trans=virtio kernel /mnt
> > > > -oversion=9p2000.L,cache=mmap,msize=1048576 ) test: (g=0): rw=randrw,
> > > 
> > > Bottleneck ------------------------------^
> > > 
> > > Increasing 'msize' should give you better 9P I/O results.
> > 
> > OK, I thought that was bigger than the default;  what number should I
> > use?
> 
> It depends on the underlying storage hardware. In other words: you have to
> keep increasing the 'msize' value until you no longer notice a negative
> performance impact (or almost none). Fortunately that's quite easy to test
> on the guest, e.g.:
> 
>       dd if=/dev/zero of=test.dat bs=1G count=12
>       time cat test.dat > /dev/null
> 
> I would start with an absolute minimum msize of 10 MB. For a mechanical
> hard drive I would recommend something around 100 MB; with PCIe flash you
> would probably rather pick several hundred MB or even more.
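
That recommendation translates into a simple sweep. A minimal sketch,
assuming the 'kernel' export tag and /mnt mount point used in this thread,
and that test.dat was created by the dd command above:

      # Remount with a range of msize values and time a large sequential
      # read for each one; pick the smallest value past which the time
      # stops improving.
      for msize in 10485760 104857600 419430400; do
          mount -t 9p -o trans=virtio,version=9p2000.L,cache=mmap,msize=$msize \
              kernel /mnt
          sync; echo 3 > /proc/sys/vm/drop_caches   # don't measure the page cache
          echo "msize=$msize:"
          time cat /mnt/test.dat > /dev/null
          umount /mnt
      done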
> 
> That unpleasant 'msize' issue is a limitation of the 9p protocol: the
> client (guest) must suggest the msize value when connecting to the server
> (host). The server can only lower that value, not raise it. And the client
> obviously cannot see the host's storage device(s), so it is unable to pick
> a good value by itself. So it's a suboptimal handshake right now.
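
Since the server may silently lower the suggested value, it's worth
verifying what was actually negotiated. A minimal check, assuming a Linux
9p client recent enough to report msize among its mount options:

      # On the guest: the effective msize shows up in the mount table.
      grep 9p /proc/mounts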

It doesn't seem to be making a vast difference here:



9p mount -t 9p -o trans=virtio kernel /mnt -oversion=9p2000.L,cache=mmap,msize=104857600

Run status group 0 (all jobs):
   READ: bw=62.5MiB/s (65.6MB/s), 62.5MiB/s-62.5MiB/s (65.6MB/s-65.6MB/s), io=3070MiB (3219MB), run=49099-49099msec
  WRITE: bw=20.9MiB/s (21.9MB/s), 20.9MiB/s-20.9MiB/s (21.9MB/s-21.9MB/s), io=1026MiB (1076MB), run=49099-49099msec

9p mount -t 9p -o trans=virtio kernel /mnt -oversion=9p2000.L,cache=mmap,msize=1048576000

Run status group 0 (all jobs):
   READ: bw=65.2MiB/s (68.3MB/s), 65.2MiB/s-65.2MiB/s (68.3MB/s-68.3MB/s), io=3070MiB (3219MB), run=47104-47104msec
  WRITE: bw=21.8MiB/s (22.8MB/s), 21.8MiB/s-21.8MiB/s (22.8MB/s-22.8MB/s), io=1026MiB (1076MB), run=47104-47104msec


Dave

> Many users don't even know this 'msize' parameter exists and hence run with
> the Linux kernel's default value of just 8 kB. For QEMU 5.2 I addressed this
> by logging a performance warning on the host side to at least make users
> aware of the issue. The long-term plan is to pass a good msize value from
> host to guest via virtio (like it's already done for the available export
> tags) so that the Linux kernel would default to that instead.
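
Until then the guest has to set msize itself on every mount. One way to make
a larger value persistent is an /etc/fstab entry; a sketch, assuming the
same 'kernel' export tag and /mnt mount point:

      # /etc/fstab: mount the 9p export at boot with a generous msize
      kernel  /mnt  9p  trans=virtio,version=9p2000.L,cache=mmap,msize=104857600  0  0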
> 
> Best regards,
> Christian Schoenebeck
> 
> 
-- 
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK



