From: Anthony Liguori
Subject: Re: [Qemu-devel] [PATCH -V6 00/21] virtio-9p: paravirtual file system passthrough
Date: Mon, 03 May 2010 12:29:22 -0500
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.1.5) Gecko/20091209 Fedora/3.0-4.fc12 Lightning/1.0pre Thunderbird/3.0

On 04/29/2010 07:14 AM, Aneesh Kumar K.V wrote:
Hi,

This patch series adds VirtFS to QEMU. VirtFS is the code name for a 9P filesystem
server in QEMU that enables paravirtual filesystem pass-through between a KVM host
and guest.

Applied all.  Thanks.

Regards,

Anthony Liguori

VirtFS is intended to offer an alternative to NFS/CIFS for sharing host filesystems
with the guest, and to provide better performance. Initial tests showed significantly
better performance than NFS and CIFS; performance numbers are provided toward the
end of this mail.

With the current implementation, all I/O is performed in the VCPU thread.
We've modified the protocol handlers so that we can support dispatching I/O
to a thread pool. The actual thread pool implementation will be posted later.

This patch set should work with any recent Linux kernel, as virtio-9p has been
supported for a few kernel releases now. The export directory is specified using
the QEMU options below.

-fsdev fstype,id=ID,path=path/to/share \
            -device virtio-9p-pci,fsdev=ID,mount_tag=tag
or

-virtfs fstype,path=path/to/share,mount_tag=tag
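
For illustration, a complete invocation using the first form might look like the
following; the guest image name, export path, fsdev ID, and mount tag here are
placeholders, not values from the patch series:

qemu-system-x86_64 -m 4096 -hda guest.img \
            -fsdev local,id=exp1,path=/srv/export \
            -device virtio-9p-pci,fsdev=exp1,mount_tag=hostshare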

The only supported fstype currently is "local". mount_tag is used to identify
the mount point in the guest kernel. It is available in the Linux kernel via
the /sys/devices/virtio-pci/virtio1/mount_tag file.
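
On the guest side, the share can then be mounted over the virtio transport using
that tag. A minimal sketch, assuming the placeholder mount_tag "hostshare" from
the example above and /mnt/share as the mount point:

cat /sys/devices/virtio-pci/virtio1/mount_tag
mount -t 9p -o trans=virtio hostshare /mnt/share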

Changes from V5:
1) Rebased to qemu master 9ed7b059ef776a3921cfd085e891f45076922542
2) Use endian conversion when passing tag len from qemu to guest.
3) Use QLIST instead of open coding a list
4) Fix build error with a srcdir != objdir
5) Remove --enable/disable-linux-virtfs. VirtFS is now enabled by default on Linux

Changes from V4:
1) Rebased to qemu master bf3de7f16f2ab9e2ce57704e0b8a19e929dbf73e
2) Fix for readdir not listing full directory entries after an fsstress run

Changes from V3:
1) Makefiles are modified so that this code is compiled only on Linux.
2) Replaced vasprintf() with qemu_malloc() followed by sprintf().
3) Formatting changes per QEMU coding standards.
4) Folded bug fixes into the original patches
5) Added a configure option to enable/disable VirtFS

Changes from V2:
1) Added a new method for specifying the export dir. This new method should be
more flexible.
2) Rebased to qemu master bedd2912c83b1a87a6bfe3f59a892fd65cda7084

Changes from V1:
1) The fsstress test suite runs successfully with the patches. That should indicate
     the patches are stable enough to be merged.
2) Added proper error handling to all posix_* calls.
3) Fixed code to follow QEMU coding style.
4) Other bug fixes, most of which are folded back into the original patches
5) Rebased to qemu master 0aef4261ac0ec9089ade0e3a92f986cb4ba7317e

Performance details:

# Host

     * 3650M2
     * 16 CPU, 32 GB RAM
     * 28 JBOD disks (18G each)
     * DM striped to make a single 400GB disk/filesystem
     * 400 GB ext3 filesystem exported/serving data in the host
     * RHEL5.5, QEMU + 9p Server patches

# Guest

     * 1 vCPU, 4GB memory
     * virtio network access for CIFS, NFS
     * virtio transport for virtfs mount
     * 2.6.33-rc8 + v9 fixes (either on mainline or on 9p-devel list)

# Tests:

     * Created 16 20-GB files on the filesystem
     * Guest mounts the filesystem through v9fs (virtio), NFS (virtio), and CIFS (virtio)
     * Performed sequential read and sequential write tests on these 16 20-GB
       files from the guest
     * Repeated the tests with various thread/process (dd) counts
     * Between each test, the host and guest unmount and remount the filesystem
       to eliminate any caching effects.

# read tests (sample):
for i in 1 2 3 4 5 6 7 8; do time dd of=/dev/null if=./file$i bs=2M count=10240 & done

# of Threads |  1          2         4         8
----------------------------------------------------
VirtFS(MB/s) |  172        170       168       162
CIFS(MB/s)   |  10         12        22        35
NFS(MB/s)    |  80         70        62        42

# write tests (sample):
for i in 1 2 3 4 5 6 7 8; do time dd if=/dev/zero of=./file$i bs=2M count=10240 & done

# of Threads |  1          2         4          8
-------------------------------------------------------
VirtFS(MB/s) |  190        182       150       138
CIFS(MB/s)   |  30         38        78        100
NFS(MB/s)    |  35         35        38        37


-aneesh