qemu-devel

Re: [Qemu-devel] qemu-nbd performance


From: Eric Blake
Subject: Re: [Qemu-devel] qemu-nbd performance
Date: Tue, 18 Sep 2018 08:36:34 -0500
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Thunderbird/60.0

On 9/18/18 2:06 AM, lampahome wrote:
I tested nbd performance when dividing the image into multiple backing files.
The image is 512GB; I divide it into 1, 16, 32, 64, and 128 backing files.

Ex: If I divide it into 16 files, each backing file is 512/16 = 32GB.
If I divide it into 64 files, each backing file is 512/64 = 8GB, and so on.
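
For reference, a minimal sketch of how such a chain might be created with qemu-img, assuming qcow2 backing files and hypothetical file names (the original message does not show the exact commands); each layer keeps the full 512G virtual size, with one slice of the data written into each layer:

  qemu-img create -f qcow2 part01.qcow2 512G
  qemu-img create -f qcow2 -o backing_file=part01.qcow2,backing_fmt=qcow2 part02.qcow2 512G
  ...
  qemu-img create -f qcow2 -o backing_file=part15.qcow2,backing_fmt=qcow2 image 512G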

Mount command: qemu-nbd -c /dev/nbd0 image

Commands to test:

Read:

fio -bs=1m -iodepth=16 -rw=read -ioengine=libaio -name=mytest -direct=1
  -size=512G -runtime=300 -filename=/dev/nbd0

Write:

fio -bs=1m -iodepth=16 -rw=write -ioengine=libaio -name=mytest -direct=1
  -size=512G -runtime=300 -filename=/dev/nbd0


All images are on a RAID0 array (3 SSDs).
Below is the performance:

image number   seq. read (MB/s)   seq. write (MB/s)
     1              1480                 60
    16              1453                 36
    32              1450                 36
    64              1430                 36
   128              1400                 36

Your attempt at sending HTML mail came through corrupted when viewed as plain text. Technical mailing lists tend to prefer plain-text-only emails (less redundant information transferred), and some mailing lists actively strip out the HTML half of a multipart/alternative MIME message. So I can't tell what you were trying to report here.

The seq. read performance is much better than write.
1. Does nbd cache the data by default and make reads so quick?

You're asking about multiple layers of the stack. If you are trying to profile things, you'll need to take care about which part of the stack you are profiling.

When you use qemu-nbd -c /dev/nbd0, the kernel collects requests from the user-space app (fio) and forwards them to the qemu-nbd process acting as the server. I'm not sure how much caching the kernel does or does not do; for that, you'd have to check the source of the kernel nbd module.

Once a request reaches the qemu-nbd process, it SHOULD be handling up to 16 parallel requests (reads or writes), modulo any locking where it has to serialize in order to perform correct COW operations. On that front, you can use qemu's trace mechanisms to validate which operations are being performed when, and to analyze where any hot spots may live.

Right now, qemu does not do any caching of reads (there has been talk of adding a cache driver, which would benefit more than just nbd, but patches for that have not been finalized).
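
A minimal sketch of enabling those traces, assuming a qemu-nbd build with trace support (the event pattern and log file name here are illustrative, not taken from the thread):

  qemu-nbd --trace "enable=nbd_*,file=nbd-trace.log" -c /dev/nbd0 image

A pattern like this would log the server-side NBD events for each request, which can help confirm whether the 16 in-flight requests from fio are really being handled in parallel.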

2. The write performance isn't so good. Does nbd do something that decreases
the performance?

Nothing in particular is done to intentionally slow things down, but you'd have to profile your setup if you want to identify actual hot spots.
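
A minimal sketch of one way to start, assuming qemu-img bench is available and the image is qcow2 (implied by the backing-file setup); it drives the image directly with the same block size and queue depth, taking the kernel nbd module out of the picture so the numbers can be compared against the fio results over /dev/nbd0:

  qemu-img bench -f qcow2 -t none -d 16 -s 1M -c 10000 image
  qemu-img bench -f qcow2 -t none -d 16 -s 1M -c 10000 -w image

If the write numbers drop in the same way here, the cost is in qemu's block layer (for example qcow2 COW/metadata handling) rather than in the kernel nbd path.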

--
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org


