Re: [Qemu-devel] qemu-nbd performance
From: Eric Blake
Subject: Re: [Qemu-devel] qemu-nbd performance
Date: Tue, 25 Sep 2018 10:31:32 -0500
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Thunderbird/60.0
On 9/25/18 6:23 AM, lampahome wrote:
> I put the image on the 3xSSD RAID0, and the raw performance of the block
> device is read: 1500MB/s, write: 1400MB/s.
> The bottleneck, I thought, is the number of backing files.
> The more images I divide it into, the lower the read performance.
An obvious reason for that: right now, the code base asks every backing
file along the chain for a given guest offset until it finds one that
contains the data. So, comparing:

0-2m <- 2m-4m <- active
0-1m <- 1m-2m <- 2m-3m <- 3m-4m <- active
a read that lands at guest offset 0 currently has to check twice as
many backing files to find the actual data, because you have split the
backing chain into twice as many files. An obvious improvement: write
a driver, comparable to quorum, that concatenates multiple images; at
that point a read at a given offset goes directly to the correct
image, rather than chasing through a chain of unrelated images. (But
we've already made the suggestion of writing a new driver for
concatenating images, and so far no one has started coding it up.)
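The difference between the two lookup strategies can be sketched in a
few lines of C. This is not QEMU code, just a toy model I am making up
for illustration: `chain_lookup` probes every layer from the active
image down (linear in the chain length), while a hypothetical concat
driver with equal-sized children could compute the right child from
the offset in a single step.

```c
#include <assert.h>
#include <stddef.h>

/* Toy model of one image in a backing chain: it holds data for
 * guest offsets in the half-open range [start, end). */
typedef struct {
    size_t start;
    size_t end;
} Image;

/* Backing-chain style lookup: walk from the active layer down,
 * probing each image until one covers the offset.  Returns the
 * index of the matching image (or -1) and counts the probes. */
static int chain_lookup(const Image *chain, int n, size_t off, int *probes)
{
    *probes = 0;
    for (int i = n - 1; i >= 0; i--) {   /* active layer first */
        (*probes)++;
        if (off >= chain[i].start && off < chain[i].end) {
            return i;
        }
    }
    return -1;
}

/* Concat-style lookup: with equal-sized children, the right image
 * follows directly from the offset -- one step, no chain walk. */
static int concat_lookup(size_t child_size, int n, size_t off)
{
    int i = (int)(off / child_size);
    return i < n ? i : -1;
}
```

With the four-way split above, a read at offset 0 costs four probes
through `chain_lookup` but a single division in `concat_lookup`, which
is the whole point of the proposed concatenation driver.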
> Write performance looks bad originally.
Telling us what we already know (that performance numbers are low)
doesn't improve the situation as much as actually profiling to
identify the hotspots and accompanying the results with patches that
speed up the low-hanging fruit.
--
Eric Blake, Principal Software Engineer
Red Hat, Inc. +1-919-301-3266
Virtualization: qemu.org | libvirt.org