
[Qemu-devel] [qcow2] how to avoid qemu doing lseek(SEEK_DATA/SEEK_HOLE)?

From: Stephane Chazelas
Subject: [Qemu-devel] [qcow2] how to avoid qemu doing lseek(SEEK_DATA/SEEK_HOLE)?
Date: Thu, 2 Feb 2017 12:30:45 +0000
User-agent: Mutt/1.5.24 (2015-08-30)


Since qemu-2.7.0, synchronised I/O in a VM (tested with an Ubuntu
16.04 amd64 guest) has dreadful performance when the disk is
backed by a qcow2 file sitting on a ZFS filesystem (ZFS on Linux,
on Debian jessie (PVE)):

# time dd if=/dev/zero count=1000  of=b oflag=dsync
1000+0 records in
1000+0 records out
512000 bytes (512 kB, 500 KiB) copied, 21.9908 s, 23.3 kB/s
dd if=/dev/zero count=1000 of=b oflag=dsync  0.00s user 0.04s system 0% cpu 
21.992 total

(22 seconds to write that half megabyte). The same happens with
O_SYNC or O_DIRECT, or when doing fsync() or sync_file_range()
after each write.

I first noticed it when dpkg was unpacking kernel headers, as dpkg
does a sync_file_range() after each file it extracts.

Note that it doesn't happen when writing anything other than
zeroes (like tr '\0' x < /dev/zero | dd count=1000 of=b
oflag=dsync). In the case of the kernel headers, I suppose the
zeroes come from the unfilled parts of the ext4 blocks.

Running strace -fc on the qemu process shows 98% of the time
spent in the lseek() system call.

That's the lseek(SEEK_DATA) followed by lseek(SEEK_HOLE) done by
find_allocation(), which is called to find out whether sectors
lie within a hole in a sparse file:

#0  lseek64 () at ../sysdeps/unix/syscall-template.S:81
#1  0x0000561287cf4ca8 in find_allocation (bs=0x7fd898d70000, hole=<synthetic pointer>, data=<synthetic pointer>, start=<optimized out>) at block/raw-posix.c:1702
#2  raw_co_get_block_status (bs=0x7fd898d70000, sector_num=<optimized out>, nb_sectors=40, pnum=0x7fd80dd05aac, file=0x7fd80dd05ab0) at 
#3  0x0000561287cfae92 in bdrv_co_get_block_status (bs=0x7fd898d70000, address@hidden, nb_sectors=40, address@hidden, address@hidden) at block/io.c:1709
#4  0x0000561287cfafaa in bdrv_co_get_block_status (address@hidden, address@hidden, nb_sectors=<optimized out>, address@hidden, address@hidden, address@hidden) at block/io.c:1742
#5  0x0000561287cfb0bb in bdrv_co_get_block_status_above (file=0x7fd80dd05bc0, pnum=0x7fd80dd05bbc, nb_sectors=40, sector_num=33974144, base=0x0, bs=<optimized out>) at block/io.c:1776
#6  bdrv_get_block_status_above_co_entry (address@hidden) at block/io.c:1792
#7  0x0000561287cfae08 in bdrv_get_block_status_above (bs=0x7fd898d66000, address@hidden, sector_num=<optimized out>, address@hidden, address@hidden, address@hidden) at block/io.c:1824
#8  0x0000561287cd372d in is_zero_sectors (bs=<optimized out>, start=<optimized out>, count=40) at block/qcow2.c:2428
#9  0x0000561287cd38ed in is_zero_sectors (count=<optimized out>, start=<optimized out>, bs=<optimized out>) at block/qcow2.c:2471
#10 qcow2_co_pwrite_zeroes (bs=0x7fd898d66000, offset=33974144, count=24576, flags=2724114573) at block/qcow2.c:2452
#11 0x0000561287cfcb7f in bdrv_co_do_pwrite_zeroes (address@hidden, address@hidden, address@hidden, address@hidden) at block/io.c:1218
#12 0x0000561287cfd0cb in bdrv_aligned_pwritev (bs=0x7fd898d66000, req=<optimized out>, offset=17394782208, bytes=4096, align=1, qiov=0x0, flags=<optimized out>) at block/io.c:1320
#13 0x0000561287cfe450 in bdrv_co_do_zero_pwritev (req=<optimized out>, flags=<optimized out>, bytes=<optimized out>, offset=<optimized out>, bs=<optimized out>) at block/io.c:1422
#14 bdrv_co_pwritev (child=0x15, offset=17394782208, bytes=4096, qiov=0x7fd8a25eb08d <lseek64+45>, address@hidden, flags=231758512) at 
#15 0x0000561287cefdc7 in blk_co_pwritev (blk=0x7fd898cad540, offset=17394782208, bytes=4096, qiov=0x0, flags=<optimized out>) at 
#16 0x0000561287cefeeb in blk_aio_write_entry (opaque=0x7fd812941440) at 
#17 0x0000561287d67c7a in coroutine_trampoline (i0=<optimized out>, i1=<optimized out>) at util/coroutine-ucontext.c:78

Now, those lseek()s perform really badly on ZFS. I believe that's
https://github.com/zfsonlinux/zfs/issues/4306

Until that's fixed in ZFS, I need to find a way to avoid those
lseek()s in the first place.

One way is to downgrade to 2.6.2, where those lseek()s are not
made. The change that introduced them seems to be:

(and there have been further changes improving it since).

If I understand correctly, that change was about preventing data
from being allocated when the user writes unaligned zeroes.

I suppose the idea is that if something is trying to write zeroes
in the middle of an _allocated_ qcow2 cluster while the
corresponding sectors of the underlying file are in a hole, we
don't want to write those zeroes literally, as that would allocate
space at the file level.

I can see that it makes sense, but in my case the small gain in
space efficiency is largely outweighed by the sharp drop in
performance.

For now, I work around it by changing the "#ifdef SEEK_DATA"
to "#if 0" in find_allocation().
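For reference, the hack looks like this against the 2.7 tree (the
hunk context is approximate; the line number comes from the
backtrace above):

```diff
--- a/block/raw-posix.c
+++ b/block/raw-posix.c
@@ static int find_allocation(BlockDriverState *bs, off_t start,
-#ifdef SEEK_DATA
+#if 0 /* never probe the file for holes */
```

With the probe compiled out, find_allocation() takes its fallback
path and the caller treats the sectors as allocated, so no lseek()
is issued at all.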

Note that passing detect-zeroes=off or detect-zeroes=unmap (with
discard) doesn't help, even though FALLOC_FL_PUNCH_HOLE is
supported by ZFS on Linux.

Is there any other way I could prevent those lseek()s without
having to rebuild qemu?

Would you consider adding an option to disable that behaviour
(i.e. skip the file-level allocation check for qcow2 images)?

