qemu-commits

From: Peter Maydell
Subject: [Qemu-commits] [qemu/qemu] a0441b: qemu-img: add support for rate limit in qemu-img c...
Date: Fri, 30 Oct 2020 08:49:28 -0700

  Branch: refs/heads/master
  Home:   https://github.com/qemu/qemu
  Commit: a0441b66e811f24d92238e9a34f9d46b3a9058fa
  https://github.com/qemu/qemu/commit/a0441b66e811f24d92238e9a34f9d46b3a9058fa
  Author: Zhengui <lizhengui@huawei.com>
  Date:   2020-10-27 (Tue, 27 Oct 2020)

  Changed paths:
    M docs/tools/qemu-img.rst
    M qemu-img-cmds.hx
    M qemu-img.c

  Log Message:
  -----------
  qemu-img: add support for rate limit in qemu-img commit

Add support for a rate limit in qemu-img commit.

Signed-off-by: Zhengui <lizhengui@huawei.com>
Message-Id: <1603205264-17424-2-git-send-email-lizhengui@huawei.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
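The new option can be exercised from the command line. A minimal sketch, assuming the rate-limit flag is spelled `-r` as in the qemu-img synopsis and accepts the usual size suffixes; the image names are illustrative:

```shell
# Build a tiny backing chain to commit (file names are placeholders)
qemu-img create -f qcow2 base.qcow2 1G
qemu-img create -f qcow2 -b base.qcow2 -F qcow2 overlay.qcow2

# Commit the overlay into its backing file, throttled to roughly 10 MB/s
qemu-img commit -p -r 10M overlay.qcow2
```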


  Commit: 0c8c4895a6a54ffb7209402b183297c80c868873
  https://github.com/qemu/qemu/commit/0c8c4895a6a54ffb7209402b183297c80c868873
  Author: Zhengui <lizhengui@huawei.com>
  Date:   2020-10-27 (Tue, 27 Oct 2020)

  Changed paths:
    M docs/tools/qemu-img.rst
    M qemu-img-cmds.hx
    M qemu-img.c

  Log Message:
  -----------
  qemu-img: add support for rate limit in qemu-img convert

Add support for a rate limit in qemu-img convert.

Signed-off-by: Zhengui <lizhengui@huawei.com>
Message-Id: <1603205264-17424-3-git-send-email-lizhengui@huawei.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
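Analogous usage for convert. A sketch under the same assumption that the flag is `-r` with a bytes-per-second argument; the paths are illustrative:

```shell
# Convert a raw disk to qcow2 while capping throughput at ~50 MB/s,
# e.g. to avoid saturating shared storage while the conversion runs
qemu-img convert -p -f raw -O qcow2 -r 50M input.raw output.qcow2
```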


  Commit: d40f4a565aa64a1ef1e1ff73caf53d61cac9a67f
  https://github.com/qemu/qemu/commit/d40f4a565aa64a1ef1e1ff73caf53d61cac9a67f
  Author: Alberto Garcia <berto@igalia.com>
  Date:   2020-10-27 (Tue, 27 Oct 2020)

  Changed paths:
    M block/io.c

  Log Message:
  -----------
  qcow2: Report BDRV_BLOCK_ZERO more accurately in bdrv_co_block_status()

If a BlockDriverState supports backing files but has none, then any
unallocated area reads back as zeroes.

bdrv_co_block_status() only reports this if want_zero is true, but
this is an inexpensive test and there is no reason not to do it in
all cases.

Suggested-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <66fa0914a0e2b727ab6d1b63ca773d7cd29a9a9e.1603731354.git.berto@igalia.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
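The reported status is observable from userspace with `qemu-img map`, whose JSON output carries a per-range "zero" flag. A sketch, assuming a freshly created image with no backing file:

```shell
# Create an empty qcow2 image (no backing file)
qemu-img create -f qcow2 test.qcow2 64M

# With no backing file, the unallocated range should be reported as
# reading back zeroes ("zero": true, "data": false) in the JSON map
qemu-img map --output=json test.qcow2
```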


  Commit: 46cd1e8a4752379b1b9d24d43d7be7d5aba03e76
  https://github.com/qemu/qemu/commit/46cd1e8a4752379b1b9d24d43d7be7d5aba03e76
  Author: Alberto Garcia <berto@igalia.com>
  Date:   2020-10-27 (Tue, 27 Oct 2020)

  Changed paths:
    M block/io.c
    M block/qcow2.c
    M include/block/block.h

  Log Message:
  -----------
  qcow2: Skip copy-on-write when allocating a zero cluster

Since commit c8bb23cbdbe32f5c326365e0a82e1b0e68cdcd8a when a write
request results in a new allocation QEMU first tries to see if the
rest of the cluster outside the written area contains only zeroes.

In that case, instead of doing a normal copy-on-write operation and
writing explicit zero buffers to disk, the code zeroes the whole
cluster efficiently using pwrite_zeroes() with BDRV_REQ_NO_FALLBACK.

This improves performance very significantly but it only happens when
we are writing to an area that was completely unallocated before. Zero
clusters (QCOW2_CLUSTER_ZERO_*) are treated like normal clusters and
are therefore slower to allocate.

This happens because the code uses bdrv_is_allocated_above() rather
than bdrv_block_status_above(). The former is not as accurate for this
purpose, but it is faster. However, in the case of qcow2 the underlying
call already reports zero clusters just fine, so there is no reason
why we cannot use that information.

In tests of 4 KB writes on an image that contains only zero clusters,
this patch results in almost five times more IOPS.

Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <6d77cab968c501c44d6e1089b9bc91b04170b49e.1603731354.git.berto@igalia.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
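The fast path in question can be exercised with `qemu-io`. A sketch of the scenario the patch speeds up, assuming the default 64 KiB cluster size; the image name is illustrative:

```shell
# Empty qcow2 image with the default 64 KiB cluster size
qemu-img create -f qcow2 test.qcow2 1G

# Turn the first cluster into a zero cluster (QCOW2_CLUSTER_ZERO_*)
qemu-io -c 'write -z 0 64k' test.qcow2

# A small write into that zero cluster triggers the allocation path
# that previously fell back to a full copy-on-write of the cluster
qemu-io -c 'write -P 0xab 0 4k' test.qcow2
```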


  Commit: 1a6d3bd229d429879a85a9105fb84cae049d083c
  https://github.com/qemu/qemu/commit/1a6d3bd229d429879a85a9105fb84cae049d083c
  Author: Greg Kurz <groug@kaod.org>
  Date:   2020-10-27 (Tue, 27 Oct 2020)

  Changed paths:
    M block.c
    M block/io.c
    M include/block/block.h
    M tests/test-bdrv-drain.c

  Log Message:
  -----------
  block: End quiescent sections when a BDS is deleted

If a BDS gets deleted during blk_drain_all(), it might miss a
call to bdrv_do_drained_end(). This means missing a call to
aio_enable_external(), and the AIO context remains disabled
forever. This can cause a device to become unresponsive and to
disrupt the guest execution, i.e. hang, loop forever or worse.

This scenario is quite easy to encounter with virtio-scsi
on POWER when punching multiple blockdev-create QMP commands
while the guest is booting and it is still running the SLOF
firmware. This happens because SLOF disables/re-enables PCI
devices multiple times via IO/MEM/MASTER bits of PCI_COMMAND
register after the initial probe/feature negotiation, as it
tends to work with a single device at a time at various stages
like probing and running block/network bootloaders without
doing a full reset in-between. This naturally generates many
dataplane stops and starts, and thus many drain sections that
can race with blockdev_create_run(). In the end, SLOF bails
out.

It is also reproducible on x86, but that requires generating
artificial dataplane start/stop activity with stop/cont QMP
commands. In this case, SeaBIOS ends up looping forever, waiting
for the virtio-scsi device to send a response to a command it
never received.

Add a helper that pairs all previously called bdrv_do_drained_begin()
with a bdrv_do_drained_end() and call it from bdrv_close().
While at it, update the "/bdrv-drain/graph-change/drain_all"
test in test-bdrv-drain so that it can catch the issue.

BugId: https://bugzilla.redhat.com/show_bug.cgi?id=1874441
Signed-off-by: Greg Kurz <groug@kaod.org>
Message-Id: <160346526998.272601.9045392804399803158.stgit@bahia.lan>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>


  Commit: c99fa56b95a72f6debd50a280561895d078ae020
  https://github.com/qemu/qemu/commit/c99fa56b95a72f6debd50a280561895d078ae020
  Author: Peter Maydell <peter.maydell@linaro.org>
  Date:   2020-10-30 (Fri, 30 Oct 2020)

  Changed paths:
    M block.c
    M block/io.c
    M block/qcow2.c
    M docs/tools/qemu-img.rst
    M include/block/block.h
    M qemu-img-cmds.hx
    M qemu-img.c
    M tests/test-bdrv-drain.c

  Log Message:
  -----------
  Merge remote-tracking branch 'remotes/kevin/tags/for-upstream' into staging

Block layer patches:

- qcow2: Skip copy-on-write when allocating a zero cluster
- qemu-img: add support for rate limit in qemu-img convert/commit
- Fix deadlock when deleting a block node during drain_all

# gpg: Signature made Tue 27 Oct 2020 15:14:07 GMT
# gpg:                using RSA key DC3DEB159A9AF95D3D7456FE7F09B272C88F2FD6
# gpg:                issuer "kwolf@redhat.com"
# gpg: Good signature from "Kevin Wolf <kwolf@redhat.com>" [full]
# Primary key fingerprint: DC3D EB15 9A9A F95D 3D74  56FE 7F09 B272 C88F 2FD6

* remotes/kevin/tags/for-upstream:
  block: End quiescent sections when a BDS is deleted
  qcow2: Skip copy-on-write when allocating a zero cluster
  qcow2: Report BDRV_BLOCK_ZERO more accurately in bdrv_co_block_status()
  qemu-img: add support for rate limit in qemu-img convert
  qemu-img: add support for rate limit in qemu-img commit

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>


Compare: https://github.com/qemu/qemu/compare/d03e884e4ece...c99fa56b95a7


