Re: [Qemu-block] [PATCH 0/2] block/file-posix: Reduce xfsctl() use


From: Stefano Garzarella
Subject: Re: [Qemu-block] [PATCH 0/2] block/file-posix: Reduce xfsctl() use
Date: Wed, 28 Aug 2019 11:34:41 +0200
User-agent: NeoMutt/20180716

On Fri, Aug 23, 2019 at 03:03:39PM +0200, Max Reitz wrote:
> Hi,
> 
> As suggested by Paolo, this series drops xfsctl() calls where we have
> working fallocate() alternatives.  (And thus replaces “block/file-posix:
> Fix xfs_write_zeroes()”.)
> 
> Unfortunately, we also use xfsctl() to inquire the request alignment for
> O_DIRECT, and this is the only way we currently have to obtain it
> without trying.  Therefore, I didn’t quite like removing that call, too,
> so this series doesn’t get rid of xfsctl() completely.
> 
> (If we did, we could delete 146 lines instead of these measly 76 here.)
> 
> 
> Anyway, dropping xfs_write_zeroes() will also fix the guest corruptions
> Lukáš has reported (for qcow2, but I think it should be possible to see
> similar corruptions with raw, although I haven’t investigated that too
> far).
> 
> 
> Max Reitz (2):
>   block/file-posix: Reduce xfsctl() use
>   iotests: Test reverse sub-cluster qcow2 writes
> 
>  block/file-posix.c         | 77 +-------------------------------------
>  tests/qemu-iotests/265     | 67 +++++++++++++++++++++++++++++++++
>  tests/qemu-iotests/265.out |  6 +++
>  tests/qemu-iotests/group   |  1 +
>  4 files changed, 75 insertions(+), 76 deletions(-)
>  create mode 100755 tests/qemu-iotests/265
>  create mode 100644 tests/qemu-iotests/265.out
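For context (and for anyone who has not looked at block/file-posix.c recently), here is a rough sketch of the two xfsctl() uses discussed above; it is not taken from the patch itself, and the helper names and error handling are made up for illustration. The first part is what the series keeps (only XFS reports the O_DIRECT request alignment directly via XFS_IOC_DIOINFO); the second is the kind of plain fallocate() call that replaces the old XFS_IOC_ZERO_RANGE path:

#define _GNU_SOURCE
#include <fcntl.h>      /* fallocate(), FALLOC_FL_ZERO_RANGE (recent glibc) */
#include <errno.h>
#include <xfs/xfs.h>    /* xfsctl(), XFS_IOC_DIOINFO, struct dioattr */

/* Kept: ask XFS for the O_DIRECT alignment constraints. */
static int query_direct_io_alignment(int fd, int *min_align)
{
    struct dioattr da;

    if (xfsctl(NULL, fd, XFS_IOC_DIOINFO, &da) < 0) {
        return -errno;
    }
    *min_align = da.d_miniosz;  /* minimum size/alignment for O_DIRECT I/O */
    return 0;
}

/* Dropped: zeroing no longer needs xfsctl(XFS_IOC_ZERO_RANGE), because
 * fallocate() can do the same on any file system that supports the flag. */
static int zero_range_fallocate(int fd, off_t offset, off_t len)
{
    if (fallocate(fd, FALLOC_FL_ZERO_RANGE, offset, len) < 0) {
        return -errno;
    }
    return 0;
}

(Building the first helper needs the xfsprogs headers for <xfs/xfs.h>, which is presumably why the remaining xfsctl() call stays behind a build-time XFS check in the real code.)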

The patch and the test LGTM.

I ran the 265 test without the
"block/file-posix: Reduce xfsctl() use" patch and the failure rate was ~30% on
my system.

With the patch applied, the failure rate was 0% :-)
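(For anyone who wants to reproduce this: the test lives in tests/qemu-iotests, so something along the lines of "cd tests/qemu-iotests && ./check -qcow2 265", with the scratch/image directory on XFS so that the xfsctl() path is actually exercised, should do it; the exact invocation depends on the local setup.)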

Reviewed-by: Stefano Garzarella <address@hidden>
Tested-by: Stefano Garzarella <address@hidden>

Thanks,
Stefano


