Re: [RFC 0/3] block/file-posix: Work around XFS bug


From: Stefan Hajnoczi
Subject: Re: [RFC 0/3] block/file-posix: Work around XFS bug
Date: Sun, 27 Oct 2019 13:35:55 +0100
User-agent: Mutt/1.12.1 (2019-06-15)

On Fri, Oct 25, 2019 at 11:58:46AM +0200, Max Reitz wrote:
> As for how we can address the issue, I see three ways:
> (1) The one presented in this series: On XFS with aio=native, we extend
>     tracked requests for post-EOF fallocate() calls (i.e., write-zero
>     operations) to reach until infinity (INT64_MAX in practice), mark
>     them serializing and wait for other conflicting requests.
> 
>     Advantages:
>     + Limits the impact to very specific cases
>       (And that means it wouldn’t hurt too much to keep this workaround
>       even when the XFS driver has been fixed)
>     + Works around the bug where it happens, namely in file-posix
> 
>     Disadvantages:
>     - A bit complex
>     - A bit of a layering violation (should file-posix have access to
>       tracked requests?)

Your patch series is reasonable.  I don't think it's too bad.
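
For anyone skimming the thread, the core of option (1) boils down to
something like the toy model below.  It is illustrative only: the struct
and function names are made up, and the real series works on the block
layer's tracked-request machinery rather than a standalone struct.

/*
 * Toy model of option (1), not the actual patch: a post-EOF write-zeroes
 * (fallocate) request is recorded as covering [offset, INT64_MAX) and
 * marked serializing, so any overlapping request has to wait for it.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef struct TrackedReq {
    int64_t offset;
    int64_t end;          /* exclusive; INT64_MAX means "until infinity" */
    bool serializing;
    bool in_flight;
} TrackedReq;

/* Record a post-EOF zeroing request: extend it to INT64_MAX and mark it
 * serializing -- the core of the workaround. */
static void track_post_eof_zero(TrackedReq *req, int64_t offset)
{
    req->offset = offset;
    req->end = INT64_MAX;
    req->serializing = true;
    req->in_flight = true;
}

/* A later request conflicts if it overlaps a serializing in-flight
 * request; the real series would wait here, this model just reports it. */
static bool must_wait(const TrackedReq *zero_req, int64_t offset,
                      int64_t bytes)
{
    return zero_req->in_flight && zero_req->serializing &&
           offset < zero_req->end && offset + bytes > zero_req->offset;
}

int main(void)
{
    TrackedReq zero_req;

    track_post_eof_zero(&zero_req, 1 << 20);   /* zeroing from 1 MiB on */

    /* A concurrent 4 KiB write at 2 MiB falls inside [1 MiB, INT64_MAX),
     * so it would have to wait until the fallocate() has finished. */
    printf("conflict: %d\n", must_wait(&zero_req, 2 << 20, 4096));
    return 0;
}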

The main question is how to detect the XFS fix once it ships.  XFS
already has a ton of ioctls, so maybe they wouldn't mind adding a
feature/quirk bitmap ioctl for publishing information about bug fixes
to userspace.  I didn't see another obvious way of doing it; maybe a
mount option that the kernel automatically sets and that gets reported
to userspace?
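
Purely as a sketch of what such an interface could look like from QEMU's
side (no such XFS ioctl exists today; the request code and flag name
below are invented), the check might be as simple as:

/* Hypothetical sketch: neither XFS_IOC_GET_QUIRKS nor the quirk flag
 * exist today; they only illustrate how QEMU could detect the fix. */
#include <stdint.h>
#include <stdio.h>
#include <fcntl.h>
#include <sys/ioctl.h>
#include <unistd.h>

#define XFS_IOC_GET_QUIRKS          _IOR('X', 0x7f, uint64_t)  /* invented */
#define XFS_QUIRK_EOF_ZEROING_FIXED (1ULL << 0)                /* invented */

static int xfs_eof_zeroing_fixed(int fd)
{
    uint64_t quirks = 0;

    if (ioctl(fd, XFS_IOC_GET_QUIRKS, &quirks) < 0) {
        return 0;   /* old kernel: assume the bug is still there */
    }
    return !!(quirks & XFS_QUIRK_EOF_ZEROING_FIXED);
}

int main(int argc, char **argv)
{
    int fd = open(argc > 1 ? argv[1] : ".", O_RDONLY);

    if (fd < 0) {
        perror("open");
        return 1;
    }
    printf("workaround needed: %s\n",
           xfs_eof_zeroing_fixed(fd) ? "no" : "yes");
    close(fd);
    return 0;
}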

If we imagine that XFS will not provide a mechanism to detect the
presence of the fix, then could we ask QEMU package maintainers to
./configure --disable-xfs-fallocate-beyond-eof-workaround at some point
in the future when their distro has been shipping a fixed kernel for a
while?  It's ugly because it doesn't work if the user installs an older
custom-built kernel on the host.  But at least it will cover 98% of
users...
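
Just to make that concrete: the switch could end up as little more than a
compile-time constant guarding the workaround.  The CONFIG_* name below is
hypothetical; the real wiring would of course go through QEMU's configure
machinery:

/* Hypothetical sketch of how a configure switch could gate the
 * workaround at build time; the macro name is made up. */
#include <stdbool.h>
#include <stdio.h>

#ifndef CONFIG_XFS_FALLOCATE_BEYOND_EOF_WORKAROUND
#define CONFIG_XFS_FALLOCATE_BEYOND_EOF_WORKAROUND 1   /* default: on */
#endif

static bool need_xfs_workaround(bool is_xfs, bool aio_native)
{
    /* A distro that has shipped a fixed kernel for a while would build
     * with --disable-xfs-fallocate-beyond-eof-workaround, turning this
     * into a constant false. */
    return CONFIG_XFS_FALLOCATE_BEYOND_EOF_WORKAROUND && is_xfs && aio_native;
}

int main(void)
{
    printf("apply workaround: %d\n", need_xfs_workaround(true, true));
    return 0;
}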

> (3) Drop handle_alloc_space(), i.e. revert c8bb23cbdbe32f.
>     To my knowledge I’m the only one who has provided any benchmarks for
>     this commit, and even then I was a bit skeptical because it performs
>     well in some cases and badly in others.  I concluded that it’s
>     probably worth it because the “some cases” are more likely to occur.
> 
>     Now we have this problem of corruption here (granted due to a bug in
>     the XFS driver), and another report of massively degraded
>     performance on ppc64
>     (https://bugzilla.redhat.com/show_bug.cgi?id=1745823 – sorry, a
>     private BZ; I hate that :-/  The report is about 40 % worse
>     performance for an in-guest fio write benchmark.)
> 
>     So I have to ask what the justification for keeping c8bb23cbdbe32f
>     is.  How much does it actually improve performance?  (On
>     non-(ppc64+XFS) machines, obviously)
> 
>     Advantages:
>     + Trivial
>     + No layering violations
>     + We wouldn’t need to keep track of whether the kernel bug has been
>       fixed or not
>     + Fixes the ppc64+XFS performance problem
> 
>     Disadvantages:
>     - Reverts cluster allocation performance to pre-c8bb23cbdbe32f
>       levels, whatever that means

This is my favorite because it is clean and simple, but Vladimir has a
valid use case that requires this performance optimization, so reverting
isn't an option.
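
For context, what c8bb23cbdbe32f buys is roughly this: when the
copy-on-write head and tail of a freshly allocated cluster would only copy
zeroes, the whole cluster is zeroed with one cheap write-zeroes/fallocate
call instead of writing out zero buffers, and only the guest data is then
written.  The model below is a self-contained illustration of that idea,
not the actual handle_alloc_space() code:

/* Toy model of the c8bb23cbdbe32f idea, not the real handle_alloc_space():
 * zero the whole cluster in one call when the COW regions are known to be
 * zero, then write only the guest data. */
#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define CLUSTER_SIZE (64 * 1024)

/* Stand-ins for the block-layer calls; they just log what would happen. */
static void write_zeroes(int64_t off, int64_t len)
{
    printf("write_zeroes off=%" PRId64 " len=%" PRId64 " (fallocate)\n",
           off, len);
}

static void write_data(int64_t off, int64_t len)
{
    printf("write_data   off=%" PRId64 " len=%" PRId64 "\n", off, len);
}

static void alloc_cluster_write(int64_t cluster_off, int64_t data_off,
                                int64_t data_len, bool cow_reads_as_zero)
{
    if (cow_reads_as_zero) {
        /* The optimization: one cheap call covers the whole cluster ... */
        write_zeroes(cluster_off, CLUSTER_SIZE);
    }
    /* ... then write only the guest data.  Without the optimization the
     * COW head and tail would be written out as explicit zero buffers. */
    write_data(data_off, data_len);
}

int main(void)
{
    /* 4 KiB guest write in the middle of a fresh 64 KiB cluster whose
     * backing data reads as all zeroes. */
    alloc_cluster_write(0, 8 * 1024, 4 * 1024, true);
    return 0;
}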

Stefan

