


From: Fam Zheng
Subject: Re: [Qemu-devel] [PATCH RFC for-3.0-rc3 0/3] qemu-img: Disable copy offloading by default
Date: Fri, 27 Jul 2018 20:14:15 +0800

On Fri, Jul 27, 2018 at 6:29 PM Kevin Wolf <address@hidden> wrote:
> Am 27.07.2018 um 05:33 hat Fam Zheng geschrieben:
> > Kevin pointed out that both glibc and the kernel provide a slow fallback
> > for copy_file_range which hurts thin provisioning. This is particularly
> > true for thin LVs, because the host_device driver cannot get allocation
> > info from the volume, so copy_file_range is called on every sector,
> > leaving the destination fully allocated.
> >
> > NFS mount points also don't support SEEK_DATA well, so the allocation
> > information is unknown to QEMU.
> >
> > That leaves only iscsi://, which seems to do what we want so far, but it
> > is a smaller use case.
> >
> > Add an option to qemu-img convert, "-C", to explicitly enable attempting
> > copy offloading, and mark it incompatible with "-S" and "-c".
> Reviewed-by: Kevin Wolf <address@hidden>
>
> Not sure why you made this an RFC only, but I think we absolutely need
> this. People are used to using 'qemu-img convert' to compact images and
> this would regress with automatic copy offloading.
>
> Do you think we need more discussion?

I think merging this for 3.0 is the right thing to do.

What worries me is the general usability of the feature. We could
probably explore ideas about how to take better advantage of copy
offloading. I don't think counting on the user to make the right
decision between disk efficiency (thin provisioning) and bandwidth
efficiency (copy offloading) will ever work. Even if we don't care
about breaking the default '-S 4k' behavior, the lack of
SEEK_DATA/SEEK_HOLE support on host NFS and block devices will make it
very hard to use. Making things worse, if the network to the NFS
server is good enough, converting with pread64/pwrite64 through the
host page cache is also more efficient than copy_file_range, so we'll
end up slower by trying to be clever. :(
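The SEEK_DATA gap can be illustrated with a short sketch (a toy
illustration, not QEMU code; `supports_seek_data` is a made-up helper,
and this assumes Linux with Python 3.3+). On filesystems that don't
support it, lseek() fails with EINVAL, so a converter cannot tell data
from holes and has to treat everything as data:

```python
# Illustration only, not QEMU code: check whether an fd's filesystem
# reports allocation information via SEEK_DATA, which is what a
# converter would need in order to skip holes instead of copying zeroes.
import errno
import os
import tempfile

def supports_seek_data(fd):
    try:
        os.lseek(fd, 0, os.SEEK_DATA)
        return True
    except OSError as e:
        if e.errno == errno.ENXIO:
            # No data at or after offset 0 (fully sparse file): the
            # filesystem does understand SEEK_DATA.
            return True
        if e.errno in (errno.EINVAL, errno.ENOTSUP):
            # SEEK_DATA not supported here, e.g. many NFS mounts.
            return False
        raise

if __name__ == "__main__":
    fd, path = tempfile.mkstemp()
    try:
        os.write(fd, b"data")
        print("SEEK_DATA supported:", supports_seek_data(fd))
    finally:
        os.close(fd)
        os.unlink(path)
```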

Any thoughts?
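For reference, the two copy paths being compared above can be sketched
like this (again a toy illustration under assumed versions -- Python
3.8+ and Linux 4.5+ for os.copy_file_range -- not QEMU's
implementation; `copy_chunk` is a made-up helper). Without allocation
info, every chunk takes one of these two paths, so a thin destination
ends up fully allocated either way:

```python
# Illustration only, not QEMU code: try copy_file_range() first and
# fall back to plain pread/pwrite, which is roughly what the
# glibc/kernel emulation does for us anyway.
import os

def copy_chunk(src_fd, dst_fd, offset, length):
    try:
        # Offloaded path: os.copy_file_range(src, dst, count, off_src, off_dst)
        return os.copy_file_range(src_fd, dst_fd, length, offset, offset)
    except (AttributeError, OSError):
        # Fallback path: bounce the data through userspace and the
        # host page cache.
        data = os.pread(src_fd, length, offset)
        return os.pwrite(dst_fd, data, offset)

if __name__ == "__main__":
    import tempfile
    sfd, spath = tempfile.mkstemp()
    dfd, dpath = tempfile.mkstemp()
    try:
        os.pwrite(sfd, b"hello", 0)
        copied = copy_chunk(sfd, dfd, 0, 5)
        print("copied", copied, "bytes:", os.pread(dfd, 5, 0))
    finally:
        for fd, p in ((sfd, spath), (dfd, dpath)):
            os.close(fd)
            os.unlink(p)
```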

