
Re: [Qemu-devel] [PATCH 2/2] qemu-img: Add dd seek= option


From: Eric Blake
Subject: Re: [Qemu-devel] [PATCH 2/2] qemu-img: Add dd seek= option
Date: Wed, 15 Aug 2018 21:57:38 -0500
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Thunderbird/52.9.1

On 08/15/2018 09:49 PM, Max Reitz wrote:

>>> In my opinion, we do not want feature parity with dd.  What we do want
>>> is feature parity with convert.

>> Well, convert is lacking a way to specify a subset of one file to move
>> to a (possibly different) subset of the other.  I'm fine if we want to
>> enhance convert to do the things that right now require a dd-alike
>> interface (namely, limiting the copying to less than the full file, and
>> choosing the offset at which to start [before this patch] or write to
>> [with this patch]).

> Yes, I would want that.

>> If convert were more powerful, I'd be fine dropping 'qemu-img dd' after
>> a proper deprecation period.

> Technically it has those features already, with the raw block driver's
> offset and size parameters.

Perhaps so, but it will be an interesting exercise in rewriting the
shorthand nbd://host:port/export into the proper longhand driver syntax.
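For the archives, a sketch of what that longhand might look like (the
host, port, export name, and the 1M/64M window below are placeholders,
and I haven't tested this exact command line):

```shell
# Shorthand: convert a whole NBD export
qemu-img convert nbd://host:10809/export out.img

# Longhand sketch: same export, but restricted to a 64M window starting
# at offset 1M, by stacking the raw driver's offset/size on top of the
# nbd driver via --image-opts
qemu-img convert --image-opts \
    driver=raw,offset=1M,size=64M,\
file.driver=nbd,file.server.type=inet,file.server.host=host,\
file.server.port=10809,file.export=export \
    out.img
```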



>> Because of performance: qemu-nbd + Linux nbd device + real dd is one
>> more layer of data copying (each write() from dd goes to the kernel,
>> then is sent to qemu-nbd in userspace as a socket message before being
>> sent back to the kernel to actually write() to the final destination)
>> compared to just doing it all in one process (write() lands in the
>> final destination with no further user-space bouncing).  And because
>> the additional steps to set it up are awkward (see my other email
>> where I rant about losing the better part of today to realizing that
>> 'dd ...; qemu-nbd -d /dev/nbd1' loses data if you omit conv=fdatasync).

> I can see the sync problems, but is the performance really that much
> worse?

When you don't have sparse file support, reading or writing large blocks
of zeroes really is worse over /dev/nbd* than over a server/client pair
that know how to do it efficiently.  But for non-sparse data, I don't
know if a benchmark would be able to consistently note a difference
(might be a fun benchmark for someone to try, but not high on my current
to-do list).
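The sparse-zeroes point can be illustrated in plain userspace, with
ordinary files standing in for block devices (this is only an analogy
for the allocation behavior, not qemu code): writing every zero byte
through write() allocates real blocks, while a hole left by truncate()
allocates none, which is roughly the difference between pushing zeroes
through /dev/nbd* and letting a sparse-aware client/server pair punch
them efficiently.

```python
import os
import tempfile

SIZE = 4 * 1024 * 1024  # 4 MiB of zeroes

# Naive path: every zero byte travels through write(),
# so the filesystem allocates real data blocks.
with tempfile.NamedTemporaryFile(delete=False) as f:
    naive = f.name
    f.write(b"\0" * SIZE)

# Sparse-aware path: truncate() leaves a hole of the same
# apparent size, with (essentially) no blocks allocated.
with tempfile.NamedTemporaryFile(delete=False) as f:
    sparse = f.name
    f.truncate(SIZE)

naive_st = os.stat(naive)
sparse_st = os.stat(sparse)
naive_blocks = naive_st.st_blocks    # allocated, in 512-byte units
sparse_blocks = sparse_st.st_blocks

print(f"apparent size: naive={naive_st.st_size} sparse={sparse_st.st_size}")
print(f"allocated:     naive={naive_blocks * 512} sparse={sparse_blocks * 512}")

os.unlink(naive)
os.unlink(sparse)
```

Both files report the same apparent size, but the allocated-block counts
diverge sharply on any filesystem with sparse-file support.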

--
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org


