
Re: [Help-tar] Extraction performance problem


From: Jakob Bohm
Subject: Re: [Help-tar] Extraction performance problem
Date: Fri, 06 Feb 2015 11:44:03 +0100
User-agent: Mozilla/5.0 (Windows NT 6.3; WOW64; rv:31.0) Gecko/20100101 Thunderbird/31.4.0

On 05/02/2015 21:23, Paul Eggert wrote:
> On 02/05/2015 10:57 AM, Jakob Bohm wrote:
>>> The default is 20 (i.e., 20 x 512 = 10 KiB).
>> Which happens not to be a multiple of 4Kio.
>
> True.  The 10 KiB value has been the default since the 1970s, though; it
> dates back to when computers often had only 32 KiB RAM.  Changing it might
> break things, and we shouldn't change it without good reason.  A
> sufficient performance improvement for typical uses would be a good
> enough reason, but we'd need to see the numbers.


I said this for the benefit of the OP.  On many systems, he might
get a speedup by specifying e.g. -b32 (16Kio).
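
For clarity, the invocation I have in mind is simply something like
  tar -x -b 32 -f archive.tar
(the archive name is made up, and I have not benchmarked any
particular value, so treat 32 as a starting point only).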

P.S. Last I checked the tar manual, it was unclear whether the blocking
factor affects the contents of the archive (e.g. the amount of
padding between files) or just the I/O blocking.



>>>> has anyone tried to make a multi-threaded version?
>>>
>>> Not as far as I know.  It's not clear that going multithreaded would
>>> be worth the hassle.
>>
>> I would agree, but given the typical behavior of correctly
>> implemented file system flush logic, it might pay to somehow
>> overlap the closing of extracted regular files with the
>> extraction of subsequent files (because close(fd) must imply
>> fdflush(fd), which must wait for disk I/O).
>
> POSIX doesn't require that 'close' must flush buffers to disk, and
> 'close' typically does not do that.  If you're on a system where 'close'
> flushes buffers, perhaps you can speed things up by configuring the
> system so that it doesn't flush buffers.

It is not so much a direct requirement as an indirect one.

Specifically, for local filesystems, there is no other way in
which a user mode application (such as a DB engine) can control
when its data has reached stable storage after closing the file
handle.  Some kernels may play it fast and loose and hope not
to get a power failure or kernel panic within X seconds after
returning from close().
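
To make that concrete: about the only explicit, portable control an
application has is to force the data out while it still holds the
descriptor, roughly as in the untested sketch below (the function
name and path are invented; short writes and most error handling are
ignored).  An application that skips the fsync() is, in effect,
relying on whatever close() and the kernel do afterwards.

  #include <fcntl.h>
  #include <unistd.h>

  /* Force the data to stable storage while we still hold the
     descriptor.  After close() returns, the application has no
     handle left to fsync(). */
  static int write_durable(const char *path, const void *buf, size_t len)
  {
      int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
      if (fd < 0)
          return -1;
      if (write(fd, buf, len) != (ssize_t) len    /* short writes ignored */
          || fsync(fd) != 0) {                    /* wait for the disk, not the cache */
          close(fd);
          return -1;
      }
      return close(fd);    /* only now is it safe to forget about the file */
  }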

For networked file systems, it is at least necessary for the
file data to have reached the remote server, in case the calling
app follows the close() with a network packet to a related app
which opens the file on another machine.
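
To illustrate the kind of overlap I mean (purely an untested sketch,
not anything that exists in GNU tar; the queue size and all names are
invented), the extraction code could hand each finished descriptor to
a helper thread and immediately go on to the next archive member:

  #include <pthread.h>
  #include <stddef.h>
  #include <unistd.h>

  /* One extracting thread produces finished descriptors, one helper
     thread consumes them, so the wait inside fsync()/close() overlaps
     with extraction of the next file.  Single producer and single
     consumer, so one mutex and one condition variable are enough. */

  #define CLOSE_QUEUE_LEN 64

  static int close_queue[CLOSE_QUEUE_LEN];
  static size_t q_head, q_tail;            /* q_head == q_tail: empty */
  static int extraction_done;
  static pthread_mutex_t q_lock = PTHREAD_MUTEX_INITIALIZER;
  static pthread_cond_t q_cond = PTHREAD_COND_INITIALIZER;

  /* The extractor calls this instead of close(fd) for each regular file. */
  void deferred_close(int fd)
  {
      pthread_mutex_lock(&q_lock);
      while ((q_tail + 1) % CLOSE_QUEUE_LEN == q_head)   /* queue full */
          pthread_cond_wait(&q_cond, &q_lock);
      close_queue[q_tail] = fd;
      q_tail = (q_tail + 1) % CLOSE_QUEUE_LEN;
      pthread_cond_signal(&q_cond);
      pthread_mutex_unlock(&q_lock);
  }

  /* The extractor calls this once, after the last archive member. */
  void deferred_close_finish(void)
  {
      pthread_mutex_lock(&q_lock);
      extraction_done = 1;
      pthread_cond_signal(&q_cond);
      pthread_mutex_unlock(&q_lock);
  }

  /* Helper thread: drains the queue in the background. */
  void *closer_thread(void *arg)
  {
      (void) arg;
      for (;;) {
          int fd;
          pthread_mutex_lock(&q_lock);
          while (q_head == q_tail && !extraction_done)
              pthread_cond_wait(&q_cond, &q_lock);
          if (q_head == q_tail) {           /* empty and extraction finished */
              pthread_mutex_unlock(&q_lock);
              return NULL;
          }
          fd = close_queue[q_head];
          q_head = (q_head + 1) % CLOSE_QUEUE_LEN;
          pthread_cond_signal(&q_cond);     /* wake the extractor if the queue was full */
          pthread_mutex_unlock(&q_lock);

          fsync(fd);    /* where any flush (or on-access scan) waits */
          close(fd);
      }
  }

Whatever time close() (plus any flush or on-access scan hooked into
it) takes is then absorbed by the helper while the main loop keeps
reading the archive; with a single producer and a single consumer the
locking stays trivial.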

> It sounds like your virus scanner is slowing you down, so I'd look
> into how it's configured.

It is not one virus scanner in particular, but the observed common
behavior of this class of software from many vendors.

Scanning of newly written files cannot begin until the writing
is complete (as signalled by a close()), and needs to complete
before the calling app can trigger an activity which may
(indirectly) execute exploit code hidden in the new file,
rebooting being the most extreme such case.  Similarly, scanning
of files whose contents are not securely known to be unchanged is
needed whenever files are accessed by open() or exec(), each time
before the file contents can be accessed.

I don't recall whether any variant of clamav has such on-access
scanning; otherwise this needs to be tested with non-free
software.


> Some of the optimizations you mention look like they may be worth doing,
> though we'd need to see benchmarks.

Indeed, this is why I mentioned some easy-to-reproduce test
scenarios for each one.  Specifically, the two obvious tests
would be a Linux kernel or X11 source tarball (many different
small files), and a sparse virtual machine image much larger
than memory (e.g. a half-full 256GB disk image of a virtual
machine with a complete GNU system installed and any unused
clusters made sparse with tools such as zerofree(8) and
puncture(1)).

Run the tests on "slow" rotational disks with and without I/O
scanning tools installed.  Keep the tarball itself somewhere fast,
e.g. writing it to /dev/null or reading it from an SSD.

Realistically, modern tape drives easily do 40 to 160Mio/s; my
most recent daily backup averaged 38Mio/s over 5+ hours and
frequently stopped the tape to wait for tar to catch up.
Thus 38Mio/s was really the speed at which ssh+tar+ext4+disk
could supply data.  This number is typical of the setup.


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


