
From: Benjamin Woodruff
Subject: O(n^2) time complexity bug in Tar with incremental or `--delay-directory-restore` extractions
Date: Thu, 27 Jul 2023 11:11:22 -0700

I've attached a proposed patch to this email for the bug described here.

# Backstory

We use Tar at Meta to perform incremental nightly backups of development
servers. In the process of migrating our development environments from CentOS 8
(Tar 1.30) to CentOS 9 (Tar 1.34), a small set of our developers reported that
restoring from a backup now took multiple days. We expect restoring a
development environment from backup to take no more than a few hours, and only a
few minutes in most cases. (Credit to our Developer Support Specialist, Akeem.)

Upon investigation, my coworker Rob Schmidt discovered that the developers
reporting these problems had backups that contained source code repositories
with lots of small files and directories inside. When running Tar with `strace`,
we noticed that there were unexpectedly large gaps in time between each `mkdir`
syscall. We generated a flamegraph of Tar's function calls, and discovered a
disproportionate amount of time was being spent in `delay_set_stat`.

# The Bug

Looking at the code with this knowledge in hand, the cause quickly became
obvious: `delay_set_stat` contains an O(n) scan of the underlying linked list to
detect and avoid inserting duplicate entries. This code was added in a change
that was included in the Tar 1.33 release.

Normally, this O(n) scan isn't a problem, because the `delayed_set_stat` data
structure isn't usually very long. This is because when tar finishes extracting
a directory, it will call `apply_nonancestor_delayed_set_stat`, which will
remove the elements from that linked list.

However, because we use an incremental backup, Tar predicts we'll have an
"unusual" member order, so it sets a global `delay_directory_restore_option`
flag. This flag means that Tar no longer knows when we're actually done
processing a directory, and so it only calls
`apply_nonancestor_delayed_set_stat` during `extract_finish`.

This means that `delayed_set_stat` will eventually contain an entry for every
directory in the archive. The total extraction time becomes O(n^2), where `n`
is the number of directories.

This is similar to a previously fixed bug in delayed link creation.

# Test Case

The patch includes an automated test case, but the issue can also be reproduced
manually.

Here's a one-liner for creating a large directory tree with 90300 directories
(n=300 can be adjusted):

cd /tmp && mkdir z && cd z && perl -le '$n=300; for $i (1..$n) { for $j (1..$n) { print "$i/$j" } }' | xargs mkdir -p

(credit to Jim Meyering for the one-liner code golf)

Tar can then be used to create and then extract this directory:

cd /tmp && tar cf z.tar z
mkdir 1 && cd 1 && time /bin/tar xf ../z.tar --delay-directory-restore

Without my patch, this takes about 42 seconds, most of which is spent in
`delay_set_stat`:

/bin/tar xf ../z.tar --delay-directory-restore  41.73s user 0.63s system 99% cpu 42.594 total

With my patch, this takes about 0.3 seconds, most of which is system time spent
performing the actual filesystem operations:

~/tar/src/tar xf ../z.tar --delay-directory-restore  0.09s user 0.22s system 99% cpu 0.314 total

# The Patch

My patch mitigates this issue by introducing a hash table of the entries in
`delayed_set_stat_head`. Upon insertion, this table can be checked for
duplicates, instead of using a linear scan.

The linked list is kept because insertion order is significant and must still be
preserved.

# Remaining Linear Scans

This only fixes the common case of an O(n) scan during insertion. There are
other places that perform linear scans of the linked list that are not fixed by
this patch. I do not have plans to fix these, in part because some of the fixes
patch. I do not have plans to fix these, in part because some of the fixes would
be more complicated, but also because they seem harder to trigger, or because I
don't believe they're problematic.

I do not have test-cases for any of these situations.

### `remove_delayed_set_stat`

This is called by `remove_any_file`, which is called in some overwrite modes.
This could be fixed by making the `delayed_set_stat` data structure a doubly
linked list, allowing an O(1) removal following an O(1) lookup in the hash
table.

### `find_direct_ancestor`

This logic is used when handling symlinks. This could cause similar issues to
what we've observed for directories, but for extremely symlink-heavy archives.

This function performs a prefix match between the current `file_name` and
previously inserted directories. This could be rewritten to split the current
`file_name` by slash characters, and then to perform lookups in the hash table
for each parent directory.

### `repair_delayed_set_stat`

This function appears to be used for edge cases where paths with `..` or
symlinks form a cyclic directory reference.

This traverses starting with the most-recently-inserted entry, and returns when
it finds a match. This scan is probably not problematic because we expect the
match to be near the beginning of the insertion list.

### `apply_nonancestor_delayed_set_stat`

This function is not problematic because it removes entries during traversal.

When it is called frequently (when `--delay-directory-restore` is not used), the
linked list will be kept small.

When `--delay-directory-restore` is used, it is only called twice in
`extract_finish`, so the overall runtime is O(n), not O(n^2).

Attachment: o-of-n2-fix.patch
Description: Text document
