From: Jim Meyering
Subject: Re: [PATCH] rm, du, chmod, chown, chgrp: use much less memory for large directories
Date: Tue, 23 Aug 2011 11:49:11 +0200

Voelker, Bernhard wrote:

> Jim Meyering wrote:
> +++ b/tests/rm/4-million-entry-dir
> ...
> +# Put 4M files in a directory.
> +mkdir d && cd d || framework_failure_
> +seq 4000000|xargs touch || framework_failure_
> +
> +cd ..
> +
> +# Restricted to 50MB, rm from coreutils-8.12 would fail with a
> +# diagnostic like "rm: fts_read failed: Cannot allocate memory".
> +ulimit -v 50000
> +rm -rf d || fail=1
> +
> +Exit $fail
> wouldn't this leave behind lots of used inodes in case of a failure?

No, at least I hope not.
The test is run via a framework (under tests/) that creates a temporary
directory in which those commands are run, and that framework also
arranges to remove the temporary directory upon exit, interrupt, etc.
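The cleanup mechanism the framework relies on can be sketched as follows.
This is only an illustration of the idea; the real implementation lives in
coreutils' test framework, and the names here (t_, cleanup_) are
illustrative:

```shell
# Sketch: create a scratch directory and guarantee its removal
# on normal exit and on common signals (the coreutils framework
# does this more carefully; this shows only the core idea).
t_=$(mktemp -d) || exit 99
cleanup_() { cd / && rm -rf "$t_"; }
trap cleanup_ EXIT
trap 'cleanup_; exit 130' INT TERM
cd "$t_" || exit 99
# ... test commands run here; $t_ is removed even if they fail.
```

Because the trap fires on EXIT, any inodes created inside the scratch
directory are reclaimed even when rm itself fails mid-test.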

> Additionally, looking at 2 of my (not so big) SLES servers:
> most partitions only have <500000 inodes (/ /opt /usr /tmp /var),
> so maybe it's worth checking beforehand whether the filesystem
> meets the test requirements. What do you think?

If the setup phase fails (the seq...|xargs touch), the test already fails
with a diagnostic.  And considering that the test is marked "very
expensive", and hence not run by default (you have to set
RUN_VERY_EXPENSIVE_TESTS=yes in order to run it), I think we're ok,
since I'd prefer a failure to a skip in that case.

People who take the trouble to run the very expensive tests (I'd be
surprised if there are more than a handful) probably want to know when/if
their test environment is causing test failures.
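For completeness, the precheck Bernhard suggested could look roughly like
this (not part of the patch; the df column parsing is an assumption and
varies between filesystems that do and do not report inode counts):

```shell
# Sketch: skip when the current filesystem lacks enough free inodes
# to create 4M files.  -P requests the portable one-line-per-fs format;
# with -i, the fourth column is typically free inodes on Linux.
free_inodes=$(df -Pi . | awk 'NR==2 {print $4}')
case $free_inodes in
  ''|*[!0-9]*) echo "cannot determine free inodes" >&2; exit 77 ;;
esac
test "$free_inodes" -gt 4000000 ||
  { echo "skipping: too few free inodes ($free_inodes)" >&2; exit 77; }
```

Jim's point stands, though: for an opt-in very-expensive test, failing
loudly is arguably more useful than silently skipping.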
