
From: Ole Tange
Subject: Re: GNU Parallel Bug Reports A suggestion: --shuf and -k
Date: Sat, 1 Jul 2017 05:29:51 +0200

On Fri, Jun 30, 2017 at 11:11 PM, paralleluser
<address@hidden> wrote:

> --shuf does exactly what the man page says it does, but when you combine
> --shuf and -k, -k does nothing: --shuf overrides -k.
> I propose that when combining --shuf and -k, this happens:
>         the jobs are still processed in random order
>         but the output is printed in the original input order

I have had exactly the same idea for the same reason.

But there is a technical reason why this will not work.

GNU Parallel buffers output in $TMPDIR. It does this by creating the
files and then removing them while keeping them open. That means each
job uses 4 file handles for buffering (2 for writing stdout and
stderr, and 2 for reading them back). With -k, GNU Parallel will
therefore at times use more file handles than it would without -k.
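The delete-but-keep-open trick can be demonstrated in a few lines of
shell (a minimal sketch of the general technique, not GNU Parallel's
actual code):

```shell
# Create a buffer file, open it for writing (fd 3) and reading (fd 4),
# then remove its directory entry. The open handles keep the data alive.
tmp=$(mktemp)
exec 3>"$tmp" 4<"$tmp"
rm "$tmp"                      # no longer visible in the filesystem
echo "buffered output" >&3     # writing still works
contents=$(cat <&4)            # reading it back still works
echo "$contents"
```

The kernel only reclaims the space when the last handle is closed,
which is why each buffered job pins its handles until its output has
been printed.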

This is not a problem ordinarily: if there are no file handles left to
start a new job, GNU Parallel simply waits until the oldest job
finishes, so that -k can print its output and free file handles for
new jobs. In other words: it may be slower, but it will not fail.

But with --shuf the file handle usage becomes a potential and
unpredictable problem. Assume you run more jobs in total than you have
file handles for (on normal systems that is > 250 jobs), and assume
the shuffling happens to reverse the order (the worst case). Then GNU
Parallel will start the jobs with the highest sequence numbers, but
will never get to job number 1, and with -k it can therefore never
print anything.
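The deadlock can be modelled with toy numbers (my illustration, using
a pretend limit of 12 handles, i.e. room to buffer 3 jobs at 4 handles
each):

```shell
handles=12          # pretend open-file limit: room to buffer 3 jobs
jobs='5 4 3 2 1'    # --shuf happened to reverse 5 jobs (worst case)
used=0
for j in $jobs; do
  if [ $((used + 4)) -gt "$handles" ]; then
    # Jobs 5, 4 and 3 hold all the handles, but -k may only print
    # job 1, which was never started: nothing can ever be freed.
    echo "deadlock: cannot start job $j; job 1 was never buffered"
    break
  fi
  used=$((used + 4))
done
```

With -k off, the buffered jobs would simply be printed and their
handles freed; with -k on, printing is blocked on the one job that
cannot start.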

In principle --shuf -k could be made to work if and only if there are
enough file handles to buffer all jobs in parallel (e.g. fewer than
250 jobs on normal systems). I, however, do not like that, because you
will typically test with fewer than 250 jobs, and the day you have
more than 250 jobs the system will not just be a bit slower - it will
instead fail seemingly at random.
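To see where your own system stands, you can get a back-of-envelope
estimate from `ulimit -n` (the real number is somewhat lower, since
GNU Parallel needs some handles for itself - which is roughly where
the ~250 figure above comes from):

```shell
limit=$(ulimit -n)    # per-process open-file limit (shell builtin)
echo "open-file limit: $limit"
echo "rough max jobs buffered at once with -k: $((limit / 4))"
```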

Finally, -k _does_ actually do something with --shuf - you just do not
see it very clearly, and it is hardly useful for anything. Internally
--shuf shuffles the sequence numbers, and GNU Parallel then runs the
jobs in that shuffled sequence. Compare the sequence column in:

  seq 30 | parallel -j100 --jl -  --shuf sleep '0.$RANDOM;true'
  seq 30 | parallel -j100 --jl - -k --shuf sleep '0.$RANDOM;true'

