
Re: GNU Parallel Bug Reports Randomising Job Distribution


From: Ole Tange
Subject: Re: GNU Parallel Bug Reports Randomising Job Distribution
Date: Thu, 22 Mar 2012 13:46:02 +0100

On Wed, Mar 21, 2012 at 12:58 PM, Alastair Andrew <address@hidden> wrote:

> Is there any way to tell GNU parallel to randomly assign the jobs to machines 
> rather than just chunking them in sequence? [...]

There is no random assignment.

>  If 4 of the biggest jobs get placed on the same machine they starve each 
> other of resources and completely kill that machine.

I assume you do not know in advance which jobs are going to be big.
Also, you may want to take a look at niceload's --io and --start-mem
options (see man niceload).
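
For illustration, a hedged example of wrapping one of your jobs with
niceload (the threshold values here are made up; check man niceload for
the exact units):

# Wait until 1 GB of memory is free before starting, and suspend
# the job while disk io is above the limit (example values only):
niceload --start-mem 1G --io 8 solve big_1.prob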

> This is how I'm using parallel:
>
> parallel -S .. --nice 19 --halt-on-error 0 -j+0 --noswap "solve {1}_{2}.prob" 
> ::: small medium big ::: {1..20}

With the current code you could do:

parallel echo solve {1}_{2}.prob ::: small medium big ::: {1..20} |
shuf | parallel -S .. --nice 19 --halt-on-error 0
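
Here the first parallel only generates the command lines, shuf puts
them in random order, and the second parallel then distributes them to
the remote servers.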

> I've set the processes to be nice'd as much as possible

On the email list we have discussed improvements to --load. I am
currently considering '--load auto', which would do the following:

ncpu = number of cores
nrunning = number of processes in R state according to `ps -A -o s`

if nrunning == ncpu:
    do not spawn more processes (all the CPU power is being used,
    by parallel or others)
elif any children are disk i/o starved:
    do not spawn more processes (disk i/o for this dir is probably
    all used)
else:
    increase the number of processes to run
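
As a rough shell sketch of those measurements (not what GNU Parallel
will actually run, just the idea; the real check would only look at
parallel's own children, not every process on the system):

ncpu=$(parallel --number-of-cores)     # or: nproc
nrunning=$(ps -A -o s | grep -c '^R')  # processes in R (running) state
nstarved=$(ps -A -o s | grep -c '^D')  # processes in D (disk wait) state

if [ "$nrunning" -ge "$ncpu" ] || [ "$nstarved" -gt 0 ]; then
    echo "saturated: do not spawn more jobs"
else
    echo "spawn another job"
fi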


/Ole


