
Memory limits per job


From: Joe Sapp
Subject: Memory limits per job
Date: Wed, 25 Oct 2017 07:46:08 -0400

I have a situation where I know the upper limit of memory usage for
each job I'm running, but each job takes a while to allocate it all (I
could use --delay, but the delay amount would be long and
unpredictable).  I would like to run jobs on multiple machines, each
with a different amount of RAM.  I am currently determining the total
number of jobs that can run on each machine and putting that in an ssh
login file, but is there a more automatic way of doing this?  Combined
with --memfree this should be good enough.
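For context, here is a sketch of what I'm doing by hand today: divide
each host's RAM by the per-job bound and write the result as the job
slot count in an sshloginfile (the "N/host" syntax parallel --slf
reads).  The host names and the 2 GB per-job figure are made up for
illustration:

```shell
#!/bin/sh
# Sketch: compute job slots per host from its RAM and a known per-job
# upper bound, then write one "slots/host" line per machine for --slf.
# JOB_MEM_MB and the host names below are illustrative assumptions.
JOB_MEM_MB=2048

slots_for() {
  # $1 = total RAM of the host in MB; integer division gives the
  # number of jobs that fit under the per-job upper bound.
  echo $(( $1 / JOB_MEM_MB ))
}

# Two hypothetical hosts with 16 GB and 64 GB of RAM:
printf '%s/%s\n' "$(slots_for 16384)" server-small.example.com \
                 "$(slots_for 65536)" server-big.example.com > my.slf
# my.slf now holds lines like "8/server-small.example.com", which
# parallel --slf my.slf reads as "run at most 8 jobs on that host".
```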

One feature that may be useful for this case is an option that
declares an amount of memory per job.  If given, parallel would only
start another job if the amount of free memory is greater than this
value multiplied by the number of running jobs plus one.  Parallel
could still operate under the same rules it uses for --memfree now,
though maybe the rule would have to be adjusted to "number of jobs +
2" to avoid killing the youngest process.  What do you think?

-- 
Joe


