Re: Parallelism a la make -j <n> / GNU parallel

From: Elliott Forney
Subject: Re: Parallelism a la make -j <n> / GNU parallel
Date: Thu, 3 May 2012 13:21:24 -0600

Here is a construct that I use sometimes... although you might wind up
waiting for the slowest job in each iteration of the loop:


for iter in $(seq 1 $maxiter); do
  startjob $iter &

  if (( (iter % ncore) == 0 )); then
    wait
  fi
done
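
If you want to avoid stalling on the slowest job in each batch, a variant using `wait -n` (bash 4.3 or later) refills the pool as soon as any single job exits. This is a sketch, not a drop-in: `startjob` here is a hypothetical placeholder that just touches a marker file so the run can be counted.

```shell
#!/usr/bin/env bash
# Keep at most $ncore jobs in flight; refill as soon as any one exits.
# Requires bash >= 4.3 for 'wait -n'.  'startjob' is a hypothetical
# stand-in for the real per-iteration command.
ncore=4
maxiter=20
tmpdir=$(mktemp -d)

startjob() {                     # placeholder: touch a marker when done
  sleep 0.05
  : > "$tmpdir/job.$1"
}

running=0
for iter in $(seq 1 "$maxiter"); do
  startjob "$iter" &
  running=$(( running + 1 ))
  if (( running >= ncore )); then
    wait -n                      # block until any single background job exits
    running=$(( running - 1 ))
  fi
done
wait                             # drain whatever is still running

completed=$(ls "$tmpdir" | wc -l)
echo "completed $completed jobs"
```

The `running` counter may briefly overestimate the number of live jobs (several can exit between `wait -n` calls), but it never underestimates, so the cap of `ncore` concurrent jobs still holds.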

On Thu, May 3, 2012 at 12:49 PM, Colin McEwan <address@hidden> wrote:
> Hi there,
> I don't know if this is anything that has ever been discussed or
> considered, but would be interested in any thoughts.
> I frequently find myself these days writing shell scripts, to run on
> multi-core machines, which could easily exploit lots of parallelism (e.g. a
> batch of a hundred independent simulations).
> The basic parallelism construct of '&' for async execution is highly
> expressive, but it's not useful for this sort of use-case: starting up 100
> jobs at once will leave them competing, and lead to excessive context
> switching and paging.
> So for practical purposes, I find myself reaching for 'make -j<n>' or GNU
> parallel, both of which destroy the expressiveness of the shell script as I
> have to redirect commands and parameters to Makefiles or stdout, and
> wrestle with appropriate levels of quoting.
> What I would really *like* would be an extension to the shell which
> implements the same sort of parallelism-limiting / 'process pooling' found
> in make or 'parallel', via an operator in the shell language similar to '&',
> with the semantics of *possibly* continuing asynchronously (like '&') if
> system resources allow, or else waiting for the process to complete (like ';').
> Any thoughts, anyone?
> Thanks!
> --
> C.
> https://plus.google.com/109211294311109803299
> https://www.facebook.com/mcewanca
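
For reference, the pooling behaviour described above can be approximated today with `xargs -P`, at the cost of funnelling arguments through stdin rather than using a shell operator. In this sketch, `simulate` is a hypothetical stand-in for one of the independent simulations, exported so the `bash -c` children spawned by xargs can call it:

```shell
#!/usr/bin/env bash
# Run at most 4 simulations at once; xargs refills the pool as jobs
# finish.  'simulate' is a hypothetical placeholder task, exported so
# the bash -c children spawned by xargs can see it.
simulate() { sleep 0.05; echo "sim $1"; }
export -f simulate

results=$(seq 1 10 | xargs -P 4 -I{} bash -c 'simulate "$1"' _ {})
completed=$(printf '%s\n' "$results" | grep -c '^sim ')
echo "ran $completed simulations"
```

The quoting wrestling Colin mentions shows up in the `bash -c 'simulate "$1"' _ {}` idiom: each argument has to be passed positionally to a child shell rather than appearing inline as it would with a native '&'-like operator.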
