
Re: Slow start to cope with load


From: Ole Tange
Subject: Re: Slow start to cope with load
Date: Mon, 19 Mar 2012 18:30:04 +0100

On Mon, Mar 19, 2012 at 12:27 PM, Matt Oates (Home) <mattoates@gmail.com> wrote:
> On 19 March 2012 10:25, Ole Tange <tange@gnu.org> wrote:
>> On Mon, Mar 19, 2012 at 10:20 AM, Matt Oates (Home) <mattoates@gmail.com> 
>> wrote:

>>> Am I wrong in thinking you can just do -j 100% so that you never
>>> spawn more than maxload processes, assuming each process puts a load
>>> of 1.0 on a single core? Can you not use -j 100% in conjunction with
>>> --load to prevent the overload on startup?
>>
>> For CPU hungry programs like 'burnP6' that would be true. But if the
>> program only uses 10% CPU (because it is waiting for network or disk
>> I/O), then we should be able to spawn more - preferably automatically
>> figuring out the "right" amount.
>
> If it is low because of blocking, spawning more jobs isn't going to
> help the wait on I/O.

If the I/O you are waiting for is a reply from a server (which could be
delayed by latency), then it often makes sense to spawn more than one
job per CPU.
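
A back-of-the-envelope sketch of that point, using the 10% figure from
above (and idealising each job as having a constant CPU share, which
real jobs rarely do):

```python
# How many mostly-idle jobs can one core absorb?  Assumes each job's
# CPU share is constant, which is an idealisation.
def jobs_per_core(cpu_fraction_per_job):
    """Jobs that together roughly saturate one core."""
    return round(1 / cpu_fraction_per_job)

print(jobs_per_core(0.10))  # a job using 10% CPU -> about 10 jobs per core
print(jobs_per_core(1.0))   # a CPU-bound job     -> 1 job per core
```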

>>> Perhaps a flag like --is-threaded=4  or something to indicate the
>>> likely load per job?
>>
>> I am not too happy about that. I would much prefer some automated way
>> of doing-the-right-thing.
>
> If I'm already setting this manually, though, why do the right thing
> automatically when I know what the right thing to do is? I agree that
> having parallel throttle automatically as normal is best. But it would
> be nice to be able to state explicitly what you know if you are
> already specifying it in the job.

The --is-threaded flag will only make sense for CPU-limited jobs.

So explain in which situations these would not be equivalent:

    -j 100% --is-threaded=4
    -j 25%
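
For a concrete case, here is the arithmetic on a hypothetical 8-core
machine. Note that --is-threaded is only a proposed flag, and I am
assuming it would divide the job-slot count by the thread count:

```python
cores = 8  # hypothetical machine

# -j 100% --is-threaded=4: one slot per core, divided by 4 threads per job
slots_a = (cores * 100 // 100) // 4

# -j 25%: a quarter of the cores
slots_b = cores * 25 // 100

print(slots_a, slots_b)  # 2 2 -- identical on this machine
```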

>>> You are starting to get into the realm of needing to understand
>>> scheduling per host... Load might be reported for something with a
> different nice value than the job you want to submit. So there might
> be 100% load from something with <0 nice while you want to put
> something in at +19. In
>>> your equation above I would just add in something looking at the
>>> difference between parallel's jobs that are running and those that are
>>> ready/waiting. If all our jobs are running even under high load who
>>> cares, we have priority here so keep up with the max load. If half of
>>> our jobs are waiting then we might as well reduce spawning by half.
>>
>> I did not understand this part.
>
> Two points:
> 1.) You can have high load that is all very low priority. In this case
> we want a high-priority job to ignore the load, because it can replace
> it completely. For example, updatedb usually runs at a high nice value
> (low priority); when we come along with our job it doesn't matter if
> there is high load, since we will knock updatedb off of the scheduling
> queue.
> 2.) You can take into account priority by just including what
> percentage of our jobs are in the "running" process state rather than
> "ready" or "waiting" state. So if there is high load and we put in 100
> processes and all of them are running, it's fine... if only 1 is
> running and the rest are just waiting then we should alter
> appropriately to that ratio until you find a natural size on the host
> machine.
>
> Hope that's a bit clearer? It just means adjusting your equation to
> something like:
>
> number_of_concurrent_jobs = max_load - current_load
>     + (number_of_concurrent_jobs - number_of_concurrent_jobs_in_wait_state / 2)
>
> That way you quickly converge on the number of processes that can run.
> I'd ignore those that are blocked on I/O and just subtract the ones
> that are literally waiting on CPU.
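
Transcribed literally into code (the names are taken from the mail; note
that the mail's parenthesization puts the /2 on the wait count alone,
which may or may not be what was intended):

```python
def next_job_count(max_load, current_load,
                   number_of_concurrent_jobs,
                   number_of_concurrent_jobs_in_wait_state):
    # Literal transcription of the proposed update rule, including its
    # parenthesization; whether the /2 should instead cover the whole
    # difference is ambiguous in the mail.
    return (max_load - current_load
            + (number_of_concurrent_jobs
               - number_of_concurrent_jobs_in_wait_state / 2))

# E.g. max load 8, current load 6, 4 concurrent jobs of which 2 wait:
print(next_job_count(8, 6, 4, 2))  # 8 - 6 + (4 - 1) = 5.0
```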

If I understand you correctly you basically want to ignore the load
average as reported by the server, but instead compute your own, where
you ignore the jobs that are nicer than you are.

If that is what you mean I see the following problems:

* It is hard to explain what is going on (thus not adhering to the
Principle of Least Astonishment).
* How do you determine what processes will be knocked off the scheduling queue?
* How do you tell whether the job you are running is limited by disk
I/O or by CPU?
* How do you tell if a running process is a (detached) (grand*)child of
a process started by GNU Parallel, and that the parent is just waiting
for the child to complete?
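
To make the complexity concrete, a minimal sketch of the nice-aware
load computation under discussion. The process list is hard-coded
sample data standing in for something like `ps -eo stat,ni`, and the
`D` (uninterruptible I/O wait) state is exactly the kind of judgment
call the questions above raise:

```python
MY_NICE = 0  # the nice value our jobs would run at

# (state, nice) pairs, as `ps -eo stat,ni` might report them
procs = [
    ("R", 0),    # runnable at our priority  -> competes with us
    ("R", 19),   # runnable but much nicer   -> we would preempt it
    ("D", 0),    # blocked on disk I/O       -> counts in loadavg, no CPU
    ("S", -5),   # sleeping, higher priority -> currently idle
]

# Count only runnable processes that are not nicer than we are
effective_load = sum(1 for state, nice in procs
                     if state == "R" and nice <= MY_NICE)
print(effective_load)  # 1
```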

It seems like an awful lot of complexity, but I might be wrong.


/Ole


