bug-bash

From: Bruce Dawson
Subject: RE: Bug/limitation in 'time' (kernel setings?)...
Date: Wed, 20 Mar 2013 23:34:58 -0700

Hey, fascinating stuff here. I'm learning a lot of useful new tools and
techniques. I like your perl scripts.

I tend to avoid perl because I don't know it well and I'm not a fan.
Personal preference. Good point about using my 'nop' program. It might
simplify testing slightly, but shouldn't change the results.

> 2) Better, why not use the simple 'fork'/'exit' which doesn't call a
> separate program;

Remember that this investigation started with a real problem, and that real
problem was associated with using $(expr) for counting. fork/exit is a
totally different scenario.
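For context, the pattern that kicked this off looks something like the following (a minimal sketch; the loop bound is arbitrary). Counting with $(expr ...) forks a subshell and execs a separate program on every iteration, while bash builtin arithmetic does not:

```shell
#!/bin/bash
# Counting with $(expr ...): every iteration forks a subshell and execs
# /usr/bin/expr, so the loop's cost is dominated by process creation.
i=0
while [ "$i" -lt 100 ]; do
  i=$(expr "$i" + 1)
done
echo "expr loop: i=$i"

# The same count with bash builtin arithmetic forks nothing.
j=0
while [ "$j" -lt 100 ]; do
  j=$((j + 1))
done
echo "builtin loop: j=$j"
```

Timing the two loops side by side makes the per-iteration fork/exec cost obvious, which is exactly the scenario fork/exit does not reproduce.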

When I run your first perl program -- I call it serialperl.pl because it is
a serial process where, roughly speaking, only one process is executing at a
time -- it can do about 5,000 process creates per second. 'time' shows 21%
CPU usage on my four-core eight-thread system. However, "iostat -c 1" shows
about 13.6% CPU utilization, which corresponds to a bit more than one core.
So, as I have always claimed, 'time' is underestimating the CPU time
required to execute my task. The sampling data I provided (separate e-mail)
shows that this is mostly from page faults, which are associated with
process startup.

Your second perl program -- I call it parallelperl.pl because it runs a lot
of processes in parallel -- is trickier to measure. It runs really fast, but
if I pass it a number big enough to give a usefully long test, it kills my
box. Apparently having 25,000 processes running in parallel is too much for
Ubuntu to handle, even with 32 GB of RAM. Pity.

However, I did get enough data to confirm my theory about why it runs so
much faster than serialperl.pl. It's not because scheduler overhead is lower
(in fact, having 20,000 processes running in parallel is a huge burden on
the scheduler). It's because it is running in parallel. When I run
"parallelperl.pl 20000" I see CPU usage (measured by "iostat -c 1" running
in another window) spike to ~80%. It would probably go higher if I could get
a long enough running test without DoSing my machine.

So, the reason parallelperl.pl runs faster is because it is running in
parallel.

> Note that in executing 'no program' (just doing a fork), but with the task
> switching (scheduler activity), we get, at best ~ 35% cpu usage:

To be clear, we get 35% CPU usage reported by 'time'. That is entirely
different from actual total CPU usage. I think we agree about that; we're
just debating where that extra time is going. Clarity is critical in these
tricky discussions. In fact, I think it is crucial to report both what
'time' says the process and its children are doing, and what "iostat -c 1"
says the overall system is doing during the run. Otherwise we are trying to
infer complex behavior with one eye closed. The 'time' results are only of
interest if we compare them to "iostat -c 1" or some similar system-wide
measurement.
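One concrete way to capture the first half of that comparison: bash's 'time' keyword reports the %CPU figure directly if you set the standard TIMEFORMAT variable, while "iostat -c 1" runs in a second terminal. A minimal sketch (the loop is just a stand-in workload):

```shell
# bash's TIMEFORMAT controls what the 'time' keyword prints: %R wall time,
# %U user CPU, %S system CPU, and %P the CPU percentage being debated here.
TIMEFORMAT='real=%Rs user=%Us sys=%Ss cpu=%P%%'

# A serial process-creation workload, small enough to finish quickly:
time for i in 1 2 3 4 5 6 7 8 9 10; do /bin/true; done

# Meanwhile, in a second terminal, sample whole-system CPU once a second:
#   iostat -c 1
# If iostat's %user + %system exceeds what 'time' attributes to the loop,
# the gap is CPU spent on the workload's behalf but not charged to it.
```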

> The difference between the fork {wait/exit} and the fork(all) wait(all);
> is that the scheduler is not invoked each time through the loop.

I don't think that is a correct analysis, and it is not justified by the
evidence. I think the change in performance is because the forked copies are
able to run in parallel. "iostat -c 1" strongly supports this conclusion.
Even if I'm wrong, you have hardly shown that the difference is the
scheduler. To do that you would need sampling profiler data showing a
majority of samples hitting in the scheduler. I've done that test, and it
shows the majority of samples hitting in page faults, associated with
process startup.
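For anyone wanting to reproduce that kind of sampling evidence, perf (from the linux-tools package) can both count and sample the relevant events. This is a sketch of typical invocations, not the exact commands used for the data above, and it assumes perf is installed:

```shell
# Guarded so the sketch degrades gracefully when perf is absent.
run_perf_demo() {
  if command -v perf >/dev/null 2>&1; then
    # Count the two events at issue in this thread for one trivial exec:
    perf stat -e page-faults,context-switches -- /bin/true
    # For the "where do the samples land" question, record system-wide with
    # call graphs while the workload runs, then inspect the report:
    #   perf record -a -g -- ./serialperl.pl 10000
    #   perf report --sort symbol
  else
    echo "perf not installed; skipping"
  fi
}
run_perf_demo
```

If the scheduler really were the bottleneck, the report would show samples concentrated in scheduler functions rather than in the page-fault paths.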

I've provided significant evidence that process startup is the culprit:
- CPU scheduling graphs (pretty pictures) showing expr consuming the extra
time
- sampling data showing page faults in expr consuming the extra time

I have seen no evidence for the scheduler being the issue. If the scheduler
were the issue, that would suggest a hugely inefficient scheduler, which is
a bold claim to make.

Either way, the consistent and pervasive conclusion is that time's CPU
consumption results are significantly understated when multiple processes
are involved. It would be nice to have that acknowledged in the
documentation. If not, well, that's why I have a blog.

-----Original Message-----
From: Linda Walsh [mailto:address@hidden 
Sent: Wednesday, March 20, 2013 11:38 AM
To: Bruce Dawson
Cc: 'bug-bash'
Subject: Re: Bug/limitation in 'time' (kernel setings?)...

1) Why not use your 'null' program, which, in your blog, you state makes
little difference?
2) Better, why not use the simple 'fork'/'exit', which doesn't call a
separate program? (That's one reason for switching to perl, or better, C.)
Note the perl prog:
#!/usr/bin/perl
die "need count" unless @ARGV;
my $count = $ARGV[0];
while ($count-- > 0) {
  if (my $pid = fork()) { # if $pid is set, then it is the pid of a child,
    waitpid($pid, 0);     # so parent waits for child to finish before continuing
  } else {                # else we are in child (or fork failed);
                          # if fork failed, prog will exit
    exit 0;               # if we are in child, immediately exit (no exec)
    #exec "/bin/true";    # alternate test case to load a simple prog
  }
}

schedtool is not a script -- it is a system tool to control scheduler
operations.

see also 'cpupower' to set/fix frequency and/or monitor idle-info
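For completeness, typical invocations look roughly like this (both tools must be installed, the 'set' operation generally needs root, and /tmp/sp.pl is the serial test script from below):

```shell
cpupower frequency-info                      # show the current governor/limits
sudo cpupower frequency-set -g performance   # pin the frequency for benchmarking
cpupower idle-info                           # inspect idle-state behavior

# schedtool can launch a command pinned to one CPU (here CPU 0, mask 0x1):
schedtool -a 0x1 -e /tmp/sp.pl 10000
```

Pinning the frequency governor before timing runs keeps frequency scaling from skewing the comparison.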

Note that in executing 'no program' (just doing a fork), but with the task
switching (scheduler activity), we get, at best ~ 35% cpu usage:

> time /tmp/sp.pl 10000
7.29sec 0.11usr 2.41sys (34.59% cpu)

Now change the perl program to do all the forks at once, then wait for all
of them at once (so the parent and child won't alternate in the scheduler).

The program for that is:
#!/usr/bin/perl
die "need count" unless @ARGV;
my $count = $ARGV[0];
my $counter = $count;
while ($counter-- > 0) { fork() || exit(0); }
while (waitpid(-1, 0) > 0) { }
print "counter=$counter\n";
----
Note that I print counter at the end to verify that it executed the full
count; if it did, it will be at "-1". Now note the run time and cpu usage:
> time /tmp/sp1.pl 10000
counter=-1
1.50sec 0.04usr 1.43sys (98.03% cpu)

Notice it's about 1/5th the time. I still forked the same number of
processes, and still waited for all of them to exit, but this one was done
without invoking the scheduler each time through the loop.

The difference between the fork {wait/exit} and the fork(all) wait(all); is
that the scheduler is not invoked each time through the loop.

Nearly 75-80% of the time is spent in the scheduler.



