Subject: Re: speed up gnulib-tool a bit
From: Ralf Wildenhues
Date: Mon, 18 Sep 2006 15:46:42 +0200
User-agent: Mutt/1.5.13 (2006-09-01)
Hello Bruno,
* Bruno Haible wrote on Mon, Sep 18, 2006 at 02:50:40PM CEST:
> I'm applying them all, with minor tweaks regarding the whitespace placement
> (break lines before a binary operator, not after it) and double-quotes
> (put double-quotes even where they are not strictly necessary, but would be
> necessary in an argument position).
Sure; thanks. Your include rewriting is much nicer!
> Your files-rewrite.diff actually increases readability, IMO.
Good.  I thought the transitive_closure patch was the most
readability-increasing one (it's the only patch that may give an
algorithmic improvement, in the sense that it may affect asymptotic
complexity for some dependency graphs, rather than just a constant
factor).
> Calling func_add_or_update in a subshell is not good, because func_fatal_error
> does not do the expected thing when executed in a subshell, and also when
> we'll want to create synthetic ChangeLog entries, we need to pass detailed
> information from inside func_add_or_update to its caller.
Hmm, ok.  But traditional shells execute this construct

  while read foo
  do
    $whatever
  done <$file

in a subshell, too, for example Solaris 10 sh.  To be safe, I think you
need to `exec <$file', or, restoring stdin afterwards, something like
this:

  exec 5<&0 <$file
  while read r
  do
    $whatever
  done
  exec <&5 5<&-
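A runnable sketch of that fd save/restore idiom, with a throwaway input
file (the path and variable names here are made up for the demo):

```shell
# Save stdin on fd 5, read the loop's input from a file, then restore.
# Because the loop is NOT run in a subshell, assignments made inside
# it survive after `done'.
demo=/tmp/readloop-demo.$$
printf '%s\n' one two three > "$demo"

count=0
exec 5<&0 <"$demo"      # save stdin on fd 5, take input from the file
while read line
do
  count=`expr $count + 1`
done
exec <&5 5<&-           # restore stdin, close fd 5

echo "$count"           # prints 3: the assignments survived the loop
rm -f "$demo"
```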
> Where can I download the shell script profiler that you used?
Hehe.  Several crude methods, plus some experience with huge shell
scripts: either take a slow terminal emulator and watch `sh -x script'
fly by, or dump its output and pipe it through

  sort | uniq -c | sort -k1n

where both the short tail and any long, similar regions are interesting.
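For concreteness, a minimal sketch of that crude profiling method on a
throwaway script (the demo script and its hot spot are invented; with a
real script you would trace the script itself):

```shell
# Trace a script with `sh -x', then count repeated trace lines.
# Lines executed many times with identical expansions sort to the end.
demo=/tmp/prof-demo.$$
cat > "$demo" <<'EOF'
i=0
while test $i -lt 5; do
  : hot-spot
  i=`expr $i + 1`
done
EOF

profile=`sh -x "$demo" 2>&1 | sort | uniq -c | sort -k1n | tail -3`
printf '%s\n' "$profile"   # the most repeated line, `: hot-spot', sorts last
rm -f "$demo"
```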
Once you're below a couple of seconds, counting the number of forks
becomes interesting for w32 systems (where current systems can still
only create a few hundred forks per second). I don't think that's an
area where gnulib-tool can profit much, though (unlike libtool, which
is called very often and runs for a rather short time).
Be aware that Korn shells reset/restore the -x flag upon function
entry/exit, but for example bash doesn't. In CVS Libtool, we put
  $opt_debug
as the first command in interesting functions, and set it to `set -x'
if debugging. But for crude profiling, using bash is usually enough.
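A sketch of that CVS Libtool idiom (the function name here is made up):
$opt_debug expands to the no-op `:' normally, or to `set -x' when
debugging, which turns tracing on for every instrumented function at
once.

```shell
# Per-function tracing hook, Libtool-style.
opt_debug=:
# opt_debug='set -x'    # uncomment to trace instrumented functions

func_example ()
{
  $opt_debug
  echo "doing work"
}

func_example            # prints "doing work"
```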
Note however that using a different shell may have a huge impact on
performance (some ksh variants are about twice as fast as older bash
when executing gnulib-tool).
And last but not least: lots of measurement. ;-)
> > PS: For the bootstrapping of gettext, it saves roughly 40s.
>
> This is indeed a figure that I cannot ignore. :-)
I think it may be possible to shave off another good part, but the next
changes probably have a higher ratio of source code change to
improvement (caching func_lookup_file results could help, or rewriting
func_get_* to work on lists of modules; neither seems like
too-high-hanging fruit).
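The caching idea could look something like the following hypothetical
sketch, memoizing a lookup result in a shell variable keyed by a
sanitized argument (all names and the stand-in lookup are made up; this
is not gnulib-tool's actual code):

```shell
# Memoize an expensive lookup in a shell variable per argument.
func_lookup_file_cached ()
{
  key=`echo "$1" | sed 's/[^a-zA-Z0-9]/_/g'`
  eval "cached=\$lookup_cache_$key"
  if test -z "$cached"; then
    # stand-in for the expensive real lookup
    cached="looked-up:$1"
    eval "lookup_cache_$key=\$cached"
  fi
  echo "$cached"
}

func_lookup_file_cached m4/gnulib.m4
func_lookup_file_cached m4/gnulib.m4   # second call hits the cache
```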
Cheers,
Ralf