automake-patches
From: Ralf Wildenhues
Subject: Re: [PATCH v4 3/3] parallel-tests: allow each test to have multiple results
Date: Mon, 20 Jun 2011 22:38:55 +0200

Hi Stefano,

* Stefano Lattarini wrote on Mon, Jun 20, 2011 at 10:26:06PM CEST:
> On Monday 20 June 2011, Ralf Wildenhues wrote:
> > Why not just split the whole documentation change into a followup patch
> > then?
> >
> Because that would only postpone, not avoid, the continuous tweaking and
> amending of the documentation; what I'd like instead is to improve it
> organically and incrementally, in separate patches; i.e., commit a sketchy
> but correct (even if incomplete) documentation first, and then improve it
> with follow-ups (maybe handling one concept or one part at a time).

Well, then please split the incomplete doc part of your patches into
separate patches, so I can say "no" to them and "yes" to the other ones
easily.  ;-)

> > > > This will be quite fork expensive, if done in real-world code.
> > > >
> > > But this is in a script used only for testing.  I don't think
> > > it's worth optimizing it.
> > 
> > No, it's not, but your real scripts won't look all that different.
> > Besides, why not do it right the first time?
> >
> I still honestly don't see the point of this, but I've thrown in a couple
> of optimizations for bash, zsh and XSI shells (see attached squash-in).
> Is that enough?

Just leave that out.  You are right that this kind of micro optimization
is not a good strategy if done in an unorganized way and without
planning.  Leave it for later, but still keep half an eye on the
overhead, so that it doesn't end up something like 4 times slower.
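(For readers of the archive: a hypothetical illustration, not taken from
Stefano's squash-in, of the kind of fork trimming meant here for XSI
shells; replacing an external sed pipeline with parameter expansion:)

```shell
#!/bin/sh
# Hypothetical sketch: strip a ".test" suffix from a file name.
file=foo.test

# Portable form: costs one subshell plus one sed process per call.
base=`echo "$file" | sed 's/\.test$//'`
echo "$base"

# XSI/POSIX form: pure parameter expansion, no fork at all.
base=${file%.test}
echo "$base"
```

Both forms print "foo"; in a loop over hundreds of tests, the second
form saves two processes per iteration.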

> > I don't actually care much about micro optimizations like the above at
> > this point, but I do care when the whole set of code changes will
> > introduce a factor of 2 slowdown in the test suite overhead.  It looks
> > like it eventually may, judging from the measurements you've done, and
> > that's what I am trying to prevent.  On w32, that would cause real pain.
> >
> But maybe it would be worth trying to instead optimize stuff like
> $(am__check_pre) and $(am__vpath_adj_setup), where we could trim
> extra forks in the case of XSI shells or bash.

Doesn't sound like it would bring your project forward at this point.

I'm sorry I brought this topic up before.  I shouldn't have.

> I.e., optimize an
> existing and tested implementation instead of holding back a
> promising design due to *possible* future performance problems.

Well, I don't like this attitude.  If something will have a performance
problem, then maybe it was not all that promising after all.  I'm not
claiming your approach has one, however.  All I'm suggesting is that you
keep an eye on it.

> Also, "execing" the test driver in check2.am instead of "spawning" it
> could avoid an expensive fork.  But we should then test at configure
> time that $SHELL can gracefully handle such "execing" w.r.t. the use
> of $(TESTS_ENVIRONMENT); i.e., that a usage like:
>   "9>&2 foo=bar exec sh -c 'echo $foo >&9'"
> does the expected thing (hint: it does with dash, bash, zsh, NetBSD
> /bin/sh and Debian ksh; it doesn't with Solaris /bin/ksh and /bin/sh).

Such changes should only be done after demonstrating that they actually
cause measurable speedups.  And that they have no semantic changes
(of which I am not so sure).  For example, the parallel BSD makes tend
to reuse shells for running the recipe commands; I'm not so sure they
like it if their shells go away with an exec.
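(Archive note: a hypothetical configure-time probe for the one-liner
Stefano quotes above; it checks whether the shell carries both the
per-command variable assignment and the extra file descriptor across
"exec".  Per the thread, it should print "exec ok" under dash, bash and
zsh, and "must spawn" under Solaris /bin/ksh and /bin/sh:)

```shell
#!/bin/sh
# Hypothetical probe, not from the patch.  Duplicate stdout onto fd 9,
# set foo=bar for the exec'ed command, and see whether the inner shell
# can still read both after the exec.
probe="9>&1 foo=bar exec sh -c 'echo \$foo >&9'"
result=`sh -c "$probe" 2>/dev/null`
if test "x$result" = xbar; then
  echo "exec ok"
else
  echo "must spawn"
fi
```

A harness could run this probe at configure time and only substitute
the fork-saving "exec" form of the driver invocation when it prints
"exec ok".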

Cheers,
Ralf
