Re: Parallelizing configure

From: Olaf Lenz
Subject: Re: Parallelizing configure
Date: Wed, 09 Feb 2011 15:54:31 +0100
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv: Gecko/20101208 Thunderbird/3.1.7


On 02/09/2011 04:29 AM, Bob Friesenhahn wrote:
> There are quite a lot of things to address before something exotic 
> like parallelization is considered.

I must say I disagree with most of your points.

Frankly, I do not really understand the fuss about configure running for
a few tens of seconds. After all, as a user you usually have to run it
only once, and as a developer only whenever a new file is added. Is that
really such a pain?
And as for the configure script being a few MB in size: is that really a
problem in a time when we have gigabytes of memory and terabytes of disk
space?

> For one, configure usually chooses the slowest shell on the system. 
> This happens to be a very popular GNU one.  It would be useful for 
> that shell to run much faster, or for configure to work well with a 
> shell which runs much faster.

In fact, configure uses not the slowest shell, but the most ancient one:
the plain Bourne shell, and even then only a subset of its features. The
reason is simply that it is the only shell that is readily available on
virtually all Unix systems. I come from high-performance computing,
where we often have to work with exotic or outdated Unix dialects and
machine architectures, and I am really happy that autoconf relies only
on minimal shell features, because that is often all that is available
by default (as with the stock shells on IBM or Sun systems).
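To illustrate what that subset looks like, here is a hypothetical
feature check written in the style configure emits (the variable and
file names are made up for illustration, not taken from any real
configure script):

```shell
# Hypothetical configure-style check, restricted to the Bourne subset:
# no arrays, no [[ ]], no 'local', no $'...' quoting -- only constructs
# that even an ancient /bin/sh understands.
ac_cv_header_stdio_h=no
cat > conftest.c <<'EOF'
#include <stdio.h>
int
main (void)
{
  return 0;
}
EOF
# ${CC-cc}: use $CC if it is set, otherwise fall back to plain 'cc'.
if ${CC-cc} -c conftest.c >/dev/null 2>&1; then
  ac_cv_header_stdio_h=yes
fi
rm -f conftest.c conftest.o
echo "checking for stdio.h... $ac_cv_header_stdio_h"
```

Because it sticks to this subset, the same script runs unmodified under
Solaris /bin/sh, AIX's shell, dash, or bash.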

> the scriptage by 10X what the user specified.  This is not necessary 
> since common-bits can be versioned and formally installed. Devising 
> ways to make the configure script smaller should help make the 
> configure script run faster.

Again, I see the beauty and power of autoconf in the fact that the
configure script is mostly self-contained and has only minimal
dependencies: a shell, a compiler, and maybe some basic Unix tools
(awk, grep, ...). Any further dependency would just make it harder to use.

Installing some shell code snippets somewhere on the system may sound
simple enough, but given how extremely heterogeneous Unix systems are,
it would be very hard to decide where to install them so that configure
can reliably find them. Or would it be necessary to run a pre-configure
step just to determine the location of the snippets? :-/

> Lastly, a special "shell" designed to run only configure scripts 
> would be quite useful since it could be very small, embeddable, and 
> include extensions specifically for use by configure scripts.  For 
> example, some of these extensions could implement autoconf 
> intrinsics.

The same as above, only worse. Here it would not only be necessary to
install some shell code somewhere on the system; you would also have to
port this new "shell" to every new platform. This is basically the
approach of CMake. CMake is nice if you are on a platform where it runs,
but it does not run everywhere.

No: if it were necessary to have other things installed before a
configure script could run, autoconf would become significantly less
useful on uncommon Unices. I am thinking of the BlueGene/L, where it was
a nightmare to get things like Perl or Python running, and I seriously
doubt that CMake or the other alternatives worked there.

> Extensible light-weight shells like 'rc' and 'es'
> can serve as an example.

No, they can't, because they are not installed by default on unusual
platforms.

Sorry to push back so hard on these suggestions, but they go against
exactly what I like about autoconf!

Dr. rer. nat. Olaf Lenz
Institut für Computerphysik, Pfaffenwaldring 27, D-70569 Stuttgart
Phone: +49-711-685-63607

