From: Tom Lord
Subject: Re: [Gnu-arch-users] Managing changes to projects that use autoconf/automake with tla
Date: Tue, 6 Apr 2004 08:49:03 -0700 (PDT)

    > From: "Stephen J. Turnbull" <address@hidden>

    >     Tom> Insert here a bitter rant about how certain vendors don't
    >     Tom> really give a crap about things they ought to :-)

    > If fish had wings, sushi chefs would shoot skeet.

Yes, but if the Romans had had 0 and decimal notation, they might have
discovered calculus.

    > BTW, you have maybe an URL for that rant?


But think about it: Miles' comment was about the lack of superior
alternatives to current build systems.

When the number of projects started to proliferate and the kernel
appeared, the individual weaknesses of each package's
configure/build/install process, each only a minor inconvenience in
isolation, added up to a fairly substantial part of the difficulty of
assembling a complete GNU/Linux system.

In the early days of GNU, configure/build conventions had _started_ to
be developed and bootstrapping systems was given some direct
attention, but that effort more or less stalled.

That created an entrepreneurial opportunity which quite a few people
took up:  to do all the grunt-work of assembling a complete system;
even to make it available in binary form.   

In some ways, the failure of the technology was a great boon.  It
lowered development costs for projects that didn't have to worry about
getting configure/build/install "right";  it created the first
successful GNU/Linux business model.

Nobody ever had any incentive to fix the technology.  The businesses
grew up and the market consolidated -- but there never arose any
central point of focus for attention to configure/build/install (with
two notable semi-exceptions, see below).

You can contrast that with another technology problem: differing file
system layouts and core operating environments among competing
GNU/Linux systems.  That affected portability across these systems and
that weakness was felt most by third-party (typically non-free) ISVs.
ISV dissatisfaction was felt much more directly by platform vendors
and, so: LSB.

The most prominent semi-exceptions I'm aware of are Debian and, now,
the LSB itself -- both of whom work on configure/build/install in the
form of standards and tools.  Debian goes even further and actually
does the work of porting thousands of packages to their system as a
public project.  LSB, afaict, views the build problem as an extension
of the ISV problem --- ISVs need not only a consistent operating
environment for their running applications, they need consistent build
and packaging environments to produce those applications in
distributable form.

But even those two semi-exceptions illustrate how the interests in
solving the problem shaped up -- hence, how the problem came to be
commonly understood.  Both are based on adding additional layers to
the configure/build/install stack where the additional layers:

1) Define standard, non-portable build environments.  ("Non-portable" 
   is relative.  I don't count portability among LSB-conforming
   environments as "portable".)

2) Define and implement wrapper technology for driving more or 
   less arbitrary package-specific configure/build/install processes.
   Dependencies are typically declared here and inconsistencies in
   individual packages' c/b/i methods are worked around.

3) Define and implement wrapper technology for package systems.

Underneath those layers we still have:

0) Individual package c/b/i technology, typically auto*, the Python
   configure library, Ant, etc.
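As a deliberately simplified illustration of what a layer-2 wrapper
amounts to, here is a distribution-side script that drives a package's
own layer-0 process.  The package name, paths, and dry-run style are
all hypothetical; real tools (debian/rules, RPM spec %build/%install
sections) are far more elaborate.

```shell
#!/bin/sh
# Hypothetical layer-2 wrapper: standardizes an arbitrary package's
# layer-0 configure/build/install steps and stages the result for a
# layer-3 package system.  This is a dry run that only prints the
# steps it would perform; nothing here is a real vendor tool.

build_package() {
    pkg=$1
    stage=/tmp/stage-$pkg
    echo "==> configure: (cd $pkg && ./configure --prefix=/usr)"
    echo "==> build:     (cd $pkg && make)"
    echo "==> stage:     (cd $pkg && make DESTDIR=$stage install)"
}

build_package hello-1.0
```

The wrapper's job is exactly the redundant work described below: it
re-expresses, in the distribution's standard form, facts the upstream
author already knew.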

I have a few problems with the stack as it has evolved.

First, layers 2 and 0 should not really be separate or, more
precisely, layer 2 should be far thinner than it is.  As it stands,
layers 0 and 2 are typically implemented by separate parties for each
package; layer 2 is typically implemented multiple times; and the work
needed to implement layer 2 is largely a matter of figuring out things
already known to the author of layer 0 and redundantly expressing that
information in a standard form.  The opportunity missed here (now
especially difficult because each upstream project would have to agree
to take up the opportunity) is to build c/b/i tools which are a
pleasure to use for layer 0 needs, but which also as a side-effect
present most of what's needed for layer 2 in a useful form.
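One way to picture that missed opportunity: a package could declare
its c/b/i facts once, in a form its own layer-0 build consumes and a
layer-2 wrapper can read mechanically.  The file format, field names,
and package below are invented purely for illustration.

```shell
#!/bin/sh
# Hypothetical single-source package declaration.  The format and
# field names are made up; the point is that layer 0 and layer 2
# would read the same file instead of each re-deriving the facts.
cat > pkg-info <<'EOF'
name=hello
version=1.0
depends=libfoo >= 2.0
EOF

# The upstream build (layer 0) might use name/version for dist
# tarballs; the distribution wrapper (layer 2) extracts dependencies
# from the very same declaration:
grep '^depends=' pkg-info | sed 's/^depends=//'   # -> libfoo >= 2.0
```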

Second, there's layer 1.  Ah, layer 1.  The whole _point_ of auto* in
the first place was to minimize layer 1, principally by implementing
configure-time resource discovery.  It's arguable that auto* took off
in the wrong direction from the very first day.  Portability libraries
might have been far more effective in the long run and resulted in
cleaner code all around.  (Paradoxically, something auto*-ish _is_
needed to bootstrap that, in the c/b/i process for the portability
libraries -- but above that finite amount of code, complex
configure-time resource discovery and heavily conditionalized code are
just a nightmare, and packages should be sheltered from the need for
them.)

The definition we've evolved (both formal and informal) of layer 1
means that the portability features of auto* are now quite often used
to implement portability across the entire range of glibc-using,
GNU/Linux, LSB-conforming platforms.   I.e., no portability at all.

It doesn't take much to make overreliance on layer 1 into a huge
problem.   9 packages may, individually, do a pretty good job of using
auto* correctly but then if they all depend on the same 10th package,
and that 10th package does a poor job -- then all 10 are suddenly not
portable after all.

Third, people and organizations implementing layer 2 for the core
packages of a typical GNU/Linux system have that activity as their major
raison d'etre.  They may ask upstream to fix this or that c/b/i bug to
make their job easier -- but they generally don't have strategy
meetings where the agenda is "What can we ask upstreams to do that
will put us out of business?"  The closest we get to that is when 3rd
party folks send upstream maintainers like me package configuration
files and ask that they be included in distributions (an unattractive
option because they are system specific, because no provisions have
been made anywhere to maintain them, and because even if such
provisions were made it's just "extra work" -- the work of maintaining
the package configuration doesn't get re-used in ways that help the
upstream project generally).

Fourth, layer 1 plus things like the LSB operating environment
standards are led, by economic pressure, to walk a thin line.  On the
one hand, they have to be strong enough that an ISV can be assured his
code will port easily among a subset of competing GNU/Linux systems.
On the other hand, the standards have to be weak enough so that ISVs
provide certifications not for "LSB environments" but for specific
vendor environments.  Would not the alternative here be test-based
certifications?  "If your system passes standard tests X, Y, and Z,
then our application is certified on it.  We'll support it."  And,
indeed, that's _almost_ exactly what happens except that the
"standard" tests are instead closely held private tests of particular
vendors and the certifications are for brand names.
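Sketched in code, the test-based certification scheme just described
is simply: run the standard suites, certify on success.  The test
names below are stubs standing in for real conformance suites; nothing
here reflects any actual vendor's certification process.

```shell
#!/bin/sh
# Stub "standard tests" (always pass in this sketch).  In reality
# these would be published, vendor-neutral conformance suites.
test_X() { true; }
test_Y() { true; }
test_Z() { true; }

# Certify a system iff it passes every standard test.
certify_system() {
    for t in test_X test_Y test_Z; do
        if ! "$t"; then
            echo "not certified: $t failed"
            return 1
        fi
    done
    echo "certified"
}

certify_system   # prints: certified
```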

So let's sum up:

1) Vague GNU-project motion in the direction of c/b/i standards.

2) Many projects went their own way and had their own glitches.

3) Summed across an entire system, those glitches add up to a big
   problem.

4) That problem can be solved deep down, with better c/b/i standards
   and tools -- or it can be worked around, perpetually, largely by
   hand.  Solving it deep down requires a decision to cooperate by
   many upstream projects led off by the development of better tools.
   Therefore, solving it by hand became one of the cornerstones of the
   first GNU/Linux businesses -- it was part of their "value added".

5) Now we have a deeply ingrained c/b/i architecture which, in 
   effect, damages portability and consumes way more labor than,
   in principle, it really needs.  Vendor interests manage their parts
   of the c/b/i architecture in such a way as to protect the 
   lock-in value of their own platforms at the expense of others.

This circumstance creates a positive feedback loop that just
reinforces itself.   The effective meaning of "use auto*" has shifted
from "write very portable, easily ported code" to "make it easy for
platform vendors to write layer 2 wrappers for your package".
Consequently, many packages simply assume that they are building
against glibc, that 10 other non-standard packages are present on the
system, that executables and data files are always installed in the
same place ..... and the intertwingling of dependencies among packages
just gets worse and worse.   We've collectively gone from making
software useful to as many people as practical to making software
useful exclusively on a narrow range of GNU/Linux systems.

What if the problem were solved tomorrow -- if we could, as you put
it, stick some wings on that fish?

One can only speculate.  I speculate that alternative kernels and libc
implementations would be more viable, that the form and function of
the platform distribution business would shift away from
consolidation, that test-based certification would be more important,
that packages would be significantly less interdependent, and on and
on.

In short, the degrees of freedom enjoyed by businesses and people
using free and open source software would be much greater.  Instead of
a contest to see who among the vendors can make the best installer or
the prettiest desktop or which can win the greatest number of ISV
certs for essentially the same system that everyone is selling ---
we'd have instead an explosion of entrepreneurial opportunities
relating to cost-effectively assembling customized, site-specific,
extensible computing environments.   The old 1980s campus model of
computing instead of the 1990s MSFT model.

Should current vendors care about that?  Should they (among other
steps) think about how to improve c/b/i practices among upstream
projects?  It seems short-sighted to me that they would not.  The
barriers to entry they are otherwise protecting will fall eventually
and, meanwhile, analysts and customers alike are starting to notice
these barriers and express dissatisfaction.  Coupled with the overall
low investments in innovative R&D of these vendors, it's a troubling
picture that actually lends credence to MSFT's rants about how free
software stifles innovation.  So, yes, these vendors should care.
Lowering customer costs (to the degree they still are) is a fine thing
to do -- and fixing c/b/i infrastructure will significantly lower
their costs and leave more of the remaining revenue for genuine
innovation rather than spending it on building and maintaining
c/b/i-layer-2 packages.

