Subject: Re: Emacs vista build failures
Date: Tue, 15 Jul 2008 10:14:49 -0700
User-agent: Thunderbird 22.214.171.124 (X11/20060808)
David Kastrup wrote:
>> The root problem with install difficulties, network config difficulties,
>> and divergent opinions about how to lay out an emacs install is simply
>> that unix user space and unix "best" practices for source management
>> haven't much improved for almost two decades.
>
> Having worked with Unix and Unix-like installations for more than two
> decades, I can only say that you are utterly wrong.
More than two decades, eh? Like me, then, you probably cut your teeth
on BSD, BSD-based SunOS, later HP-UX, AIX, Ultrix, Irix, Solaris,
NextStep and eventually, GNU/Linux.
We could trade war stories, no doubt. It's interesting, for example,
the way that the proliferation of gratuitously different flavors of
unix led to the perceived need and then deployment of Autoconf
in the GNU project and how that in turn influenced coding practices.
Again and again we erect new structures over rotten foundations
rather than fixing the foundations. Sometimes the solution is not
*more* code but less code, better written.
In any event, the "make" paradigm and its same-but-different descendant
tools come from slightly before both our times. The various "/etc"
(and other location) system configuration files come from before
our time; networking configuration from around our earliest days.
The way that $PATH and later $LD_LIBRARY_PATH work is from
early in your experience and mine, and with those (and older traditions)
came the gist of contemporary directory layouts for system files.
Common version naming and numbering practices go back to then.
"patch" goes back to then. The absence of any systematic, automated
way to build dependency management in to the development framework
stems from that era.
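The left-to-right search that $PATH performs (and that $LD_LIBRARY_PATH mimics for shared libraries) is easy to demonstrate in a shell session. A small sketch, using throwaway directories under /tmp (the directory and command names are made up for the illustration):

```shell
# Two directories, each holding an executable named "hello".
# Whichever directory appears FIRST on $PATH wins the lookup.
mkdir -p /tmp/pathdemo/a /tmp/pathdemo/b
printf '#!/bin/sh\necho from-a\n' > /tmp/pathdemo/a/hello
printf '#!/bin/sh\necho from-b\n' > /tmp/pathdemo/b/hello
chmod +x /tmp/pathdemo/a/hello /tmp/pathdemo/b/hello

PATH="/tmp/pathdemo/a:/tmp/pathdemo/b:$PATH"
result=$(hello)   # resolves to /tmp/pathdemo/a/hello
echo "$result"    # prints "from-a"
```

Reordering the two entries on $PATH flips the answer, which is exactly the property that makes $PATH-style configuration both flexible and fragile: the result depends on global, order-sensitive state.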
Though not as standardized as even all those bits, the first attempts
to add automated package management and dependency management
go back to that era as well -- I seem to recall a fair number of Usenix
papers reporting on new variations and experience. The GNU/Linux
world has most recently spent about a decade recapitulating that experience --
poorly. It was a dead end then and that hasn't changed.
> And in fact, it is mostly the driving force of the _free_ Unix variants
> that has brought forward most advances in source management, package
> management, and network configurations.
And those advances have been and will continue to be purely
incremental, hard-won, and perpetually (perhaps even increasingly)
fragile. And there *still* is no successful, comprehensive system
for on-line documentation and an expectation that every new serious
package *uses it*. There are still essentially as many config file syntaxes
as there are config files (only now there are more config files). There
is still, therefore, no robust, systematic way to write higher-level,
user-friendly system configuration tools. There is, therefore, no robust,
systematic way to write model-checking tools to sanity-check and diagnose
configurations.
There is still no way to write robust, systematic, transactional and version
control management tools for configs (the oft-repeated "just stick /etc under
CVS/SVN/whatever" is neither transactional nor well version controlled).
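A genuinely transactional config tool would need, at minimum, single-file atomicity, which "commit the working tree" workflows do not provide: a reader can observe a half-written file between the edit and the commit. One standard building block is the write-to-temp, sync, rename pattern, since rename() is atomic within a filesystem on POSIX. A minimal sketch (file names are made up; this is single-file atomicity only, not multi-file transactions or rollback):

```shell
# Write the new contents to a temporary file in the SAME directory,
# force it to disk, then atomically rename it over the old file.
# Readers see either the complete old config or the complete new one.
conf=/tmp/txdemo/demo.conf
mkdir -p /tmp/txdemo
printf 'option = old\n' > "$conf"

tmp=$(mktemp /tmp/txdemo/demo.conf.XXXXXX)
printf 'option = value\n' > "$tmp"
sync "$tmp"            # flush the new contents to stable storage
mv -f "$tmp" "$conf"   # rename(2): atomic replacement on POSIX

cat "$conf"            # prints "option = value"
```

Even this covers only one file; coordinated changes across several config files, with rollback and audit, are precisely what no widely adopted tool of that era (or since, as the text argues) delivered.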
The problems of system configuration were well recognized by proprietary
unix vendors and large systems shops those 20-some years ago -- that was
the state when we arrived. Little has changed, often not clearly for the
better, and only at substantial expense in volunteer labor and user frustration
(as evidenced in these threads). The proprietary vendors were constrained by
demand for (approximate, at best) upwards compatibility. The GNU project
at least had a broader intent: yes, bootstrap via unix, keep a unix kernel,
and preserve unix compatibility, but also move on and build a new kind of user
space, homogenized around a lisp-based, systematically customizable, extensible,
and self-documenting architecture. And, on top of that ambition, some of us
at least recognized that it was equally important to well-modularize and standardize
the management of aggregated collections of separately developed source packages,
their build and installation, their installed configuration, their dependencies,
their auditing and so forth. The *only* way to solve those latter problems is with
coding and packaging standards that are stronger, more thought out, yet at least
as easy to follow as the GNU standards -- with tools to help with that.
> If you want to get nostalgic at least over configuration, try Slackware
> one of these days. I think it is still pretty much old-spirit.
I use a distribution with an excellent reputation for being one of the best
of the breed in terms of ease of configuration, etc., and I find it to be
very much in (the worst aspects of) the "old-spirit" -- with lots of junk layered
over the old stuff to make it even worse than the worst of the old. It is fragile, ill-documented,
sprawling and ad hoc. That is unsurprising when you try to layer a simulacrum
of higher-level package management on a rotten, 30-year-old foundation that
was never intended in the first place to hold up a structure of this scale.
Before all these recent "improvements" a unix admin had the problem of
grokking lots of different system configuration files and keeping them in
order. Now, with these improvements, an admin has two problems: dealing
with those files *and* not breaking any of the layered (dog-piled) modern
tools that supposedly assist in managing these files.
The early GNU/Linux vendors set out to lead the free software hackers
of the world to build a substitute for 1990s Solaris, and that is exactly what
happened, except that the substitute is pluralized across competing GNU/Linux
distros, and in many ways none of them is as good as 1990s Solaris.
Just like the proprietary vendors as they were wrapping up serious
unix development, we have wound up with some "pretty good" servers,
with user applications as an afterthought, requiring serious admin talent
to keep running well, and the original GNU vision nowhere in sight.
Give me 10 stout hackers. Job #1 is to make a minimal (really minimal)
system for bootstrapping and then re-build user space "from scratch" (of
course, not neglecting to repurpose millions of lines of existing code -- just
not accepting a constraint of sacrificing robustness, quality, and scalability
in favor of going too fast or being gratuitously compatible with traditions
that have never and will never really work).