Re: [Axiom-developer] Re: doyen
Tue, 17 Oct 2006 08:48:47 -0400
On Monday 16 October 2006 20:05, Alejandro Jakubi wrote:
> Tim, Alfredo, CY
> > One doesn't always want to achieve that. The first question to be asked
> > should be "is this new behavior wrong, or was the old behavior wrong?"
> > (In more subtle cases - obviously a crash is wrong.) It is possible the
> > 2006 result was wrong. That's actually an objection I have heard in the
> > past to
> Correction is one issue and reproducibility is another one.
> By the way, one of my interests in looking at different CAS is checking
> results for errors...
> But the need for reproducibility is basic in science, and the publication of
> scientific results needs dating to set the record. By merging papers and
> code, the need for reproducibility of code results is made explicit. There
> are many possible sources of error in a paper, and errors arising from bugs
> in the CAS used for the calculations are just one of them. Whatever happens
> after publication, eg whether the bug is detected and patched, the
> calculations are made again, a corrected version of the paper is published,
> etc, is a different issue. Anybody should be able to reproduce a given
> result as it was published, right or wrong.
That's not possible in general - if nothing else, the hardware platforms on
which software runs will not remain static. A result produced on a PDP-8
might or might not be reproducible today - doing so reliably would require
either finding a working PDP-8 or building one, which takes considerable time
and effort. Emulators might work, but they introduce correctness questions of
their own.
That's why I think software needs to have formal statements of what is
required from the underlying support systems, and the ability to test that
those systems do in fact provide it. Given sufficient testing, it should be
workable to keep a system running virtually indefinitely. (In a sense that's
what porting is, but rather than "does it run" the test could be "does
EVERYTHING work", which is better for confidence.) Lisp helps in this by
being virtually a "self-contained" environment, and in gcl Paul Dietz's
extensive ANSI test suite can be found. That provides a good benchmark for
whether a lisp is performing as advertised.
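To sketch what I mean by testable requirements (a toy example of mine - the
function name and thresholds are hypothetical, not anything Axiom or gcl
actually defines): in Common Lisp a system could state its host requirements
as executable checks and run them before any build:

    ;; Minimal sketch: state host-environment requirements as
    ;; executable assertions rather than prose in a README.
    (defun check-host-lisp ()
      ;; a conforming implementation advertises :ansi-cl in *features*
      (assert (member :ansi-cl *features*) ()
              "host Lisp does not claim ANSI conformance")
      ;; example numeric requirements such a system might impose
      (assert (>= (float-digits 1.0d0) 50) ()
              "double-float precision too low")
      (assert (>= most-positive-fixnum (1- (expt 2 23))) ()
              "fixnum range too small")
      t)

Running (check-host-lisp) on a candidate platform then gives a concrete
pass/fail answer to "does this environment provide what we require" before
any porting effort is spent.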
I understand what you are saying, but the nature of computers makes what you
are asking for very difficult. The best we can hope to do is make computers
like (say) different X-ray diffraction setups - a result should be
reproducible from one model to another, and if something is not reproducible,
step 1 is to suspect the experimental setup, step 2 the equipment, and step 3
the result against which you are comparing your own. Being able to reproduce
incorrect results is NOT possible in general, either in computers or in
experimental science.
> As I observe in Maple over the last 12 years, there have been changes of
> the most diverse nature in this system. They include patched or obsoleted
> libraries, changes in the language and in the format of the worksheet. Just
> observe that several of the Maple entries in the Rosetta document are
> currently (ie for Maple 10) obsolete. In some cases the syntax no longer
> works. And it is just a few years since it was written!
Right. I view this as a problem with how software is designed. Rather
than "the best of all possible programs" we tend to shoot for "something that
works." Experience is always the best teacher, so we learn from our
mistakes. The benefit of open source is that those old versions are still
available, and in theory they can be made to run. If you NEED to make an old
version run to check an inconsistency between it and a new system, then there
is a problem (that shouldn't have to happen), but at least in theory it can be
done (with a bit of effort). Commercial software does not allow this
possibility.
No one, open source or closed, will maintain an old version with known
problems indefinitely just for the sake of being able to reproduce an
incorrect result - it is not a good use of limited time. The correct approach
is to identify whether the old result or the new one is correct, and why. Yet
another reason I like the idea of automatically generating formal proofs.
But if you REALLY need to reproduce the old result you can (in theory, and in
open source) put in the work to make it happen.
> CAS, when successful, are long term projects. This means that different
> generations of developers work on them over time, each one with its own
> preferences. Which is better, a development following rigid rules or one
> accommodating to circumstances, seems to be a matter of taste. The latter
> model can easily be observed in Maple, which looks like an accumulation of
> "geological strata", with commands working in very different ways depending
> on their era of development.
To me the logical solution to that problem (insofar as it CAN be solved) is to
make working with the CAS as much as possible like working with the
mathematics itself. Sticking close to the mathematics should provide
an "oracle" for what the right way to do things is. Of course this will only
work up to a point, but you could compare it to the body of literature -
you will probably notice strata of paper-writing styles over the decades,
but the underlying mathematics still reads the same.
> Most open source projects are rather new. It will be interesting to see
> whether they endure for decades and how they evolve. In particular, whether
> an Axiom of 2036 will correctly handle a document written with the syntax
> of Axiom 2006.
Unlikely. I would not regard Axiom 2006 as anything like a stable product.
There is a great deal of change that we KNOW has to take place, and that
removes any realistic claim we might have to stability, in my book. Maxima
versions are also technically development versions. We have no formal
language definition for SPAD, and Maxima has no formal definition of its
language - expecting reproducibility is not realistic under such conditions.
It will probably be CLOSE in many cases (Axiom and Maxima do get
many things right, or they wouldn't exist in the first place) but a guarantee
is something else entirely.
> The evolution of the TeX system up to now, where almost any document
> written in the past can be processed today just as it was at the time,
> gives hope that this example could be followed.
TeX was and is an unusual case - almost all of its core logic was finished
before it became a major player. LaTeX is a more realistic comparison, and
older LaTeX documents can occasionally present problems in my experience.
TeX took a very long time to develop, but in the end it was finished. Most
software is not finished, or even close.
I would like Axiom to be finished in terms of everything except its
mathematical abilities (just as TeX is finished except in terms of add-ons
like LaTeX and other convenience packages) but we aren't close yet. That
level of polish almost never happens in software development. TeX is famous
for a reason!
> > general problem. Why shouldn't it be possible to do all of this work
> > inside one larger, robust, and powerful framework? Then each new
> > algorithm and tool would be immediately available for use in any new
> > work.
> Agreed. The utopia of the universal system is very nice!
Very practical too, in that it would end the duplication of effort that must
go into maintaining different systems. Who wants to see all those researchers
out there handling the mundane, boring tasks of build systems, packaging, and
designing basic support environments and libraries for their calculations?
They should be doing what only they can do, not project management!
Ultimately this is all mathematics, and we shouldn't have to re-invent so
many wheels.
Of course I have the same feeling about graphical toolkits, and look where we
stand there :-(.
That's why it's important to study existing systems - how they do things and
more importantly why they do them that way.
> > Axiom's design gives me hope for this goal - it appears to be designed
> > generally enough that it can scale. But there are many years of work
> > ahead to make it a well documented and robust system.
> But note: the usefulness of a system may depend on factors rather
> independent of the quality of its design. For instance, the size and
> diversity of its community of users. Quite frequently I have found that
> the Maple package that does the job was contributed by a user...
Yes, that's true. The only solution I can think of is to incorporate as many
useful/important ideas as possible from those communities into a system that
offers something so compelling it wouldn't be reasonable to ignore it. (Say,
formal verification of the entire system and being able to have a full
axiomatic proof of a result generated for you.)
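To make the proof-certificate idea concrete, here is a toy sketch of mine
(not anything Axiom produces today) in ACL2, a theorem prover that itself
runs on gcl among other Lisps. The point is only the shape of the artifact:
a result stated as a theorem, admitted with a machine-checked proof rather
than delivered as a bare answer:

    ;; Toy sketch: state a result and let the prover certify it.
    ;; ACL2 proves this from its arithmetic axioms and records the
    ;; event, so the claim can be re-checked mechanically later.
    (defthm plus-commutes
      (equal (+ a b) (+ b a)))

A CAS that emitted this kind of certificate alongside each result would let
a reader re-check the mathematics independently of the system (and the
system version) that produced it.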
Right now, everybody builds in swamps because that's all there is. As you
illustrated with your changing Maple versions, the ground can shift all over
the place. Axiom should strive to be solid ground - a platform so compelling
that the cost of NOT investigating and using it is too high to tolerate. It
has a good start, but won't be there for many years. Eventually I think it
could and should become the TeX of computer algebra.