Speaking of "real work on a real system" there could be several projects
that would be useful.
view2d / view3d
For the browser-based version the graphics goal is to replace the view2d and
view3d programs with HTML5 canvas-based versions. The first step would
be to document the message patterns from Axiom to view2d and back. Once
that protocol is documented, the next step would be to use something like
websockets to display the messages as browser text and automate the replies.
Once the messages are understood, they need to be turned into HTML5 canvas
drawing commands. After the drawing works there is another layer of
control, because clicking on a view2d/view3d image brings up a control
panel. This could be done with menus or buttons in the web page rather
than as a separate window.
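The middle translation layer can be sketched in a few lines. The message
names below (LINE, POINT) are invented for illustration; the real
Axiom-to-view2d protocol is exactly what the documentation step above
would pin down.

```python
import json

# HYPOTHETICAL message format -- the real Axiom graphics protocol still
# needs to be documented; this only illustrates the translation layer.
def to_canvas_command(message):
    """Translate one (assumed) Axiom graphics message into a JSON
    command a browser-side script could replay on an HTML5 canvas."""
    parts = message.split()
    if parts[0] == "LINE":           # e.g. "LINE x1 y1 x2 y2"
        x1, y1, x2, y2 = map(float, parts[1:5])
        return json.dumps({"op": "line", "from": [x1, y1], "to": [x2, y2]})
    if parts[0] == "POINT":          # e.g. "POINT x y"
        x, y = map(float, parts[1:3])
        return json.dumps({"op": "point", "at": [x, y]})
    raise ValueError("unrecognized message: " + message)

print(to_canvas_command("LINE 0 0 100 50"))
# prints: {"op": "line", "from": [0.0, 0.0], "to": [100.0, 50.0]}
```

A browser-side script would replay these commands against a canvas 2d
context. The point is that once the message patterns are known, the
translation is a small, separately testable layer.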
If a student wants to do this, they should look at the book
"Physically Based Rendering". This book won an Academy Award.
It is the canonical example of where Axiom needs to go.
Deconstructing Hyperdoc
Hyperdoc was our pre-internet version of a browser. Some portions of it are
pre-canned scripts (some of which I've already written, as you can see from
the sources). Some portions interact with the running Axiom to send an
input line and open a DIV section with the response. I have also written some
prototype code for that in the sources.
Some page output is generated dynamically by Axiom. This code needs to be
rewritten so it creates browser-based text rather than hyperdoc text.
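For the dynamically generated pages, the rewrite amounts to emitting
HTML instead of hyperdoc markup. A minimal sketch, assuming an invented
"axiom-response" class name that is not part of the existing sources:

```python
import html

def response_div(input_line, response):
    # wrap one Axiom input line and its output in a DIV, escaping any
    # characters that would otherwise be misread as HTML markup
    return ('<div class="axiom-response">\n'
            '  <pre>' + html.escape(input_line) + '</pre>\n'
            '  <pre>' + html.escape(response) + '</pre>\n'
            '</div>')

print(response_div("integrate(1/x,x)", "log(x)"))
```

The same function, attached to the websocket layer described above,
would let each input/response pair open its own DIV section in the page.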
Removing Makefiles
We currently use 'make' to build the system. But Axiom is trying to remove
as many tools and external dependencies as possible. Each thing we remove
is one less thing users need to pre-install and one less thing to maintain.
The task would be to trace the 'make' scripts to find out the exact sequence
of commands to build Axiom. These would then be written as lisp functions.
So instead of using 'make', we start lisp and use it to build Axiom, which is
a large lisp program.
Some of this tracing is already done in anticipation of this rewrite.
The Makefile files are documented in, and generated from, the
Makefile.pamphlet files.
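The shape of the result can be sketched briefly. Python is used here
only for illustration; the real version would be Lisp functions loaded
into the image that builds Axiom, and the step names below are
placeholders, not Axiom's actual build targets.

```python
# each 'make' target becomes an ordinary function; the real versions
# would invoke the compiler rather than return a log line
def build_interp():  return "compile the interpreter sources"
def build_algebra(): return "compile the algebra sources"
def build_doc():     return "latex the pamphlet files"

def build_axiom():
    # the exact sequence is what tracing the Makefiles recovers;
    # once known, the build is just calling the steps in order
    return [step() for step in (build_interp, build_algebra, build_doc)]
```

The design point is that the dependency logic 'make' encodes becomes
plain function calls, with no external tool to install or maintain.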
Knuth, Literate Programming, and Axiom
The 'pamphlet' files are raw latex files. You can just run latex on them.
Knuth's original vision of literate programming had 2 tools, 'weave' and
'tangle'. The 'weave' program takes the literate sources and generates
tex-compatible output. Axiom uses latex directly so there is no need for
a 'weave' program.
The 'tangle' program exists; it is called 'tanglec' and lives in the
books subdirectory.
It takes 2 arguments, the source latex file and the name of a 'chunk'.
It writes the chunk contents to stdout.
tanglec bookvol10.3.pamphlet "domain INT Integer" >integer.spad
This will look for a chunk with the given name, e.g.
\begin{chunk}{domain INT Integer}
the integer.spad source code is here
\end{chunk}
There is no real magic in the 'tanglec' program. It just reads the source
file, makes a hash table of every chunk name (concatenating repeated
names into a single block) and then writes out the requested chunk to
stdout.
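That hash-table approach fits in a few lines. Here is a sketch of the
same technique in Python (an illustration, not the actual 'tanglec'
source):

```python
import re
from collections import defaultdict

def tangle(source, chunk_name):
    # collect every \begin{chunk}{name}...\end{chunk} block into a
    # table, concatenating repeated names into a single block, then
    # return the requested chunk's contents
    chunks = defaultdict(str)
    pattern = re.compile(
        r"\\begin\{chunk\}\{([^}]*)\}\n(.*?)\\end\{chunk\}", re.S)
    for name, body in pattern.findall(source):
        chunks[name] += body
    return chunks[chunk_name]

doc = ("\\begin{chunk}{domain INT Integer}\n"
       "the integer.spad source code is here\n"
       "\\end{chunk}\n")
print(tangle(doc, "domain INT Integer"), end="")
# prints: the integer.spad source code is here
```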
The ultimate goal is to make every file a latex document. Most
programs use the "pile of sand" (POS) method to organize sources.
They try to provide "semantics" by clever directory names such as 'doc'.
What they don't realize is that this is left-over technology from the past.
I used to work on a machine with 8k of memory, 4k of which was the
operating system.
If you created a file larger than 4k you crashed the system. So we invented
"include" for files and "overlay linkers" (which are now virtual memory
hardware). Comments cost bytes and were frowned upon.
People STILL write 4k files and POS directories with include files despite
having 16G of memory. They still think comments are expensive and that
'documentation' is worthless. We need to think of 'explanations', not
documentation or comments. POS programming needs to die.
Explanations
If you cover a topic in class (e.g. CAD, Groebner, Berlekamp factoring, etc)