
[Axiom-developer] address@hidden: Re: atan2]

From: root
Subject: [Axiom-developer] address@hidden: Re: atan2]
Date: Mon, 14 Oct 2002 10:42:29 -0400

------- Start of forwarded message -------
Date: Mon, 14 Oct 2002 07:54:54 -0400
From: root <address@hidden>
To: address@hidden
Subject: [axiom] atan2
Reply-to: address@hidden
Cc: address@hidden

(God, how I love Emacs. My system crashed hard while I was typing this
and not one word of my immortal prose was lost.)

> Glad to be in touch - I had been intending to signal you but have now
> become somewhat swamped under start of term etc.

Yes, I've just started a new job (at City College of New York) and
I've had a steep learning curve to climb there. I'm on the team that
created Magnus, working with a bunch of experts in infinite group theory.
While I understood this stuff centuries ago it has wilted a bit with age,
so I've been reading math books in my spare time. Magnus is a special-purpose
computer algebra system that is dying of "code rot" (the authors
were grad students who have left the field; the experts in the group are
not programmers). I'm hoping to keep Axiom from the same fate.

> sin, cos, atan2 are in the standard C library, as in #include <math.h>.
> On SOME systems you need to link with "-lm" to pick them up, on many
> others they are there without fuss. On my Red Hat 7.3 you can find the
> actual declarations hidden in obscure mess in
> /usr/include/bits/mathcalls.h that /usr/include/math.h itself #includes.
> On some machines many of these get open-compiled when the floating point
> unit has magic to do them.

Ah, right. I could guess that but 11pm isn't conducive to insightful thinking.

> I have been finding the nested Makefiles hard to sort things out through.
> I had hoped to do test builds on Windows which is the system I run at
> home, and my next choice would have been cygwin there. With the build
> process as messy as it is at present windows is not an easy prospect.
> Under cygwin when I do step 1, ie "make" in the development directory,
> cygwin make coredumps on me.  The linux setup says it is for glibc2.1 and
> I have 2.2 on Red Hat 7.3... it has been much harder and uphill work to
> get started than I had hoped!

Re: Literate Programming

Actually, I'm going to write up a literate document that explains the nested
makefile structure. It'll use noweb to document the pile. noweb is a
variant of Knuth's idea of literate programming which I plan to use to
document the whole of the system.

The literate programming idea (assuming you haven't seen it) in its
simple form is that you write a document in TeX that has a few special
tags of the form
  <<something>>=
  ...code...
  @
which allow you to mix TeX (or LaTeX) and code. Once you have the
document (I call it a pamphlet) you can run two programs against it:
  noweave foo.pamphlet >foo.tex
  notangle foo.pamphlet >foo.code
where noweave will generate the tex documentation of the code
and notangle will generate the actual running code. I've already 
rewritten the DHMATRIX domain in this form. (DHMATRIX was derived
from Richard Paul's Ph.D. thesis and he was kind enough to let me
quote directly from that document).
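For concreteness, here is a minimal sketch of what a pamphlet might look
like. The chunk name and the Lisp snippet are invented for illustration;
they are not taken from the actual DHMATRIX pamphlet:

```
\documentclass{article}
\begin{document}
\section{Squaring}
The chunk below defines a hypothetical helper.

<<example chunk>>=
(defun square (x)
  (* x x))
@

Prose can continue after the @ that closes the chunk.
\end{document}
```

noweave typesets both the prose and the chunks; notangle strips the prose
and keeps only the code between <<...>>= and @.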

I'll send you the tex and output files until you have the ability
to handle pamphlets.

Anyway, I plan to write up the recursive Makefile chain the same way.
(as well as the ccl files as I need to understand them deeply anyway).

Re: Recursive Makefiles

To get you started the idea of the recursive Makefile chain is that the
base Makefile will create "global" ${FOO} variables. These variables are
added to the temporary environment ${ENV} which prefixes each recursive
call to make. The next Makefile, one layer down, adds yet more variables
to the ${ENV} and calls its children.

Each Makefile only knows how to make the files in its own subtree.
There is a recursive ${MAKE} call for each subtree in a directory.
Each parent Makefile has to 
  (1) set up environment variables, 
  (2) set up conditions for its children, 
  (3) build any files for which it is directly responsible,
  (4) invoke its child Makefiles (one per subdirectory), and
  (5) clean up the mess.
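A minimal sketch of that parent pattern (the directory names and variables
here are invented for illustration, not the actual Axiom tree; recipe lines
must be tab-indented):

```makefile
# Hypothetical parent Makefile: extend ${ENV}, then recurse per subtree.
SPAD=/spad                    # (1) an invented "global" from the root Makefile
ENV=SPAD=${SPAD} SYS=${SYS}   # (2) environment prefix handed to every child

all: localfiles subdirs

localfiles:                   # (3) files this Makefile itself owns
	touch version.stamp

subdirs:                      # (4) one recursive ${MAKE} per subdirectory
	cd algebra && ${ENV} ${MAKE} all
	cd interp  && ${ENV} ${MAKE} all

clean:                        # (5) clean up the mess
	rm -f version.stamp
	cd algebra && ${MAKE} clean
	cd interp  && ${MAKE} clean
```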

The "root" makefile is a special case. It sets up truly global variables
then calls a sibling makefile for the kind of system build you want. All
of the system specific environment variables are in the siblings.
The reason for this is that the makefile tree is intended to work on NFS
mounted directories. You NFS mount the target filesystem, type 
"make whatever" and it handles the details automatically to build a
proper system for the architecture you need. It works rather well:
I was able to build systems spanning the spectrum of IBM/360, Intel,
Sparc, PowerPC, etc. in one Makefile tree. I know it seems painful
but once you understand the limited scope of each makefile it is 
rather obvious where things belong (think of scope issues in programming).

The directory structure is important also. There are 5 primary directories:
lsp, src, int, obj, and mnt.

These are kept separate for a reason. The basic idea
is to keep the "pure" source files separate from the machine-generated
files, and to keep the system-dependent files separate from the
system-independent files. The cross-product of these two distinctions
gives us 4 of the 5 directories:

 src = (system independent, human generated   e.g. .boot files)
 int = (system independent, machine generated e.g. .lsp files from .boot)
 obj = (system   dependent, machine generated e.g. .o files)
 mnt = (system   dependent, final image code  e.g. .image files)
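As a sketch of how one file flows through those four directories (the file
names and the "linux" target below are invented for illustration, not the
actual Axiom layout):

```shell
SYS=linux   # hypothetical target architecture name
mkdir -p src/algebra int/algebra obj/$SYS/algebra mnt/$SYS

echo 'boot source'   > src/algebra/foo.boot    # human-written, read-only
echo 'generated lsp' > int/algebra/foo.lsp     # machine-generated, portable cache
echo 'object file'   > obj/$SYS/algebra/foo.o  # machine-generated, per-target
echo 'final image'   > mnt/$SYS/foo.image      # shipped, per-target

# Only obj/$SYS and mnt/$SYS change for a new architecture;
# src and int are reused as-is.
find src int obj mnt -type f
```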

src is code we write. It is always read-only to the machine and makefiles.

int is code the machine writes (the lisp generated from the boot code), but
it only needs to be regenerated when something changes. This considerably
shortens the build process (by roughly a factor of 10^3); it is basically a
cache, so removing this directory has no effect beyond forcing a rebuild.
Normally this is mounted read-only
once the first build occurs as there is no need to write over the cache
files. There is nothing cached that depends on any particular target
architecture so we can reuse all of this work no matter what kind of
system we are building.

obj is code that depends on the target architecture, usually compiler
files like foo.o and such. This is "scratch space" for the makefiles
that allows compilers, documentation systems, and other machinery to
build up their working files. This directory can be completely removed
as the Makefiles will rebuild it if needed. It contains nothing 
permanent and does not include anything that gets shipped (although
it might be built here and copied to the final image).

mnt is the final system image for a particular target architecture.
You can copy this directory once the build completes; thus the final
executables are always under ~/mnt/(target)/....

Using this directory structure you can have a master build system
which contains only the src directory. On the master build system
you NFS mount empty file systems under obj and mnt. Next you type
"make systemtype". The master Makefile sets up the globals, invokes
Makefile.systemtype to set up the system-special globals, and starts
the build. A side-effect of the build is to build all the subdirectory
structure in int, obj and the final mnt ship. Now you have cached work
in int you can keep, trash files in obj you can forget and a shipped
system in mnt you can run. NFS mount a new obj and a new mnt for a new
architecture and type "make nextsystem" and it all works again.
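As a sketch of that flow (the host names, mount sources, and make targets
here are all invented; the real targets depend on the sibling
Makefile.systemtype files):

```
# On the master build system, which holds only src:
mount buildhost-a:/scratch obj   # empty, per-architecture scratch space
mount buildhost-a:/ship    mnt   # empty, per-architecture ship directory
make systemtype-a                # populates int (cache), obj, and mnt

# Swap in fresh filesystems for the next architecture; int is reused.
umount obj mnt
mount buildhost-b:/scratch obj
mount buildhost-b:/ship    mnt
make systemtype-b
```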

Hope this helps.

Also of interest is that I'm planning to build the system on two
different host services. There is a hidden service where I can make
my mistakes in private and a world-available service on Savannah.


------- End of forwarded message -------
