The GitHub lockout continues...

I'm spending some time adding examples to source code.

Any function can have ++X comments added. These will appear as
examples when the function is )display'ed. For example, in
PermutationGroup there is a function 'strongGenerators' defined as:

  strongGenerators : % -> L PERM S
    ++ strongGenerators(gp) returns strong generators for
    ++ the group gp.
    ++
    ++X S:List(Integer) := [1,2,3,4]
    ++X G := symmetricGroup(S)
    ++X strongGenerators(G)

Later, in the interpreter we see:

=

)d op strongGenerators

  There is one exposed function called strongGenerators :
      [1] PermutationGroup(D2) -> List(Permutation(D2)) from
              PermutationGroup(D2)
              if D2 has SETCAT

  Examples of strongGenerators from PermutationGroup

  S:List(Integer) := [1,2,3,4]
  G := symmetricGroup(S)
  strongGenerators(G)

This will show a working example for functions that the
user can copy and use. It is especially useful to show how
to construct working arguments.

These "example" functions are run at build time when
the make command looks like

    make TESTSET=alltests

I hope to add this documentation to all Axiom functions.

In addition, the plan is to add these function calls to the
usual test documentation. That means that all of these examples
will be run and show their output in the final distribution
(mnt/ubuntu/doc/src/input/*.dvi files) so the user can view
the expected output.

Tim

On Fri, Feb 25, 2022 at 6:05 PM Tim Daly <axiomcas@gmail.com> wrote:

It turns out that creating SPAD-looking output is trivial
in Common Lisp. Each class can have a custom print
routine so signatures and ++ comments can each be
printed with their own format.

To ensure that I maintain compatibility I'll be printing
the categories and domains so they look like SPAD code, at least
until I get the proof technology integrated. I will probably
specialize the proof printers to look like the original LEAN
proof syntax. Internally, however, it will all be Common Lisp.

Common Lisp makes so many desirable features so easy.
It is possible to trace dynamically at any level. One could
even write a trace that showed how Axiom arrived at the
solution. Any domain could have special case output syntax
without affecting any other domain so one could write a
tree-like output for proofs. Using greek characters is trivial
so the input and output notation is more mathematical.

Tim

On Thu, Feb 24, 2022 at 10:24 AM Tim Daly <axiomcas@gmail.com> wrote:

Axiom's SPAD code compiles to Common Lisp. The AKCL version
of Common Lisp compiles to C. Three languages and 2 compilers
is a lot to maintain. Further, there are very few people able to
write SPAD and even fewer people able to maintain it.

I've decided that the SANE version of Axiom will be
implemented in pure Common Lisp. I've outlined Axiom's
category / type hierarchy in the Common Lisp Object
System (CLOS). I am now experimenting with re-writing
the functions into Common Lisp.

This will have several long-term effects. It simplifies
the implementation issues. SPAD code blocks a lot of
actions and optimizations that Common Lisp provides.
The Common Lisp language has many more people
who can read, modify, and maintain code. It provides for
interoperability with other Common Lisp projects with
no effort.
Common Lisp is an international standard which ensures that
the code will continue to run.

The input / output mathematics will remain the same. Indeed,
with the new generalizations for first-class dependent types
it will be more general.

This is a big change, similar to eliminating BOOT code
and moving to Literate Programming. This will provide a
better platform for future research work. Current research
is focused on merging Axiom's computer algebra mathematics
with Lean's proof language. The goal is to create a system for
"computational mathematics".

Research is the whole point of Axiom.

Tim

On Sat, Jan 22, 2022 at 9:16 PM Tim Daly <axiomcas@gmail.com> wrote:

I can't stress enough how important it is to listen to Hamming's talk.
Axiom will begin to die the day I stop working on it.

However, proving Axiom correct "down to the metal" is fundamental.
It will merge computer algebra and logic, spawning years of new
research.

Work on fundamental problems.

Tim

On Thu, Dec 30, 2021 at 6:46 PM Tim Daly <axiomcas@gmail.com> wrote:

One of the interesting questions when obtaining a result
is "what functions were called and what was their return value?"
Otherwise known as the "show your work" idea.

There is an idea called the "writer monad" [0], usually
implemented to facilitate logging. We can exploit this
idea to provide "show your work" capability. Each function
can provide this information inside the monad enabling the
question to be answered at any time.

For those unfamiliar with the monad idea, the best explanation
I've found is this video [1].

Tim
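The writer-monad idea can be made concrete with a small sketch. This is an illustrative Python fragment only (names like `gcd_with_work` and the list-based log are my hypothetical choices; a SANE implementation would of course be Common Lisp): every operation returns a (value, log) pair, and `bind` threads the log along so the "work" can be read out at any time.

```python
# Hypothetical sketch of the "writer monad" for "show your work":
# each step returns (value, log); bind concatenates the logs.

def unit(x):
    return (x, [])

def bind(m, f):
    value, log = m
    value2, log2 = f(value)
    return (value2, log + log2)

def gcd_step(pair):
    # one Euclidean step, with a human-readable record of what happened
    a, b = pair
    return ((b, a % b), [f"gcd({a},{b}) -> gcd({b},{a % b})"])

def gcd_with_work(a, b):
    m = unit((a, b))
    while m[0][1] != 0:
        m = bind(m, gcd_step)
    (g, _), log = m
    return g, log

g, work = gcd_with_work(48, 18)
print(g)                  # the answer
print("\n".join(work))    # the work that produced it
```

The point is that the log rides along with the computation, so "how was this answer derived?" is answerable after the fact without re-running anything.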

[0] Deriving the writer monad from first principles
https://williamyaoh.com/posts/2020-07-26-deriving-writer-monad.html

[1] The Absolute Best Intro to Monads for Software Engineers

On Mon, Dec 13, 2021 at 12:30 AM Tim Daly <axiomcas@gmail.com> wrote:

...(snip)...

Common Lisp has an "open compiler". That allows the ability
to deeply modify compiler behavior using compiler macros
and macros in general. CLOS takes advantage of this to add
typed behavior into the compiler in a way that ALLOWS strict
typing such as found in constructive type theory and ML.
Judgments, ala Crary, are front-and-center.

Whether you USE the discipline afforded is the real question.

Indeed, the Axiom research struggle is essentially one of how
to have a disciplined use of first-class dependent types. The
struggle raises issues of, for example, compiling a dependent
type whose argument is recursive in the compiled type. Since
the new type is first-class it can be constructed at what you
improperly call "run-time". However, it appears that the recursive
type may have to call the compiler at each recursion to generate
the next step since in some cases it cannot generate "closed code".

I am embedding proofs (in LEAN language) into the type hierarchy
so that theorems, which depend on the type hierarchy, are
correctly inherited. The compiler has to check the proofs of functions
at compile time using these. Hacking up nonsense just won't cut it.
Think of the problem of embedding LEAN proofs in ML or ML in LEAN.
(Actually, Jeremy Avigad might find that research interesting.)

So Matrix(3,3,Float) has inverses (assuming Float is a
field (cough)). The type inherits this theorem and proofs of
functions can use this. But Matrix(3,4,Integer) does not have
inverses so the proofs cannot use this. The type hierarchy has
to ensure that the proper theorems get inherited.

Making proof technology work at compile time is hard.
(Worse yet, LEAN is a moving target. Sigh.)

On Thu, Nov 25, 2021 at 9:43 AM Tim Daly <axiomcas@gmail.com> wrote:

As you know I've been re-architecting Axiom to use first class
dependent types and proving the algorithms correct. For example,
the GCD of natural numbers or the GCD of polynomials.

The idea involves "boxing up" the proof with the algorithm (aka
proof carrying code) in the ELF file (under a crypto hash so it
can't be changed).

Once the code is running on the CPU, the proof is run in parallel
on the field programmable gate array (FPGA). Intel data center
servers have CPUs with built-in FPGAs these days.

There is a bit of a disconnect, though. The GCD code is compiled
machine code but the proof is LEAN-level.

What would be ideal is if the compiler not only compiled the GCD
code to machine code, it also compiled the proof to "machine code".
That is, for each machine instruction, the FPGA proof checker
would ensure that the proof was not violated at the individual
instruction level.

What does it mean to "compile a proof to the machine code level"?

The Milawa effort (Myre14.pdf) does incremental proofs in layers.
To quote from the article [0]:

   We begin with a simple proof checker, call it A, which is short
   enough to verify by the ``social process'' of mathematics -- and
   more recently with a theorem prover for a more expressive logic.
   We then develop a series of increasingly powerful proof checkers,
   call them B, C, D, and so on. We show each of these programs only
   accepts the same formulas as A, using A to verify B, and B to verify
   C, and so on. Then, since we trust A, and A says B is trustworthy, we
   can trust B. Then, since we trust B, and B says C is trustworthy, we
   can trust C.

This gives a technique for "compiling the proof" down to the machine
code level.
Ideally, the compiler would have judgments for each step of
the compilation so that each compile step has a justification. I don't
know of any compiler that does this yet. (References welcome.)

At the machine code level, there are techniques that would allow
the FPGA proof to "step in sequence" with the executing code.
Some work has been done on using "Hoare Logic for Realistically
Modelled Machine Code" (paper attached, Myre07a.pdf) and
"Decompilation into Logic -- Improved" (Myre12a.pdf).

So the game is to construct a GCD over some type (Nats, Polys, etc.
Axiom has 22), compile the dependent type GCD to machine code.
In parallel, the proof of the code is compiled to machine code. The
pair is sent to the CPU/FPGA and, while the algorithm runs, the FPGA
ensures the proof is not violated, instruction by instruction.

(I'm ignoring machine architecture issues such as pipelining, out-of-order
execution, branch prediction, and other machine-level things to ponder.
I'm looking at the RISC-V Verilog details by various people to understand
better but it is still a "misty fog" for me.)

The result is proven code "down to the metal".

Tim

On Thu, Nov 25, 2021 at 6:05 AM Tim Daly <axiomcas@gmail.com> wrote:

The result is proven code "down to the metal".

Tim

On Sat, Nov 13, 2021 at 5:28 PM Tim Daly <axiomcas@gmail.com> wrote:

Full support for general, first-class dependent types requires
some changes to the Axiom design. That implies some language
design questions.

Given that mathematics is such a general subject with a lot of
"local" notation and ideas (witness logical judgment notation)
careful thought is needed to design a language that is able to
handle a wide range.

Normally language design is a two-level process. The language
designer creates a language and then an implementation. Various
design choices affect the final language.

There is "The Metaobject Protocol" (MOP) which encourages a
three-level process. The language designer works at a Metalevel
to design a family of languages, then the language specializations,
then the implementation. A MOP design allows the language user
to optimize the language to their problem.

A simple paper on the subject is "Metaobject Protocols".

Tim

On Mon, Oct 25, 2021 at 7:42 PM Tim Daly <axiomcas@gmail.com> wrote:

I have a separate thread of research on Self-Replicating Systems
(ref: Kinematics of Self Reproducing Machines) which led to
watching "Strange Dreams of Stranger Loops" by Will Byrd
https://www.youtube.com/watch?v=AffW-7ika0E

Will referenced a PhD Thesis by Jon Doyle,
"A Model for Deliberation, Action, and Introspection".

I also read the thesis by J.C.G. Sturdy,
"A Lisp through the Looking Glass".

Self-replication requires the ability to manipulate your own
representation in such a way that changes to that representation
will change behavior.

This leads to two thoughts in the SANE research.

First, "Declarative Representation". That is, most of the things
about the representation should be declarative rather than
procedural. Applying this idea as much as possible makes it
easier to understand and manipulate.

Second, "Explicit Call Stack".
Function calls form an implicit call stack. This can usually be
displayed in a running lisp system. However, having the call stack
explicitly available would mean that a system could "introspect"
at the first-class level.

These two ideas would make it easy, for example, to let the
system "show the work". One of the normal complaints is that
a system presents an answer but there is no way to know how
that answer was derived. These two ideas make it possible to
understand, display, and even post-answer manipulate
the intermediate steps.

Having the intermediate steps also allows proofs to be
inserted in a step-by-step fashion. This aids the effort to
have proofs run in parallel with computation at the hardware
level.

Tim
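The explicit-call-stack idea can be sketched in a few lines. This is a hypothetical Python illustration (the real system would be Common Lisp, and `traced`/`trace` are my invented names): every call records a frame in a first-class structure that survives after the answer, so the work can be displayed, inspected, or manipulated post-answer.

```python
# Hypothetical sketch: an explicit, first-class call stack.
# Each call pushes a frame (depth, call, result); the record survives
# after the answer, so the system can "show the work".

trace = []  # the explicit record: [depth, call-text, result]

def traced(fn):
    def wrapper(*args):
        # depth = number of frames still awaiting a result
        depth = len([f for f in trace if f[2] is None])
        frame = [depth, f"{fn.__name__}{args}", None]
        trace.append(frame)
        result = fn(*args)
        frame[2] = result   # fill in the result on the way back up
        return result
    return wrapper

@traced
def gcd(a, b):
    return a if b == 0 else gcd(b, a % b)

answer = gcd(48, 18)
for depth, call, result in trace:
    print("  " * depth + f"{call} = {result}")
```

Because `trace` is ordinary data rather than a hidden machine stack, a later pass could attach a proof step to each frame, which is the step-by-step proof insertion the text describes.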

On Thu, Oct 21, 2021 at 9:50 AM Tim Daly <axiomcas@gmail.com> wrote:

So the current struggle involves the categories in Axiom.

The categories and domains constructed using categories
are dependent types. When are dependent types "equal"?
Well, hummmm, that depends on the arguments to the
constructor. But in order to decide(?) equality we have to evaluate
the arguments (which themselves can be dependent types).
Indeed, we may, and in general, we must evaluate the
arguments at compile time (well, "construction time" as
there isn't really a compiler / interpreter separation anymore).

That raises the question of what "equality" means. This
is not simply a "set equality" relation. It falls into the
infinite-groupoid of homotopy type theory. In general
it appears that deciding category / domain equivalence
might force us to climb the type hierarchy.

Beyond that, there is the question of "which proof"
applies to the resulting object. Proofs depend on their
assumptions which might be different for different
constructions. As yet I have no clue how to "index"
proofs based on their assumptions, nor how to
connect these assumptions to the groupoid structure.

My brain hurts.

Tim

On Mon, Oct 18, 2021 at 2:00 AM Tim Daly <axiomcas@gmail.com> wrote:

"Birthing Computational Mathematics"

The Axiom SANE project is difficult at a very fundamental
level. The title "SANE" was chosen due to the various
words found in a thesaurus... "rational", "coherent",
"judicious" and "sound".

These are very high level, amorphous ideas. But so is
the design of SANE. Breaking away from tradition in
computer algebra, type theory, and proof assistants
is very difficult. Ideas tend to fall into standard jargon
which limits both the frame of thinking (e.g. dependent
types) and the content (e.g. notation).

Questioning both frame and content is very difficult.
It is hard to even recognize when they are accepted
"by default" rather than "by choice". What does the idea
"power tools" mean in a primitive, hand labor culture?

Christopher Alexander [0] addresses this problem in
a lot of his writing. Specifically, in his book "Notes on
the Synthesis of Form", in his chapter 5 "The Selfconscious
Process", he addresses this problem directly. This is a
"must read" book.

Unlike building design and construction, however, there
are almost no constraints to use as guides. Alexander
quotes Plato's Phaedrus:

  "First, the taking in of scattered particulars under
   one Idea, so that everyone understands what is being
   talked about ... Second, the separation of the Idea
   into parts, by dividing it at the joints, as nature
   directs, not breaking any limb in half as a bad
   carver might."

Lisp, which has been called "clay for the mind", can
build virtually anything that can be thought. The
"joints" are also "of one's choosing" so one is
both carver and "nature".

Clearly the problem is no longer "the tools".
*I* am the problem constraining the solution.
Birthing this "new thing" is slow, difficult, and
uncertain at best.

Tim

[0] Alexander, Christopher "Notes on the Synthesis
of Form" Harvard University Press 1964
ISBN 0-674-62751-2

On Sun, Oct 10, 2021 at 4:40 PM Tim Daly <axiomcas@gmail.com> wrote:

Re: writing a paper... I'm not connected to Academia

so anything I'd write would never make it into print.

"Language level parsing" is still a long way off. The talk

by Guy Steele [2] highlights some of the problems we

currently face using mathematical metanotation.

For example, a professor I know at CCNY (City College

of New York) didn't understand Platzer's "funny

fraction notation" (proof judgements) despite being

an expert in Platzer's differential equations area.

Notation matters and is not widely shared.

I spoke to Professor Black (in LTI) about using natural

language in the limited task of a human-robot cooperation

in changing a car tire. I looked at the current machine

learning efforts. They are nowhere near anything but

toy systems, taking too long to train, and are too fragile.

Instead I ended up using a combination of AIML [3]

(Artificial Intelligence Markup Language), the ALICE

Chatbot [4], Forgy's OPS5 rule based program [5],

and Fahlman's SCONE [6] knowledge base. It was

much less fragile in my limited domain problem.

I have no idea how to extend any system to deal with

even undergraduate mathematics parsing.

Nor do I have any idea how I would embed LEAN

knowledge into a SCONE database, although I

think the combination would be useful and interesting.

I do believe that, in the limited area of computational

mathematics, we are capable of building robust, proven

systems that are quite general and extensible. As you

might have guessed I've given it a lot of thought over

the years :-)

A mathematical language seems to need >6 components

1) We need some sort of a specification language, possibly

somewhat 'propositional' that introduces the assumptions

you mentioned (ref. your discussion of numbers being

abstract and ref. your discussion of relevant choice of

assumptions related to a problem).

This is starting to show up in the hardware area (e.g.

Lamport's TLC [0])

Of course, specifications relate to proving programs

and, as you recall, I got a cold reception from the

LEAN community about using LEAN for program proofs.

2) We need "scaffolding". That is, we need a theory

that can be reduced to some implementable form

that provides concept-level structure.

Axiom uses group theory for this. Axiom's "category"

structure has "Category" things like Ring. Claiming

to be a Ring brings in a lot of "Signatures" of functions

you have to implement to properly be a Ring.

Scaffolding provides a firm mathematical basis for

design. It provides a link between the concept of a

Ring and the expectations you can assume when

you claim your "Domain" "is a Ring". Category

theory might provide similar structural scaffolding

(eventually... I'm still working on that thought garden)

LEAN ought to have a textbook(s?) that structures

the world around some form of mathematics. It isn't

sufficient to say "undergraduate math" is the goal.

There needs to be some coherent organization so

people can bring ideas like Group Theory to the

organization. Which brings me to ...

3) We need "spreading". That is, we need to take

the various definitions and theorems in LEAN and

place them in their proper place in the scaffold.

For example, the Ring category needs the definitions

and theorems for a Ring included in the code for the

Ring category. Similarly, the Commutative category

needs the definitions and theorems that underlie

"commutative" included in the code.

That way, when you claim to be a "Commutative Ring"

you get both sets of definitions and theorems. That is,

the inheritance mechanism will collect up all of the

definitions and theorems and make them available

for proofs.

I am looking at LEAN's definitions and theorems with

an eye to "spreading" them into the group scaffold of

Axiom.
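The "scaffolding" and "spreading" ideas together can be sketched quickly. This is a hypothetical Python fragment (Axiom's actual categories are far richer, and the class/lemma names here are invented): each Category carries its required signatures and its attached theorems, and claiming to be a "Commutative Ring" collects both sets through ordinary inheritance.

```python
# Hypothetical sketch: categories carry signatures and "spread" theorems;
# inheritance collects everything a proof may use.

class Category:
    signatures: set = set()
    theorems: set = set()

    @classmethod
    def all_theorems(cls):
        # walk the inheritance chain, collecting the spread theorems
        out = set()
        for c in cls.__mro__:
            out |= getattr(c, "theorems", set())
        return out

class Ring(Category):
    signatures = {"+", "*", "0", "1"}          # what you must implement
    theorems = {"distributivity", "additive_inverse"}

class Commutative(Category):
    theorems = {"ab_eq_ba"}

class CommutativeRing(Ring, Commutative):
    pass

# Claiming "Commutative Ring" makes both sets of theorems available:
print(sorted(CommutativeRing.all_theorems()))
```

The design point is that a proof inside a CommutativeRing domain never has to name where `ab_eq_ba` came from; the inheritance mechanism delivered it, exactly as the text describes.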

4) We need "carriers" (Axiom calls them representations,

aka "REP"). REPs allow data structures to be defined

independent of the implementation.

For example, Axiom can construct Polynomials that

have their coefficients in various forms of representation.

You can define "dense" (all coefficients in a list),

"sparse" (only non-zero coefficients), "recursive", etc.

A "dense polynomial" and a "sparse polynomial" work

exactly the same way as far as the user is concerned.

They both implement the same set of functions. There

is only a difference of representation for efficiency and

this only affects the implementation of the functions,

not their use.
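The REP idea can be sketched as two carriers behind one interface. This is a rough Python illustration (hypothetical classes, not Axiom's actual representations): a dense polynomial stores every coefficient, a sparse one stores only the non-zero terms, and the user-level operation is identical.

```python
# Hypothetical sketch: two "REPs" (carriers), one user-visible interface.

class DensePoly:
    def __init__(self, coeffs):        # coeffs[i] is the coefficient of x^i
        self.coeffs = list(coeffs)
    def eval(self, x):
        return sum(c * x**i for i, c in enumerate(self.coeffs))

class SparsePoly:
    def __init__(self, terms):         # {exponent: coefficient}, zeros omitted
        self.terms = dict(terms)
    def eval(self, x):
        return sum(c * x**e for e, c in self.terms.items())

# x^3 + 2, represented both ways; the user sees the same behavior
dense  = DensePoly([2, 0, 0, 1])
sparse = SparsePoly({0: 2, 3: 1})
print(dense.eval(5), sparse.eval(5))
```

Only the internals differ; choosing one over the other is purely an efficiency decision, invisible at the call site.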

Axiom "got this wrong" because it didn't sufficiently

separate the REP from the "Domain". I plan to fix this.

LEAN ought to have a "data structures" subtree that

has all of the definitions and axioms for all of the

existing data structures (e.g. Red-Black trees). This

would be a good undergraduate project.

5) We need "Domains" (in Axiom speak). That is, we

need a box that holds all of the functions that implement

a "Domain". For example, a "Polynomial Domain" would

hold all of the functions for manipulating polynomials

(e.g polynomial multiplication). The "Domain" box

is a dependent type that:

  A) has an argument list of "Categories" that this "Domain"
     box inherits. Thus, the "Integer Domain" inherits
     the definitions and axioms from "Commutative".

     Functions in the "Domain" box can now assume
     and use the properties of being commutative. Proofs
     of functions in this domain can use the definitions
     and proofs about being commutative.

  B) contains an argument that specifies the "REP"
     (aka, the carrier). That way you get all of the
     functions associated with the data structure
     available for use in the implementation.

     Functions in the Domain box can use all of
     the definitions and axioms about the representation
     (e.g. NonNegativeIntegers are always positive).

  C) contains local "spread" definitions and axioms
     that can be used in function proofs.

     For example, a "Square Matrix" domain would
     have local axioms that state that the matrix is
     always square. Thus, functions in that box could
     use these additional definitions and axioms in
     function proofs.

  D) contains local state. A "Square Matrix" domain
     would be constructed as a dependent type that
     specified the size of the square (e.g. a 2x2
     matrix would have '2' as a dependent parameter).

  E) contains implementations of inherited functions.
     A "Category" could have a signature for a GCD
     function and the "Category" could have a default
     implementation. However, the "Domain" could
     have a locally more efficient implementation which
     overrides the inherited implementation.

     Axiom has about 20 GCD implementations that
     differ locally from the default in the category. They
     use properties known locally to be more efficient.

  F) contains local function signatures.
     A "Domain" gives the user more and more unique
     functions. The signatures have associated
     "pre- and post-conditions" that can be used
     as assumptions in the function proofs.

     Some of the user-available functions are only
     visible if the dependent type would allow them
     to exist. For example, a general Matrix domain
     would have fewer user functions than a Square
     Matrix domain.

     In addition, local "helper" functions need their
     own signatures that are not user visible.

  G) the function implementation for each signature.
     This is obviously where all the magic happens.

  H) the proof of each function.
     This is where I'm using LEAN.

     Every function has a proof. That proof can use
     all of the definitions and axioms inherited from
     the "Category", "Representation", the "Domain
     Local", and the signature pre- and post-
     conditions.

  I) literature links. Algorithms must contain a link
     to at least one literature reference. Of course,
     since everything I do is a Literate Program
     this is obviously required. Knuth said so :-)

LEAN ought to have "books" or "pamphlets" that

bring together all of this information for a domain

such as Square Matrices. That way a user can

find all of the related ideas, available functions,

and their corresponding proofs in one place.
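A toy sketch of the "Domain" box as a dependent type may help. This is a hypothetical Python fragment (invented names throughout, and it only gestures at items A through D above): the categories, the REP, and the local state (the size) all arrive as construction-time arguments, and the local "always square" axiom is enforced at construction.

```python
# Hypothetical sketch: a Domain constructed as a dependent type.

def SquareMatrix(n, coeff_ring):
    """Build a Square Matrix domain dependent on the size n."""
    class SM:
        size = n                      # D) local state: the dependent parameter
        categories = {coeff_ring}     # A) inherited categories (simplified)

        def __init__(self, rep):      # B) the REP: row-major tuple of entries
            assert len(rep) == n * n  # C) local axiom: always square
            self.rep = tuple(rep)

        def __mul__(self, other):     # F/G) a locally implemented function
            out = []
            for i in range(n):
                for j in range(n):
                    out.append(sum(self.rep[i*n + k] * other.rep[k*n + j]
                                   for k in range(n)))
            return SM(out)
    return SM

M2 = SquareMatrix(2, "CommutativeRing")   # construct the 2x2 domain
a = M2([1, 2, 3, 4])
b = M2([5, 6, 7, 8])
print((a * b).rep)
```

Each call to `SquareMatrix` manufactures a distinct type, which is the dependent-type flavor the text describes: SquareMatrix(2, R) and SquareMatrix(3, R) are different boxes with different local state.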

6) User level presentation.

    This is where the systems can differ significantly.

    Axiom and LEAN both have GCD but they use
    that for different purposes.

    I'm trying to connect LEAN's GCD and Axiom's GCD
    so there is a "computational mathematics" idea that
    allows the user to connect proofs and implementations.

7) Trust

Unlike everything else, computational mathematics

can have proven code that gives various guarantees.

I have been working on this aspect for a while.

I refer to it as trust "down to the metal". The idea is

that a proof of the GCD function and the implementation

of the GCD function get packaged into the ELF format.

(proof carrying code). When the GCD algorithm executes

on the CPU, the GCD proof is run through the LEAN

proof checker on an FPGA in parallel.

(I just recently got a PYNQ Xilinx board [1] with a CPU

and FPGA together. I'm trying to implement the LEAN

proof checker on the FPGA).
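As a tiny illustration of the kind of proof artifact that would ship alongside the compiled code, here is a Lean sketch (Lean 4 syntax; `Nat.gcd` and `Nat.gcd_zero_right` are core library names, while `gcdStep` is my hypothetical wrapper):

```lean
-- Hypothetical sketch: the algorithm and a proof about it travel
-- together (proof-carrying code).
def gcdStep (a b : Nat) : Nat := Nat.gcd a b

-- A proof obligation the checker would verify alongside execution:
theorem gcdStep_zero_right (a : Nat) : gcdStep a 0 = a :=
  Nat.gcd_zero_right a
```

In the scheme described above, the compiled `gcdStep` would run on the CPU while a checker replays obligations like this one in parallel.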

We are on the cusp of a revolution in computational

mathematics. But the two pillars (proof and computer

algebra) need to get to know each other.

Tim

[0] Lamport, Leslie "Chapter on TLA+"

in "Software Specification Methods"

https://www.springer.com/gp/book/9781852333539

(I no longer have CMU library access or I'd send you

the book PDF)

[1] https://www.tul.com.tw/productspynq-z2.html

[2] https://www.youtube.com/watch?v=dCuZkaaou0Q

[3] "ARTIFICIAL INTELLIGENCE MARKUP LANGUAGE"

https://arxiv.org/pdf/1307.3091.pdf

[4] ALICE Chatbot

http://www.scielo.org.mx/pdf/cys/v19n4/1405-5546-cys-19-04-00625.pdf

[5] OPS5 User Manual

https://kilthub.cmu.edu/articles/journal_contribution/OPS5_user_s_manual/6608090/1

[6] Scott Fahlman "SCONE"

http://www.cs.cmu.edu/~sef/scone/

On 9/27/21, Tim Daly <axiomcas@gmail.com> wrote:

> I have tried to maintain a list of names of people who have

> helped Axiom, going all the way back to the pre-Scratchpad

> days. The names are listed at the beginning of each book.

> I also maintain a bibliography of publications I've read or

> that have had an indirect influence on Axiom.

>

> Credit is "the coin of the realm". It is easy to share and wrong

> to ignore. It is especially damaging to those in Academia who

> are affected by credit and citations in publications.

>

> Apparently I'm not the only person who feels that way. The ACM

> Turing award seems to have ignored a lot of work:

>

> Scientific Integrity, the 2021 Turing Lecture, and the 2018 Turing

> Award for Deep Learning

> https://people.idsia.ch/~juergen/scientific-integrity-turing-award-deep-learning.html

>

> I worked on an AI problem at IBM Research called Ketazolam.

> (https://en.wikipedia.org/wiki/Ketazolam). The idea was to recognize

> and associate 3D chemical drawings with their drug counterparts.

> I used Rumelhart and McClelland's books. These books contained

> quite a few ideas that seem to be "new and innovative" among the

> machine learning crowd... but the books are from 1987. I don't believe

> I've seen these books mentioned in any recent bibliography.

> https://mitpress.mit.edu/books/parallel-distributed-processing-volume-1

>

>

>

>

> On 9/27/21, Tim Daly <axiomcas@gmail.com> wrote:

>> Greg Wilson asked "How Reliable is Scientific Software?"

>> https://ne= verworkintheory.org/2021/09/25/how-reliable-is-scientific-software.html=

>>

>> which is a really interesting read. For example:

>>

>> [Hatton1994] is now a quarter of a century old, but its conclusions

>> are still fresh. The authors fed the same data into nine commercial

>> geophysical software packages and compared the results; they found

>> that, "numerical disagreement grows at around the rate of 1% in

>> average absolute difference per 4000 lines of implemented code, and,

>> even worse, the nature of the disagreement is nonrandom" (i.e., the

>> authors of different packages make similar mistakes).

>>

>>

>> On 9/26/21, Tim Daly <axiomcas@gmail.com> wrote:

>>> I should note that the latest board I've just unboxed

>>> (a PYNQ-Z2) is a Zynq Z-7020 chip from Xilinx (AMD).

>>>

>>> What makes it interesting is that it contains 2 hard

>>> core processors and an FPGA, connected by 9 paths

>>> for communication. The processors can be run

>>> independently so there is the possibility of a parallel

>>> version of some Axiom algorithms (assuming I had

>>> the time, which I don't).

>>>

>>> Previously either the hard (physical) processor was

>>> separate from the FPGA with minimal communication

>>> or the soft core processor had to be created in the FPGA

>>> and was much slower.

>>>

>>> Now the two have been combined in a single chip.

>>> That means that my effort to run a proof checker on

>>> the FPGA and the algorithm on the CPU just got to

>>> the point where coordination is much easier.

>>>

>>> Now all I have to do is figure out how to program this

>>> beast.

>>>

>>> There is no such thing as a simple job.

>>>

>>> Tim

>>>

>>>

>>> On 9/26/21, Tim Daly <axiomcas@gmail.com> wrote:

>>>> I'm familiar with most of the traditional approaches

>>>> like Theorema. The bibliography contains most of the

>>>> more interesting sources. [0]

>>>>

>>>> There is a difference between traditional approaches to

>>>> connecting computer algebra and proofs and my approach.

>>>>

>>>> Proving an algorithm, like the GCD, in Axiom is hard.

>>>> There are many GCDs (e.g. NNI vs POLY) and there

>>>> are theorems and proofs passed at runtime in the

>>>> arguments of the newly constructed domains. This

>>>> involves a lot of dependent type theory and issues of

>>>> compile time / runtime argument evaluation. The issues

>>>> that arise are difficult and still being debated in the type

>>>> theory community.

>>>>

>>>> I am putting the definitions, theorems, and proofs (DTP)

>>>> directly into the category/domain hierarchy. Each category

>>>> will have the DTP specific to it. That way a commutative

>>>> domain will inherit a commutative theorem and a

>>>> non-commutative domain will not.

>>>>

>>>> Each domain will have additional DTPs associated with

>>>> the domain (e.g. NNI vs Integer) as well as any DTPs

>>>> it inherits from the category hierarchy. Functions in the

>>>> domain will have associated DTPs.

>>>>

>>>> A function to be proven will then inherit all of the relevant

>>>> DTPs. The proof will be attached to the function and

>>>> both will be sent to the hardware (proof-carrying code).
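The inheritance scheme described above can be sketched in miniature. What follows is a hypothetical Python illustration, not Axiom's actual representation: theorems attach to category classes, and a domain collects exactly the theorems of the categories it inherits from, so a commutative domain sees a commutativity theorem while a non-commutative one does not. All class and attribute names here are invented.

```python
# Sketch of attaching definitions/theorems/proofs (DTPs) to a
# category hierarchy so that domains inherit only the theorems
# that apply to them. Illustrative only; not Axiom code.

class Category:
    theorems = []

    @classmethod
    def inherited_theorems(cls):
        seen = []
        for klass in cls.__mro__:            # walk the category hierarchy
            for thm in getattr(klass, "theorems", []):
                if thm not in seen:
                    seen.append(thm)
        return seen

class Monoid(Category):
    theorems = ["associativity"]

class CommutativeMonoid(Monoid):
    theorems = ["commutativity"]

class Integer(CommutativeMonoid):            # a commutative domain
    pass

class SquareMatrix(Monoid):                  # a non-commutative domain
    pass

print(Integer.inherited_theorems())          # ['commutativity', 'associativity']
print(SquareMatrix.inherited_theorems())     # ['associativity']
```

The point of the sketch is only that placement in the hierarchy, not the function itself, decides which theorems a proof may use.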

>>>> The proof checker, running on a field programmable

>>>> gate array (FPGA), will be checked at runtime in

>>>> parallel with the algorithm running on the CPU

>>>> (aka "trust down to the metal"). (Note that Intel

>>>> and AMD have built CPU/FPGA combined chips,

>>>> currently only available in the cloud.)

>>>>

>>>>

>>>>

>>>> I am (slowly) making progress on the research.

>>>>

>>>> I have the hardware and nearly have the proof

>>>> checker from LEAN running on my FPGA.

>>>>

>>>> I'm in the process of spreading the DTPs from

>>>> LEAN across the category/domain hierarchy.

>>>>

>>>> The current Axiom build extracts all of the functions

>>>> but does not yet have the DTPs.

>>>>

>>>> I have to restructure the system, including the compiler

>>>> and interpreter to parse and inherit the DTPs. I

>>>> have some of that code but only some of the code

>>>> has been pushed to the repository (volume 15) but

>>>> that is rather trivial, out of date, and incomplete.

>>>>

>>>> I'm clearly not smart enough to prove the Risch

>>>> algorithm and its associated machinery but the needed

>>>> definitions and theorems will be available to someone

>>>> who wants to try.

>>>>

>>>> [0] https://github.com/daly/PDFS/blob/master/bookvolbib.pdf

>>>>

>>>>

>>>> On 8/19/21, Tim Daly <axiomcas@gmail.com> wrote:

>>>>> ==========================================

>>>>>

>>>>> REVIEW (Axiom on WSL2 Windows)

>>>>>

>>>>>

>>>>> So the steps to run Axiom from a Windows desktop

>>>>>

>>>>> 1 Windows) install XMing on Windows for X11 server

>>>>>

>>>>> http://www.straightrunning.com/XmingNotes/

>>>>>

>>>>> 2 WSL2) Install Axiom in WSL2

>>>>>

>>>>> sudo apt install axiom

>>>>>

>>>>> 3 WSL2) modify /usr/bin/axiom to fix the bug:

>>>>> (someone changed the axiom startup script.

>>>>> It won't work on WSL2. I don't know who or

>>>>> how to get it fixed).

>>>>>

>>>>> sudo emacs /usr/bin/axiom

>>>>>

>>>>> (split the line into 3 and add quote marks)

>>>>>

>>>>> export SPADDEFAULT=/usr/local/axiom/mnt/linux

>>>>> export AXIOM=/usr/lib/axiom-20170501

>>>>> export "PATH=/usr/lib/axiom-20170501/bin:$PATH"

>>>>>

>>>>> 4 WSL2) create a .axiom.input file to include startup cmds:

>>>>>

>>>>> emacs .axiom.input

>>>>>

>>>>> )cd "/mnt/c/yourpath"

>>>>> )sys pwd

>>>>>

>>>>> 5 WSL2) create a "myaxiom" command that sets the

>>>>>      DISPLAY variable and starts axiom

>>>>> emacs myaxiom

>>>>>

>>>>> #! /bin/bash

>>>>> export DISPLAY=:0.0

>>>>> axiom

>>>>>

>>>>> 6 WSL2) put it in the /usr/bin directory

>>>>>

>>>>> chmod +x myaxiom

>>>>> sudo cp myaxiom /usr/bin/myaxiom

>>>>>

>>>>> 7 WINDOWS) start the X11 server

>>>>>

>>>>> (XMing XLaunch Icon on your desktop)

>>>>>

>>>>> 8 WINDOWS) run myaxiom from PowerShell

>>>>> (this should start axiom with graphics available)

>>>>>

>>>>> wsl myaxiom

>>>>>

>>>>> 8 WINDOWS) make a PowerShell desktop

>>>>>

>>>>> https://superuser.com/questions/886951/run-powershell-script-when-you-open-powershell

>>>>>

>>>>> Tim

>>>>>

>>>>> On 8/13/21, Tim Daly <axiomcas@gmail.com> wrote:

>>>>>> A great deal of thought is directed toward making the SANE version

>>>>>> of Axiom as flexible as possible, decoupling mechanism from theory.

>>>>>>

>>>>>> An interesting publication by Brian Cantwell Smith [0], "Reflection

>>>>>> and Semantics in LISP" seems to contain interesting ideas related

>>>>>> to our goal. Of particular interest is the ability to reason about

>>>>>> and

>>>>>> perform self-referential manipulations. In a dependently-typed

>>>>>> system it seems interesting to be able to "adapt" code to handle

>>>>>> run-time computed arguments to dependent functions. The abstract:

>>>>>>

>>>>>>    "We show how a computational system can be constructed to

>>>>>> "reason", effectively and consequentially, about its own

>>>>>> inferential processes. The analysis proceeds in two parts.

>>>>>> First, we consider the general question of computational

>>>>>> semantics, rejecting traditional approaches, and arguing that

>>>>>> the declarative and procedural aspects of computational symbols

>>>>>> (what they stand for, and what behaviour they engender) should

>>>>>> be analysed independently, in order that they may be coherently

>>>>>> related. Second, we investigate self-referential behavior in

>>>>>> computational processes, and show how to embed an effective

>>>>>> procedural model of a computational calculus within that

>>>>>> calculus (a model not unlike a meta-circular interpreter, but

>>>>>> connected to the fundamental operations of the machine in such

>>>>>> a way as to provide, at any point in a computation, fully

>>>>>> articulated descriptions of the state of that computation, for

>>>>>> inspection and possible modification). In terms of the theories

>>>>>> that result from these investigations, we present a general

>>>>>> architecture for procedurally reflective processes, able to

>>>>>> shift smoothly between dealing with a given subject domain, and

>>>>>> dealing with their own reasoning processes over that domain.

>>>>>>

>>>>>>    An instance of the general solution is worked out in the

>>>>>> context of an applicative language. Specifically, we present

>>>>>> three successive dialects of LISP: 1-LISP, a distillation of

>>>>>> current practice, for comparison purposes; 2-LISP, a dialect

>>>>>> constructed in terms of our rationalised semantics, in which

>>>>>> the concept of evaluation is rejected in favour of independent

>>>>>> notions of simplification and reference, and in which the

>>>>>> respective categories of notation, structure, semantics, and

>>>>>> behaviour are strictly aligned; and 3-LISP, an extension of

>>>>>> 2-LISP endowed with reflective powers."

>>>>>>

>>>>>> Axiom SANE builds dependent types on the fly. The ability to access

>>>>>> both the reflection

>>>>>> of the tower of algebra and the reflection of the tower of proofs at

>>>>>> the time of construction

>>>>>> makes the construction of a new domain or specific algorithm easier

>>>>>> and more general.

>>>>>>

>>>>>> This is of particular interest because one of the efforts is to build

>>>>>> "all the way down to the

>>>>>> metal". If each layer is constructed on top of previous proven layers

>>>>>> and the new layer

>>>>>> can "reach below" to lower layers then the tower of layers can be

>>>>>> built without duplication.

>>>>>>

>>>>>> Tim

>>>>>>

>>>>>> [0] Smith, Brian Cantwell, "Reflection and Semantics in LISP",

>>>>>> POPL '84: Proceedings of the 11th ACM SIGACT-SIGPLAN

>>>>>> Symposium on Principles of Programming Languages, January 1984,

>>>>>> Pages 23–35. https://doi.org/10.1145/800017.800513

>>>>>>

>>>>>> On 6/29/21, Tim Daly <axiomcas@gmail.com> wrote:

>>>>>>> Having spent time playing with hardware it is perfectly clear that

>>>>>>> future computational mathematics efforts need to adapt to using

>>>>>>> parallel processing.

>>>>>>>

>>>>>>> I've spent a fair bit of time thinking about structuring Axiom to

>>>>>>> be parallel. Most past efforts have tried to focus on making a

>>>>>>> particular algorithm parallel, such as a matrix multiply.

>>>>>>>

>>>>>>> But I think that it might be more effective to make each domain

>>>>>>> run in parallel. A computation crosses multiple domains so a

>>>>>>> particular computation could involve multiple parallel copies.

>>>>>>>

>>>>>>> For example, computing the Cylindrical Algebraic Decomposition

>>>>>>> could recursively decompose the plane. Indeed, any tree-recursive

>>>>>>> algorithm could be run in parallel "in the large" by creating new

>>>>>>> running copies of the domain for each sub-problem.

>>>>>>>

>>>>>>> So the question becomes, how does one manage this?

>>>>>>>

>>>>>>> A similar problem occurs in robotics where one could have multiple

>>>>>>> wheels, arms, propellers, etc. that need to act independently but

>>>>>>> in coordination.

>>>>>>>

>>>>>>> The robot solution uses ROS2. The three ideas are ROSCORE,

>>>>>>> TOPICS with publish/subscribe, and SERVICES with request/response.

>>>>>>> These are communication paths defined between processes.

>>>>>>>

>>>>>>> ROS2 has a "roscore" which is basically a phonebook of "topics".

>>>>>>> Any process can create or look up the current active topics. eg:

>>>>>>>

>>>>>>>    rosnode list

>>>>>>>

>>>>>>> TOPICS:

>>>>>>>

>>>>>>> Any process can PUBLISH a topic (which is basically a typed data

>>>>>>> structure), e.g the topic /hw with the String data "Hello World".

>>>>>>> eg:

>>>>>>>

>>>>>>>    rostopic pub /hw std_msgs/String "Hello, World"

>>>>>>>

>>>>>>> Any process can SUBSCRIBE to a topic, such as /hw, and get a

>>>>>>> copy of the data. eg:

>>>>>>>

>>>>>>>    rostopic echo /hw   ==> "Hello, World"

>>>>>>>

>>>>>>> Publishers talk, subscribers listen.

>>>>>>>

>>>>>>>

>>>>>>> SERVICES:

>>>>>>>

>>>>>>> Any process can make a REQUEST of a SERVICE and get a RESPONSE.

>>>>>>> This is basically a remote function call.
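The two patterns just described can be sketched in a few lines: TOPICS with publish/subscribe and SERVICES with request/response. This is an in-process toy in Python, not the ROS2 API; the `Core` class and its method names are invented for illustration.

```python
import math

# Minimal in-process sketch of the two ROS2 communication patterns
# described above: TOPICS (publish/subscribe) and SERVICES
# (request/response). Names are illustrative, not the ROS API.

class Core:
    """Plays the role of 'roscore': a phonebook of topics and services."""
    def __init__(self):
        self.topics = {}    # topic name -> list of subscriber callbacks
        self.services = {}  # service name -> handler function

    # TOPICS: publishers talk, subscribers listen.
    def subscribe(self, topic, callback):
        self.topics.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        for callback in self.topics.get(topic, []):
            callback(message)

    # SERVICES: a request gets a response (a remote function call).
    def provide(self, service, handler):
        self.services[service] = handler

    def request(self, service, *args):
        return self.services[service](*args)

core = Core()

# Any process can SUBSCRIBE to a topic and get a copy of the data.
received = []
core.subscribe("/hw", received.append)

# Any process can PUBLISH a message on a topic.
core.publish("/hw", "Hello, World")

# A domain could expose each of its functions as a service.
core.provide("/nni/gcd", lambda a, b: math.gcd(a, b))

print(received)                         # ['Hello, World']
print(core.request("/nni/gcd", 12, 8))  # 4
```

In a real distributed setting the callbacks and handlers would cross process boundaries, but the phonebook structure is the same.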

>>>>>>>

>>>>>>>

>>>>>>>

>>>>>>> Axiom in parallel?

>>>>>>>

>>>>>>> So domains could run, each in its own process. It could provide

>>>>>>> services, one for each function. Any other process could request

>>>>>>> a computation and get the result as a response. Domains could

>>>>>>> request services from other domains, either waiting for responses

>>>>>>> or continuing while the response is being computed.

>>>>>>>

>>>>>>> The output could be sent anywhere, to a terminal, to a browser,

>>>>>>> to a network, or to another process using the publish/subscribe

>>>>>>> protocol, potentially all at the same time since there can be many

>>>>>>> subscribers to a topic.

>>>>>>>

>>>>>>> Available domains could be dynamically added by announcing

>>>>>>> themselves as new "topics" and could be dynamically looked up

>>>>>>> at runtime.

>>>>>>>

>>>>>>> This structure allows function-level / domain-level parallelism.

>>>>>>> It is very effective in the robot world and I think it might be a

>>>>>>> good structuring mechanism to allow computational mathematics

>>>>>>> to take advantage of multiple processors in a disciplined fashion.

>>>>>>>

>>>>>>> Axiom has a thousand domains and each could run on its own core.

>>>>>>>

>>>>>>> In addition, notice that each domain is independent of the others.

>>>>>>> So if we want to use BLAS Fortran code, it could just be another

>>>>>>> service node. In fact, any "foreign function" could transparently

>>>>>>> cooperate in a distributed Axiom.

>>>>>>>

>>>>>>> Another key feature is that proofs can be "by node".

>>>>>>>

>>>>>>> Tim

>>>>>>>

>>>>>>>

>>>>>>>

>>>>>>>

>>>>>>> On 6/5/21, Tim Daly <axiomcas@gmail.com> wrote:

>>>>>>>> Axiom is based on first-class dependent types. Deciding when

>>>>>>>> two types are equivalent may involve computation. See

>>>>>>>> Christiansen, David Thrane "Checking Dependent Types with

>>>>>>>> Normalization by Evaluation" (2019)

>>>>>>>>

>>>>>>>> This puts an interesting constraint on building types. The

>>>>>>>> constructed type has to export a function to decide if a

>>>>>>>> given type is "equivalent" to itself.

>>>>>>>>

>>>>>>>> The notion of "equivalence" might involve category ideas

>>>>>>>> of natural transformation and univalence. Sigh.

>>>>>>>>

>>>>>>>> That's an interesting design point.
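As a toy illustration of the design point above (invented Python, not Axiom code): a constructed type exports an equivalence test, and that test may have to compute, here by normalizing a size expression before comparing.

```python
# Sketch: each constructed type exports a function deciding whether
# another type is "equivalent" to it, and that decision may require
# computation (normalization by evaluation, in miniature).
# Purely illustrative; the class and names are hypothetical.

class VectorType:
    def __init__(self, element, size_expr):
        self.element = element      # element type, e.g. "Integer"
        self.size_expr = size_expr  # a thunk computing the size

    def normalize(self):
        # Run the computation to put the type in a comparable
        # normal form.
        return (self.element, self.size_expr())

    def equivalent(self, other):
        return self.normalize() == other.normalize()

# Vector(Integer, 2 + 2) and Vector(Integer, 4) differ syntactically
# but are equivalent after computation.
a = VectorType("Integer", lambda: 2 + 2)
b = VectorType("Integer", lambda: 4)
print(a.equivalent(b))  # True
```

The real question, which the "Sigh." acknowledges, is what counts as a normal form once the types themselves are first-class values.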

>>>>>>>>

>>>>>>>> Tim

>>>>>>>>

>>>>>>>>

>>>>>>>> On 5/5/21, Tim Daly <axiomcas@gmail.com> wrote:

>>>>>>>>> It is interesting that a programmer's eyes and expectations adapt

>>>>>>>>> to the tools they use. For instance, I use emacs and expect to

>>>>>>>>> work directly in files and multiple buffers. When I try to use one

>>>>>>>>> of the many IDE tools I find they tend to "get in the way". I

>>>>>>>>> already

>>>>>>>>> know or can quickly find whatever they try to tell me. If you use

>>>>>>>>> an

>>>>>>>>> IDE you probably find emacs "too sparse" for programming.

>>>>>>>>>

>>>>>>>>> Recently I've been working in a sparse programming environment.

>>>>>>>>> I'm exploring the question of running a proof checker in an FPGA.

>>>>>>>>> The FPGA development tools are painful at best and not intuitive

>>>>>>>>> since you SEEM to be programming but you're actually describing

>>>>>>>>> hardware gates, connections, and timing. This is an environment

>>>>>>>>> where everything happens all-at-once and all-the-time (like the

>>>>>>>>> circuits in your computer). It is the "assembly language of

>>>>>>>>> circuits".

>>>>>>>>> Naturally, my eyes have adapted to this rather raw level.

>>>>>>>>>

>>>>>>>>> That said, I'm normally doing literate programming all the time.

>>>>>>>>> My typical file is a document which is a mixture of latex and

>>>>>>>>> lisp.

>>>>>>>>> It is something of a shock to return to that world. It is clear

>>>>>>>>> why

>>>>>>>>> people who program in Python find lisp to be a "sea of parens".

>>>>>>>>> Yet as a lisp programmer, I don't even see the parens, just code.

>>>>>>>>>

>>>>>>>>> It takes a few minutes in a literate document to adapt vision to

>>>>>>>>> see the latex / lisp combination as natural. The latex markup,

>>>>>>>>> like the lisp parens, eventually just disappears. What remains

>>>>>>>>> is just lisp and natural language text.

>>>>>>>>>

>>>>>>>>> This seems painful at first but eyes quickly adapt. The upside

>>>>>>>>> is that there is always a "finished" document that describes the

>>>>>>>>> state of the code. The overhead of writing a paragraph to

>>>>>>>>> describe a new function or change a paragraph to describe the

>>>>>>>>> changed function is very small.

>>>>>>>>>

>>>>>>>>> Using a Makefile I latex the document to generate a current PDF

>>>>>>>>> and then I extract, load, and execute the code. This loop catches

>>>>>>>>> errors in both the latex and the source code. Keeping an open file

>>>>>>>>> in

>>>>>>>>> my pdf viewer shows all of the changes in the document after every

>>>>>>>>> run of make. That way I can edit the book as easily as the code.

>>>>>>>>>

>>>>>>>>> Ultimately I find that writing the book while writing the code is

>>>>>>>>> more productive. I don't have to remember why I wrote something

>>>>>>>>> since the explanation is already there.

>>>>>>>>>

>>>>>>>>> We all have our own way of programming and our own tools.

>>>>>>>>> But I find literate programming to be a real advance over IDE

>>>>>>>>> style programming and "raw code" programming.

>>>>>>>>>

>>>>>>>>> Tim

>>>>>>>>>

>>>>>>>>>

>>>>>>>>> On 2/27/21, Tim Daly <axiomcas@gmail.com> wrote= :

>>>>>>>>>> The systems I use have the interesting property of

>>>>>>>>>> "Living within the compiler".

>>>>>>>>>>

>>>>>>>>>> Lisp, Forth, Emacs, and other systems that present themselves

>>>>>>>>>> through the Read-Eval-Print-Loop (REPL) allow the

>>>>>>>>>> ability to deeply interact with the system, shaping it to your

>>>>>>>>>> need.

>>>>>>>>>>

>>>>>>>>>> My current thread of study is software architecture. See

>>>>>>>>>> https://www.youtube.com/watch?v=W2hagw1VhhI&feature=youtu.be

>>>>>>>>>> and https://www.georgefairbanks.com/videos/

>>>>>>>>>>

>>>>>>>>>> My current thinking on SANE involves the ability to

>>>>>>>>>> dynamically define categories, representations, and functions

>>>>>>>>>> along with "composition functions" that permit choosing a

>>>>>>>>>> combination at the time of use.

>>>>>>>>>>

>>>>>>>>>> You might want a domain for handling polynomials. There are

>>>>>>>>>> a lot of choices, depending on your use case. You might want

>>>>>>>>>> different representations. For example, you might want dense,

>>>>>>>>>> sparse, recursive, or "machine compatible fixnums" (e.g. to

>>>>>>>>>> interface with C code). If these don't exist it ought to be

>>>>>>>>>> possible

>>>>>>>>>> to create them. Such "lego-like" building blocks require careful

>>>>>>>>>> thought about creating "fully factored" objects.

>>>>>>>>>>

>>>>>>>>>> Given that goal, the traditional barrier of "compiler" vs

>>>>>>>>>> "interpreter"

>>>>>>>>>> does not seem useful. It is better to "live within the compiler"

>>>>>>>>>> which

>>>>>>>>>> gives the ability to define new things "on the fly".

>>>>>>>>>>

>>>>>>>>>> Of course, the SANE compiler is going to want an associated

>>>>>>>>>> proof of the functions you create along with the other parts

>>>>>>>>>> such as its category hierarchy and representation properties.

>>>>>>>>>>

>>>>>>>>>> There is no such thing as a simple= job. :-)

>>>>>>>>>>

>>>>>>>>>> Tim

>>>>>>>>>>

>>>>>>>>>>

>>>>>>>>>> On 2/18/21, Tim Daly <axiomcas@gmail.com>= wrote:

>>>>>>>>>>> The Axiom SANE compiler / interpreter has a few design points.

>>>>>>>>>>>

>>>>>>>>>>> 1) It needs to mix interpreted and compiled code in the same

>>>>>>>>>>> function.

>>>>>>>>>>> SANE allows dynamic construction of code as well as dynamic type

>>>>>>>>>>> construction at runtime. Both of these can occur in a runtime

>>>>>>>>>>> object.

>>>>>>>>>>> So there is potentially a mixture of interpreted and compiled

>>>>>>>>>>> code.

>>>>>>>>>>>

>>>>>>>>>>> 2) It needs to perform type resolution at compile time without

>>>>>>>>>>> overhead

>>>>>>>>>>> where possible. Since this is not always possible there needs to

>>>>>>>>>>> be

>>>>>>>>>>> a "prefix thunk" that will perform the resolution. Trivially,

>>>>>>>>>>> for

>>>>>>>>>>> example,

>>>>>>>>>>> if we have a + function we need to type-resolve the arguments.

>>>>>>>>>>>

>>>>>>>>>>> However, if we can prove at compile time that the types are both

>>>>>>>>>>> bounded-NNI and the result is bounded-NNI (i.e. fixnum in lisp)

>>>>>>>>>>> then we can inline a call to + at runtime. If not, we might have

>>>>>>>>>>> + applied to NNI and POLY(FLOAT), which requires a thunk to

>>>>>>>>>>> resolve types. The thunk could even "specialize and compile"

>>>>>>>>>>> the code before executing it.

>>>>>>>>>>> It turns out that the Forth implementation of

>>>>>>>>>>> "threaded-interpreted"

>>>>>>>>>>> languages model provides an efficient and effective way to do

>>>>>>>>>>> this.[0]

>>>>>>>>>>> Type resolution can be "inserted" in intermediate thunks.

>>>>>>>>>>> The model also supports dynamic overloading and tail recursion.

>>>>>>>>>>>

>>>>>>>>>>> Combining high-level CLOS code with low-level threading gives an

>>>>>>>>>>> easy to understand and robust design.
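The "prefix thunk" idea can be sketched as follows. This is a hypothetical Python illustration, not the SANE compiler: a call whose argument types cannot be resolved at compile time goes through a thunk that resolves them at runtime and caches a specialized implementation for later calls. All names here are invented.

```python
# Sketch of the "prefix thunk" idea above: type resolution is
# "inserted" in an intermediate thunk, which specializes once per
# argument-type combination and reuses the result. Illustrative only.

def plus_fixnum(a, b):      # the fast path the compiler could inline
    return a + b

def plus_general(a, b):     # fallback, e.g. NNI + POLY(FLOAT)
    return a + b

specializations = {}        # (type, type) -> resolved implementation

def plus_thunk(a, b):
    key = (type(a), type(b))
    impl = specializations.get(key)
    if impl is None:
        # Resolve types at runtime; "specialize and compile" would
        # happen here in a real system.
        impl = plus_fixnum if key == (int, int) else plus_general
        specializations[key] = impl
    return impl(a, b)

print(plus_thunk(2, 3))     # 5, via the fixnum path
print(plus_thunk(2.5, 1))   # 3.5, via the general path
```

After the first call for a given type pair, the thunk's only overhead is a dictionary lookup, which is the property the threaded-code model exploits.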

>>>>>>>>>>>

>>>>>>>>>>> Tim

>>>>>>>>>>>

>>>>>>>>>>> [0] Loeliger, R.G. "Threaded Interpretive Languages" (1981)

>>>>>>>>>>> ISBN 0-07-038360-X

>>>>>>>>>>>

>>>>>>>>>>>

>>>>>>>>>>>

>>>>>>>>>>>

>>>>>>>>>>> On 2/5/21, Tim Daly <axiomcas@gmail.com>= ; wrote:

>>>>>>>>>>>> I've worked hard to make Axiom depend on almost no other

>>>>>>>>>>>> tools so that it would not get caught by "code rot" of

>>>>>>>>>>>> libraries.

>>>>>>>>>>>>

>>>>>>>>>>>> However, I'm also trying to make the new SANE version much

>>>>>>>>>>>> easier to understand and debug. To that end I've been

>>>>>>>>>>>> experimenting

>>>>>>>>>>>> with some ideas.

>>>>>>>>>>>>

>>>>>>>>>>>> It should be possible to view source code, of course. But the

>>>>>>>>>>>> source

>>>>>>>>>>>> code is not the only, nor possibly the best, representation of

>>>>>>>>>>>> the

>>>>>>>>>>>> ideas.

>>>>>>>>>>>> In particular, source code gets compiled into data structures.

>>>>>>>>>>>> In

>>>>>>>>>>>> Axiom

>>>>>>>>>>>> these data structures really are a graph of related structures.

>>>>>>>>>>>>

>>>>>>>>>>>> For example, looking at the gcd function from NNI, there is the

>>>>>>>>>>>> representation of the gcd function itself. But there is also a

>>>>>>>>>>>> structure

>>>>>>>>>>>> that is the REP (and, in the new system, is separate from the

>>>>>>>>>>>> domain).

>>>>>>>>>>>>

>>>>>>>>>>>> Further, there are associated specification and proof

>>>>>>>>>>>> structures.

>>>>>>>>>>>> Even

>>>>>>>>>>>> further, the domain inherits the category structures, and from

>>>>>>>>>>>> those

>>>>>>>>>>>> it

>>>>>>>>>>>> inherits logical axioms and definitions through the proof

>>>>>>>>>>>> structure.

>>>>>>>>>>>>

>>>>>>>>>>>> Clearly the gcd function is a node in a much larger graph

>>>>>>>>>>>> structure.

>>>>>>>>>>>>

>>>>>>>>>>>> When trying to decide why code won't compile it would be useful

>>>>>>>>>>>> to

>>>>>>>>>>>> be able to see and walk these structures. I've thought about

>>>>>>>>>>>> using

>>>>>>>>>>>> the

>>>>>>>>>>>> browser but browsers are too weak. Either everything has to be

>>>>>>>>>>>> "in

>>>>>>>>>>>> a

>>>>>>>>>>>> single tab to show the graph" or "the nodes of the graph are in

>>>>>>>>>>>> different

>>>>>>>>>>>> tabs". Plus, constructing dynamic graphs that change as the

>>>>>>>>>>>> software

>>>>>>>>>>>> changes (e.g. by loading a new spad file or creating a new

>>>>>>>>>>>> function)

>>>>>>>>>>>> represents the huge problem of keeping the browser "in sync

>>>>>>>>>>>> with

>>>>>>>>>>>> the

>>>>>>>>>>>> Axiom workspace". So something more dynamic and embedded is

>>>>>>>>>>>> needed.

>>>>>>>>>>>>

>>>>>>>>>>>> Axiom source gets compiled into CLOS data structures. Each of

>>>>>>>>>>>> these

>>>>>>>>>>>> new SANE structures has an associated surface representation,

>>>>>>>>>>>> so

>>>>>>>>>>>> they

>>>>>>>>>>>> can be presented in user-friendly form.

>>>>>>>>>>>>

>>>>>>>>>>>> Also, since Axiom is literate software, it should be possible

>>>>>>>>>>>> to

>>>>>>>>>>>> look

>>>>>>>>>>>> at

>>>>>>>>>>>> the code in its literate form with the surrounding explanation.

>>>>>>>>>>>>

>>>>>>>>>>>> Essentially we'd like to have the ability to "deep dive" into

>>>>>>>>>>>> the

>>>>>>>>>>>> Axiom

>>>>>>>>>>>> workspace, not only for debugging, but also for understanding

>>>>>>>>>>>> what

>>>>>>>>>>>> functions are used, where they come from, what they inherit,

>>>>>>>>>>>> and

>>>>>>>>>>>> how they are used in a computation.

>>>>>>>>>>>>

>>>>>>>>>>>> To that end I'm looking at using McClim, a lisp windowing

>>>>>>>>>>>> system.

>>>>>>>>>>>> Since the McClim windows would be part of the lisp image, they

>>>>>>>>>>>> have

>>>>>>>>>>>> access to display (and modify) the Axiom workspace at all

>>>>>>>>>>>> times.

>>>>>>>>>>>>

>>>>>>>>>>>> The only hesitation is that McClim uses quicklisp and drags in

>>>>>>>>>>>> a

>>>>>>>>>>>> lot

>>>>>>>>>>>> of other subsystems. It's all lisp, of course.

>>>>>>>>>>>>

>>>>>>>>>>>> These ideas aren't new. They were available on Symbolics

>>>>>>>>>>>> machines,

>>>>>>>>>>>> a truly productive platform and one I sorely miss.

>>>>>>>>>>>>

>>>>>>>>>>>> Tim

>>>>>>>>>>>>

>>>>>>>>>>>>

>>>>>>>>>>>>

>>>>>>>>>>>> On 1/19/21, Tim Daly <axiomcas@gmail.com> wrote:

>>>>>>>>>>>>> Also of interest is the talk

>>>>>>>>>>>>> "The Unreasonable Effectiveness of Dynamic Typing for

>>>>>>>>>>>>> Practical

>>>>>>>>>>>>> Programs"

>>>>>>>>>>>>> https://vimeo.com/74354480

>>>>>>>>>>>>> which questions whether static typing really has any benefit.

>>>>>>>>>>>>>

>>>>>>>>>>>>> Tim

>>>>>>>>>>>>>

>>>>>>>>>>>>>

>>>>>>>>>>>>> On 1/19/21, Tim Daly &= lt;axiomcas@gmail.c= om> wrote:

>>>>>>>>>>>>>> Peter Naur wrote an article of interest:

>>>>>>>>>>>>>> http://pages.cs.wisc.edu/~remzi/Naur.pdf

>>>>>>>>>>>>>>

>>>>>>>>>>>>>> In particular, it mirrors my notion that Axiom needs

>>>>>>>>>>>>>> to embrace literate programming so that the "theory

>>>>>>>>>>>>>> of the problem" is presented as well as the "theory

>>>>>>>>>>>>>> of the solution". I quote the introduction:

>>>>>>>>>>>>>>

>>>>>>>>>>>>>>

>>>>>>>>>>>>>>

>>>>>>>>>>>>>> This article is, to my mind, the most accurate account

>>>>>>>>>>>>>> of what goes on in designing and coding a program.

>>>>>>>>>>>>>> I refer to it regularly when discussing how much

>>>>>>>>>>>>>> documentation to create, how to pass along tacit

>>>>>>>>>>>>>> knowledge, and the value of the XP's metaphor-setting

>>>>>>>>>>>>>> exercise. It also provides a way to examine a methodology's

>>>>>>>>>>>>>> economic structure.

>>>>>>>>>>>>>>

>>>>>>>>>>>>>> In the article, which follows, note that the quality of the

>>>>>>>>>>>>>> designing programmer's work is related to the quality of

>>>>>>>>>>>>>> the match between his theory of the problem and his theory

>>>>>>>>>>>>>> of the solution. Note that the quality of a later

>>>>>>>>>>>>>> programmer's

>>>>>>>>>>>>>> work is related to the match between his theories and the

>>>>>>>>>>>>>> previous programmer's theories.

>>>>>>>>>>>>>>

>>>>>>>>>>>>>> Using Naur's ideas, the designer's job is not to pass along

>>>>>>>>>>>>>> "the design" but to pass along "the theories" driving the

>>>>>>>>>>>>>> design.

>>>>>>>>>>>>>> The latter goal is more useful and more appropriate. It also

>>>>>>>>>>>>>> highlights that knowledge of the theory is tacit in the

>>>>>>>>>>>>>> owning,

>>>>>>>>>>>>>> and

>>>>>>>>>>>>>> so passing along the theory requires passing along both

>>>>>>>>>>>>>> explicit

>>>>>>>>>>>>>> and tacit knowledge.

>>>>>>>>>>>>>>

>>>>>>>>>>>>>> Tim

>>>>>>>>>>>>>>

>>>>>>>>>>>>>

>>>>>>>>>>>>

>>>>>>>>>>>

>>>>>>>>>>

>>>>>>>>>

>>>>>>>>

>>>>>>>

>>>>>>

>>>>>

>>>>

>>>

>>

>

Axiom has an awkward 'attributes' category structure.

In the SANE version it is clear that these attributes are much

closer to logic 'definitions'. As a result one of the changes

is to create a new 'category'-type structure for definitions.

There will be a new keyword, like the category keyword,

'definition'.

Tim

On Fri, Mar 11, 2022 at 9:46 AM Tim Daly <axiomcas@gmail.com> wrote:


The github lockout continues...

I'm spending some time adding examples to source code.

Any function can have ++X comments added. These will
appear as examples when the function is displayed with
)display. For example, in PermutationGroup there is a
function 'strongGenerators' defined as:

  strongGenerators : % -> L PERM S
    ++ strongGenerators(gp) returns strong generators for
    ++ the group gp.
    ++
    ++X S:List(Integer) := [1,2,3,4]
    ++X G := symmetricGroup(S)
    ++X strongGenerators(G)

Later, in the interpreter we see:

)d op strongGenerators

  There is one exposed function called strongGenerators :
      [1] PermutationGroup(D2) -> List(Permutation(D2)) from
              PermutationGroup(D2)
              if D2 has SETCAT

  Examples of strongGenerators from PermutationGroup

  S:List(Integer) := [1,2,3,4]
  G := symmetricGroup(S)
  strongGenerators(G)

This will show a working example for functions that the
user can copy and use. It is especially useful to show how
to construct working arguments.

These "example" functions are run at build time when
the make command looks like

    make TESTSET=alltests

I hope to add this documentation to all Axiom functions.

In addition, the plan is to add these function calls to the
usual test documentation. That means that all of these examples
will be run and show their output in the final distribution
(mnt/ubuntu/doc/src/input/*.dvi files) so the user can view
the expected output.

Tim

On Fri, Feb 25, 2022 at 6:05 PM Tim Daly <axiomcas@gmail.com> wrote:

It turns out that creating SPAD-looking output is trivial
in Common Lisp. Each class can have a custom print
routine so signatures and ++ comments can each be
printed with their own format.

To ensure that I maintain compatibility I'll be printing
the categories and domains so they look like SPAD code,
at least until I get the proof technology integrated. I will
probably specialize the proof printers to look like the
original LEAN proof syntax.

Internally, however, it will all be Common Lisp.

Common Lisp makes so many desirable features so easy.
It is possible to trace dynamically at any level. One could
even write a trace that showed how Axiom arrived at the
solution. Any domain could have special case output syntax
without affecting any other domain so one could write a
tree-like output for proofs. Using greek characters is trivial
so the input and output notation is more mathematical.

Tim

On Thu, Feb 24, 2022 at 10:24 AM Tim Daly <axiomcas@gmail.com> wrote:

Axiom's SPAD code compiles to Common Lisp.
The AKCL version of Common Lisp compiles to C.
Three languages and 2 compilers is a lot to maintain.
Further, there are very few people able to write SPAD
and even fewer people able to maintain it.

I've decided that the SANE version of Axiom will be
implemented in pure Common Lisp. I've outlined Axiom's
category / type hierarchy in the Common Lisp Object
System (CLOS). I am now experimenting with re-writing
the functions into Common Lisp.

This will have several long-term effects. It simplifies
the implementation issues. SPAD code blocks a lot of
actions and optimizations that Common Lisp provides.
The Common Lisp language has many more people
who can read, modify, and maintain code. It provides for
interoperability with other Common Lisp projects with
no effort. Common Lisp is an international standard,
which ensures that the code will continue to run.

The input / output mathematics will remain the same.
Indeed, with the new generalizations for first-class
dependent types it will be more general.

This is a big change, similar to eliminating BOOT code
and moving to Literate Programming. This will provide a
better platform for future research work. Current research
is focused on merging Axiom's computer algebra mathematics
with Lean's proof language. The goal is to create a system for
"computational mathematics".

Research is the whole point of Axiom.

Tim

On Sat, Jan 22, 2022 at 9:16 PM Tim Daly <axiomcas@gmail.com> wrote:

I can't stress enough how important it is to listen to Hamming's talk.

Axiom will begin to die the day I stop working on it.

However, proving Axiom correct "down to the metal" is fundamental.
It will merge computer algebra and logic, spawning years of new
research.

<= /div>Work on fundamental problems.Tim<= div>On Thu, Dec 30, 2021 at 6:46 PM Tim Daly <axiomcas@gmail.com> wrote:=One of the interesting questions when obtaining a resulti= s "what functions were called and what was their return value?"Otherwise known as the "show your work" idea.=There is an idea called the "writer monad" [0], us= uallyimplemented to facilitate logging. We can exploit this=idea to provide "show your work" capability. Each func= tioncan provide this information inside the monad enabling thequestion to be answered at any time.For = those unfamiliar with the monad idea, the best explanationI'= ve found is this video [1].Tim

[0] Deriving the writer monad from first principles
https://williamyaoh.com/posts/2020-07-26-deriving-writer-monad.html

[1] The Absolute Best Intro to Monads for Software Engineers

On Mon, Dec 13, 2021 at 12:30 AM Tim Daly <axiomcas@gmail.com> wrote:

...(snip)...

Common Lisp has an "open compiler". That allows the ability
to deeply modify compiler behavior using compiler macros
and macros in general. CLOS takes advantage of this to add
typed behavior into the compiler in a way that ALLOWS strict
typing such as found in constructive type theory and ML.
Judgments, a la Crary, are front-and-center.

Whether you USE the discipline afforded is the real question.

Indeed, the Axiom research struggle is essentially one of how
to have a disciplined use of first-class dependent types. The
struggle raises issues of, for example, compiling a dependent
type whose argument is recursive in the compiled type. Since
the new type is first-class it can be constructed at what you
improperly call "run-time". However, it appears that the recursive
type may have to call the compiler at each recursion to generate
the next step, since in some cases it cannot generate "closed code".

I am embedding proofs (in LEAN language) into the type
hierarchy so that theorems, which depend on the type hierarchy,
are correctly inherited. The compiler has to check the proofs of functions
at compile time using these. Hacking up nonsense just won't cut it.

Think of the problem of embedding LEAN proofs in ML or ML in LEAN.
(Actually, Jeremy Avigad might find that research interesting.)

So Matrix(3,3,Float) has inverses (assuming Float is a
field (cough)). The type inherits this theorem and proofs of
functions can use this. But Matrix(3,4,Integer) does not have
inverses, so the proofs cannot use this. The type hierarchy has
to ensure that the proper theorems get inherited.
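The Matrix example can be sketched as a dependent-type constructor that decides, from its arguments, which theorems the resulting type carries. A hedged Python illustration; the names `Matrix` and `has_inverses` are mine, not Axiom's actual hierarchy:

```python
# Sketch: a "dependent type" constructor whose available theorems
# depend on its arguments. Matrix(3, 3) over a field gets the
# invertibility property; Matrix(3, 4) does not.
def Matrix(rows, cols, field=True):
    class M:
        # The theorem "square matrices over a field have inverses"
        # is attached only when the constructor arguments justify it.
        has_inverses = (rows == cols) and field
        shape = (rows, cols)
    return M

assert Matrix(3, 3).has_inverses
assert not Matrix(3, 4).has_inverses
```

Proofs that rely on invertibility could then check `has_inverses` the way inherited theorems would be consulted in the real type hierarchy.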

Making proof technology work at compile time is hard.
(Worse yet, LEAN is a moving target. Sigh.)

On Thu, Nov 25, 2021 at 9:43 AM Tim Daly <axiomcas@gmail.com> wrote:

As you know I've been re-architecting Axiom to use first-class
dependent types and proving the algorithms correct. For example,
the GCD of natural numbers or the GCD of polynomials.

The idea involves "boxing up" the proof with the algorithm (aka
proof-carrying code) in the ELF file (under a crypto hash so it
can't be changed).

Once the code is running on the CPU, the proof is run in parallel
on the field programmable gate array (FPGA). Intel data center
servers have CPUs with built-in FPGAs these days.

There is a bit of a disconnect, though. The GCD code is compiled
machine code but the proof is LEAN-level.

What would be ideal is if the compiler not only compiled the GCD
code to machine code, it also compiled the proof to "machine code".
That is, for each machine instruction, the FPGA proof checker
would ensure that the proof was not violated at the individual
instruction level.

What does it mean to "compile a proof to the machine code level"?

The Milawa effort (Myre14.pdf) does incremental proofs in layers.
To quote from the article [0]:

   We begin with a simple proof checker, call it A, which is short
   enough to verify by the ``social process'' of mathematics -- and
   more recently with a theorem prover for a more expressive logic.

   We then develop a series of increasingly powerful proof checkers,
   call them B, C, D, and so on. We show each of these programs only
   accepts the same formulas as A, using A to verify B, and B to verify
   C, and so on. Then, since we trust A, and A says B is trustworthy, we
   can trust B. Then, since we trust B, and B says C is trustworthy, we
   can trust C.

This gives a technique for "compiling the proof" down to the machine
code level.
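The Milawa layering quoted above can be sketched as a chain of checkers, each vouching for the next. This is a hedged toy illustration, not Milawa's actual code; `checker_a` and `build_next_checker` are invented names:

```python
# Toy sketch of Milawa-style bootstrapping: a trusted base checker A
# verifies checker B, and B then verifies checker C. Trust propagates
# down the chain because each link was accepted by the previous one.
def checker_a(claim):
    # The small, hand-audited base checker: here it only accepts
    # claims of the form ("trustworthy", name).
    return isinstance(claim, tuple) and claim[0] == "trustworthy"

def build_next_checker(verifier, name):
    # Ask the currently trusted verifier to vouch for the next checker.
    if not verifier(("trustworthy", name)):
        raise ValueError(f"{name} was not verified")
    # The newly trusted checker accepts the same claims as its verifier.
    return lambda claim: verifier(claim)

checker_b = build_next_checker(checker_a, "B")  # A vouches for B
checker_c = build_next_checker(checker_b, "C")  # B vouches for C
print(checker_c(("trustworthy", "D")))          # True
```

The real system makes each checker strictly more powerful while proving it accepts only what A accepts; this sketch only models the trust hand-off.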
Ideally, the compiler would have judgments for each step of
the compilation so that each compile step has a justification. I don't
know of any compiler that does this yet. (References welcome.)

At the machine code level, there are techniques that would allow
the FPGA proof to "step in sequence" with the executing code.
Some work has been done on using "Hoare Logic for Realistically
Modelled Machine Code" (paper attached, Myre07a.pdf) and
"Decompilation into Logic -- Improved" (Myre12a.pdf).

So the game is to construct a GCD over some type (Nats, Polys, etc.
Axiom has 22), compile the dependent type GCD to machine code.
In parallel, the proof of the code is compiled to machine code. The
pair is sent to the CPU/FPGA and, while the algorithm runs, the FPGA
ensures the proof is not violated, instruction by instruction.

(I'm ignoring machine architecture issues such as pipelining, out-of-order
execution, branch prediction, and other machine-level things to ponder.
I'm looking at the RISC-V Verilog details by various people to understand
better but it is still a "misty fog" for me.)

The result is proven code "down to the metal".

Tim

On Thu, Nov 25, 2021 at 6:05 AM Tim Daly <axiomcas@gmail.com> wrote:

As you know I've been re-architecting Axiom to use first-class
dependent types and proving the algorithms correct. For example,
the GCD of natural numbers or the GCD of polynomials.

The idea involves "boxing up" the proof with the algorithm (aka
proof-carrying code) in the ELF file (under a crypto hash so it
can't be changed).

Once the code is running on the CPU, the proof is run in parallel
on the field programmable gate array (FPGA). Intel data center
servers have CPUs with built-in FPGAs these days.

There is a bit of a disconnect, though.
The GCD code is compiled
machine code but the proof is LEAN-level.

What would be ideal is if the compiler not only compiled the GCD
code to machine code, it also compiled the proof to "machine code".
That is, for each machine instruction, the FPGA proof checker
would ensure that the proof was not violated at the individual
instruction level.

What does it mean to "compile a proof to the machine code level"?

The Milawa effort (Myre14.pdf) does incremental proofs in layers.
To quote from the article [0]:

   We begin with a simple proof checker, call it A, which is short
   enough to verify by the ``social process'' of mathematics -- and
   more recently with a theorem prover for a more expressive logic.

   We then develop a series of increasingly powerful proof checkers,
   call them B, C, D, and so on. We show each of these programs only
   accepts the same formulas as A, using A to verify B, and B to verify
   C, and so on. Then, since we trust A, and A says B is trustworthy, we
   can trust B. Then, since we trust B, and B says C is trustworthy, we
   can trust C.

This gives a technique for "compiling the proof" down to the machine
code level. Ideally, the compiler would have judgments for each step of
the compilation so that each compile step has a justification. I don't
know of any compiler that does this yet. (References welcome.)

At the machine code level, there are techniques that would allow
the FPGA proof to "step in sequence" with the executing code.
Some work has been done on using "Hoare Logic for Realistically
Modelled Machine Code" (paper attached, Myre07a.pdf) and
"Decompilation into Logic -- Improved" (Myre12a.pdf).

So the game is to construct a GCD over some type (Nats, Polys, etc.
Axiom has 22), compile the dependent type GCD to machine code.
In parallel, the proof of the code is compiled to machine code.
The
pair is sent to the CPU/FPGA and, while the algorithm runs, the FPGA
ensures the proof is not violated, instruction by instruction.

(I'm ignoring machine architecture issues such as pipelining, out-of-order
execution, branch prediction, and other machine-level things to ponder.
I'm looking at the RISC-V Verilog details by various people to understand
better but it is still a "misty fog" for me.)

The result is proven code "down to the metal".

Tim

On Sat, Nov 13, 2021 at 5:28 PM Tim Daly <axiomcas@gmail.com> wrote:

Full support for general, first-class dependent types requires
some changes to the Axiom design. That implies some language
design questions.

Given that mathematics is such a general subject with a lot of
"local" notation and ideas (witness logical judgment notation),
careful thought is needed to design a language that is able to
handle a wide range.

Normally language design is a two-level process. The language
designer creates a language and then an implementation. Various
design choices affect the final language.

There is "The Metaobject Protocol" (MOP) which
encourages a three-level process. The language designer
works at a metalevel to design a family of languages, then the
language specializations, then the implementation. A MOP design
allows the language user to optimize the language to their problem.

A simple paper on the subject is "Metaobject Protocols".

Tim

On Mon, Oct 25, 2021 at 7:42 PM Tim Daly <axiomcas@gmail.com> wrote:

I have a separate thread of research on Self-Replicating Systems
(ref: Kinematics of Self Reproducing Machines)
which led to watching "Strange Dreams of Stranger Loops" by Will Byrd
https://www.youtube.com/watch?v=AffW-7ika0E

Will referenced a PhD thesis by Jon Doyle,
"A Model for Deliberation, Action, and Introspection".

I also read the thesis by J.C.G. Sturdy,
"A Lisp through the Looking Glass".

Self-replication requires the ability to manipulate your own
representation in such a way that changes to that representation
will change behavior.

This leads to two thoughts in the SANE research.

First, "Declarative Representation". That is, most of the things
about the representation should be declarative rather than
procedural. Applying this idea as much as possible makes it
easier to understand and manipulate.

Second, "Explicit Call Stack".
Function calls form an implicit
call stack. This can usually be displayed in a running Lisp system.
However, having the call stack explicitly available would mean
that a system could "introspect" at the first-class level.

These two ideas would make it easy, for example, to let the
system "show the work". One of the normal complaints is that
a system presents an answer but there is no way to know how
that answer was derived. These two ideas make it possible to
understand, display, and even post-answer manipulate
the intermediate steps.

Having the intermediate steps also allows proofs to be
inserted in a step-by-step fashion. This aids the effort to
have proofs run in parallel with computation at the hardware
level.

Tim
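An explicit, first-class call record can be sketched with a decorator that logs every call and its return value. This is a hedged Python illustration of the idea, not anything from Axiom; `traced` and `CALLS` are invented names:

```python
# Sketch: make the call history explicit by recording each call and
# its return value, so the system can later "show the work".
CALLS = []

def traced(f):
    def wrapper(*args):
        result = f(*args)
        CALLS.append((f.__name__, args, result))  # explicit record
        return result
    return wrapper

@traced
def gcd(a, b):
    # Euclid's algorithm; every recursive call goes through the wrapper.
    return a if b == 0 else gcd(b, a % b)

gcd(12, 18)
for name, args, result in CALLS:
    print(f"{name}{args} -> {result}")
```

With the record available as plain data, intermediate steps can be displayed, manipulated after the answer is produced, or paired step-by-step with proof obligations.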

On Thu, Oct 21, 2021 at 9:50 AM Tim Daly <axiomcas@gmail.com> wrote:

So the current struggle involves the categories in Axiom.
The categories and domains constructed using categories
are dependent types. When are dependent types "equal"?
Well, hummmm, that depends on the arguments to the
constructor.

But in order to decide(?) equality we have to evaluate
the arguments (which themselves can be dependent types).
Indeed, we may, and in general, we must evaluate the
arguments at compile time (well, "construction time", as
there isn't really a compiler / interpreter separation anymore.)

That raises the question of what "equality" means. This
is not simply a "set equality" relation. It falls into the
infinite-groupoid of homotopy type theory. In general
it appears that deciding category / domain equivalence
might force us to climb the type hierarchy.

Beyond that, there is the question of "which proof"
applies to the resulting object. Proofs depend on their
assumptions, which might be different for different
constructions. As yet I have no clue how to "index"
proofs based on their assumptions, nor how to
connect these assumptions to the groupoid structure.

My brain hurts.

Tim

On Mon, Oct 18, 2021 at 2:00 AM Tim Daly <axiomcas@gmail.com> wrote:

"Birthing Computational Mathematics"

The Axiom SANE project is difficult at a very fundamental
level. The title "SANE" was chosen due to the various
words found in a thesaurus... "rational", "coherent",
"judicious" and "sound".

These are very high level, amorphous ideas. But so is
the design of SANE. Breaking away from tradition in
computer algebra, type theory, and proof assistants
is very difficult. Ideas tend to fall into standard jargon
which limits both the frame of thinking (e.g. dependent
types) and the content (e.g.
notation).

Questioning both frame and content is very difficult.
It is hard to even recognize when they are accepted
"by default" rather than "by choice". What does the idea
"power tools" mean in a primitive, hand-labor culture?

Christopher Alexander [0] addresses this problem in
a lot of his writing. Specifically, in his book "Notes on
the Synthesis of Form", in his chapter 5, "The Selfconscious
Process", he addresses this problem directly. This is a
"must read" book.

Unlike building design and construction, however, there
are almost no constraints to use as guides. Alexander
quotes Plato's Phaedrus:

  "First, the taking in of scattered particulars under
   one Idea, so that everyone understands what is being
   talked about ... Second, the separation of the Idea
   into parts, by dividing it at the joints, as nature
   directs, not breaking any limb in half as a bad
   carver might."

Lisp, which has been called "clay for the mind", can
build virtually anything that can be thought. The
"joints" are also "of one's choosing" so one is
both carver and "nature".

Clearly the problem is no longer "the tools".
*I* am the problem constraining the solution.

Birthing this "new thing" is slow, difficult, and
uncertain at best.

Tim

[0] Alexander, Christopher "Notes on the Synthesis
of Form" Harvard University Press 1964
ISBN 0-674-62751-2

On Sun, Oct 10, 2021 at 4:40 PM Tim Daly <axiomcas@gmail.com> wrote:

Re: writing a paper... I'm not connected to Academia

so anything I'd write would never make it into print.

"Language level parsing" is still a long way off. The talk

by Guy Steele [2] highlights some of the problems we

currently face using mathematical metanotation.

For example, a professor I know at CCNY (City College

of New York) didn't understand Platzer's "funny

fraction notation" (proof judgements) despite being

an expert in Platzer's differential equations area.

Notation matters and is not widely common.

I spoke to Professor Black (in LTI) about using natural

language in the limited task of a human-robot cooperation

in changing a car tire. I looked at the current machine

learning efforts. They are nowhere near anything but

toy systems, taking too long to train, and are too fragile.

Instead I ended up using a combination of AIML [3]

(Artificial Intelligence Markup Language), the ALICE

Chatbot [4], Forgy's OPS5 rule based program [5],

and Fahlman's SCONE [6] knowledge base. It was

much less fragile in my limited domain problem.

I have no idea how to extend any system to deal with

even undergraduate mathematics parsing.

Nor do I have any idea how I would embed LEAN

knowledge into a SCONE database, although I

think the combination would be useful and interesting.

I do believe that, in the limited area of computational

mathematics, we are capable of building robust, proven

systems that are quite general and extensible. As you

might have guessed I've given it a lot of thought over

the years :-)

A mathematical language seems to need >6 components

1) We need some sort of a specification language, possibly

somewhat 'propositional' that introduces the assumptions

you mentioned (ref. your discussion of numbers being

abstract and ref. your discussion of relevant choice of

assumptions related to a problem).

This is starting to show up in the hardware area (e.g.

Lamport's TLA+ [0])

Of course, specifications relate to proving programs

and, as you recall, I got a cold reception from the

LEAN community about using LEAN for program proofs.

2) We need "scaffolding". That is, we need a theory

that can be reduced to some implementable form

that provides concept-level structure.

Axiom uses group theory for this. Axiom's "category"

structure has "Category" things like Ring. Claiming

to be a Ring brings in a lot of "Signatures" of functions

you have to implement to properly be a Ring.
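The "claiming to be a Ring obliges you to implement its Signatures" idea can be sketched with an abstract class. This is a hedged Python illustration; the method names are my simplification, not Axiom's actual Ring category:

```python
from abc import ABC, abstractmethod

# Sketch: a "Category" as scaffolding. Claiming to be a Ring means
# you must implement the Ring signatures before instances can exist.
class Ring(ABC):
    @abstractmethod
    def add(self, other): ...
    @abstractmethod
    def mul(self, other): ...
    @abstractmethod
    def zero(self): ...

class IntegerDomain(Ring):
    def __init__(self, n): self.n = n
    def add(self, other): return IntegerDomain(self.n + other.n)
    def mul(self, other): return IntegerDomain(self.n * other.n)
    def zero(self): return IntegerDomain(0)

print(IntegerDomain(2).mul(IntegerDomain(3)).n)  # 6
```

A class that claims `Ring` but omits one of the signatures cannot be instantiated, which is the scaffolding doing its job.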

Scaffolding provides a firm mathematical basis for

design. It provides a link between the concept of a

Ring and the expectations you can assume when

you claim your "Domain" "is a Ring". Category

theory might provide similar structural scaffolding

(eventually... I'm still working on that thought garden)

LEAN ought to have a textbook(s?) that structures

the world around some form of mathematics. It isn't

sufficient to say "undergraduate math" is the goal.

There needs to be some coherent organization so

people can bring ideas like Group Theory to the

organization. Which brings me to ...

3) We need "spreading". That is, we need to take

the various definitions and theorems in LEAN and

place them in their proper place in the scaffold.

For example, the Ring category needs the definitions

and theorems for a Ring included in the code for the

Ring category. Similarly, the Commutative category

needs the definitions and theorems that underlie

"commutative" included in the code.

That way, when you claim to be a "Commutative Ring"

you get both sets of definitions and theorems. That is,

the inheritance mechanism will collect up all of the

definitions and theorems and make them available

for proofs.
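The "inheritance mechanism collects up all of the definitions and theorems" idea can be sketched by walking a class hierarchy. A hedged Python illustration; the `theorems` attribute and class names are invented for the sketch:

```python
# Sketch: "spreading" theorems through the category hierarchy.
# Each category carries its own theorems; a domain claiming several
# categories inherits the union of all of them for use in proofs.
class Commutative:
    theorems = ["a*b = b*a"]

class Ring:
    theorems = ["a*(b+c) = a*b + a*c"]

class CommutativeRing(Commutative, Ring):
    theorems = []  # nothing new locally

def collected_theorems(cls):
    # Walk the method resolution order and gather every theorem.
    result = []
    for base in cls.__mro__:
        result.extend(getattr(base, "theorems", []))
    return result

print(collected_theorems(CommutativeRing))
```

A proof attached to a `CommutativeRing` function could draw on both inherited theorem sets, while a plain `Ring` would see only its own.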

I am looking at LEAN's definitions and theorems with

an eye to "spreading" them into the group scaffold of

Axiom.

4) We need "carriers" (Axiom calls them representations,

aka "REP"). REPs allow data structures to be defined

independent of the implementation.

For example, Axiom can construct Polynomials that

have their coefficients in various forms of representation.

You can define "dense" (all coefficients in a list),

"sparse" (only non-zero coefficients), "recursive", etc= .

A "dense polynomial" and a "sparse polynomial" work

exactly the same way as far as the user is concerned.

They both implement the same set of functions. There

is only a difference of representation for efficiency and

this only affects the implementation of the functions,

not their use.
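The dense/sparse REP point can be sketched as two carriers behind one interface. This is a hedged Python illustration; the class and method names are mine, not Axiom's:

```python
# Sketch: two representations (REPs) of a polynomial that present
# exactly the same interface; only the storage differs.
class DensePoly:
    def __init__(self, coeffs):          # [c0, c1, c2, ...]
        self.coeffs = coeffs
    def eval(self, x):
        return sum(c * x**i for i, c in enumerate(self.coeffs))

class SparsePoly:
    def __init__(self, terms):           # {degree: coeff}
        self.terms = terms
    def eval(self, x):
        return sum(c * x**i for i, c in self.terms.items())

# Same polynomial 1 + 3x^2, two carriers, identical behavior:
d = DensePoly([1, 0, 3])
s = SparsePoly({0: 1, 2: 3})
print(d.eval(2), s.eval(2))  # 13 13
```

A caller never needs to know which representation it holds; the choice is purely an efficiency decision inside the implementation.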

Axiom "got this wrong" because it didn't sufficiently

separate the REP from the "Domain". I plan to fix this.

LEAN ought to have a "data structures" subtree that

has all of the definitions and axioms for all of the

existing data structures (e.g. Red-Black trees). This

would be a good undergraduate project.

5) We need "Domains" (in Axiom speak). That is, we

need a box that holds all of the functions that implement

a "Domain". For example, a "Polynomial Domain" would

hold all of the functions for manipulating polynomials

(e.g polynomial multiplication). The "Domain" box

is a dependent type that:

  A) has an argument list of "Categories" that this "Domain"
     box inherits. Thus, the "Integer Domain" inherits
     the definitions and axioms from "Commutative".

     Functions in the "Domain" box can now assume
     and use the properties of being commutative. Proofs
     of functions in this domain can use the definitions
     and proofs about being commutative.

  B) contains an argument that specifies the "REP"
     (aka, the carrier). That way you get all of the
     functions associated with the data structure
     available for use in the implementation.

     Functions in the Domain box can use all of
     the definitions and axioms about the representation
     (e.g. NonNegativeIntegers are always positive).

  C) contains local "spread" definitions and axioms
     that can be used in function proofs.

     For example, a "Square Matrix" domain would
     have local axioms that state that the matrix is
     always square. Thus, functions in that box could
     use these additional definitions and axioms in
     function proofs.

  D) contains local state. A "Square Matrix" domain
     would be constructed as a dependent type that
     specified the size of the square (e.g. a 2x2
     matrix would have '2' as a dependent parameter).

  E) contains implementations of inherited functions.
     A "Category" could have a signature for a GCD
     function and the "Category" could have a default
     implementation. However, the "Domain" could
     have a locally more efficient implementation which
     overrides the inherited implementation.

     Axiom has about 20 GCD implementations that
     differ locally from the default in the category. They
     use properties known locally to be more efficient.

  F) contains local function signatures.

     A "Domain" gives the user more and more unique
     functions. The signatures have associated
     pre- and post-conditions that can be used
     as assumptions in the function proofs.

     Some of the user-available functions are only
     visible if the dependent type would allow them
     to exist. For example, a general Matrix domain
     would have fewer user functions than a Square
     Matrix domain.

     In addition, local "helper" functions need their
     own signatures that are not user visible.

  G) the function implementation for each signature.

     This is obviously where all the magic happens.

  H) the proof of each function.

     This is where I'm using LEAN.

     Every function has a proof. That proof can use
     all of the definitions and axioms inherited from
     the "Category", "Representation", the "Domain
     Local", and the signature pre- and post-
     conditions.

  I) literature links. Algorithms must contain a link
     to at least one literature reference. Of course,
     since everything I do is a Literate Program
     this is obviously required. Knuth said so :-)
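Points A) through F) can be condensed into a sketch: a "Domain" as a dependent-type factory that takes a local parameter and a REP, and carries local axioms and functions. This is a hedged Python illustration; every name here (`SquareMatrix`, `rep`, `axioms`, `trace`) is invented for the sketch:

```python
# Sketch: a "Domain" box as a dependent type. The constructor takes
# the dependent parameter (the size n) and a REP (the carrier); the
# resulting class carries local axioms, local state, and functions.
def SquareMatrix(n, rep=list):
    class Domain:
        size = n                                  # D) local state
        axioms = ["matrix is always square"]      # C) local axioms

        def __init__(self, rows):
            # The local "always square" axiom, enforced at construction.
            assert len(rows) == n and all(len(r) == n for r in rows)
            self.rows = rep(rows)                 # B) store via the REP

        def trace(self):                          # F) square-only function
            return sum(self.rows[i][i] for i in range(n))
    return Domain

M2 = SquareMatrix(2)
m = M2([[1, 2], [3, 4]])
print(m.trace())  # 5
```

Here `trace` exists only because the dependent type guarantees squareness, mirroring how a general Matrix domain would expose fewer functions than a Square Matrix domain.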

LEAN ought to have "books" or "pamphlets" that

bring together all of this information for a domain

such as Square Matrices. That way a user can

find all of the related ideas, available functions,

and their corresponding proofs in one place.

6) User level presentation.

    This is where the systems can differ significantly.

    Axiom and LEAN both have GCD but they use
    that for different purposes.

    I'm trying to connect LEAN's GCD and Axiom's GCD
    so there is a "computational mathematics" idea that
    allows the user to connect proofs and implementations.
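Connecting a proof to an implementation can be sketched, at its weakest, as runtime-checked pre- and post-conditions on GCD (the contract a LEAN proof would discharge statically). A hedged Python illustration; the conditions shown are standard GCD properties, and the checking style is mine, not Axiom's or LEAN's:

```python
# Sketch: a GCD whose "proof obligations" are checked at runtime.
# A real proof (e.g. in LEAN) would discharge these statically.
def gcd(a, b):
    assert a >= 0 and b >= 0            # pre-condition
    x, y = a, b
    while y != 0:                       # Euclid's algorithm
        x, y = y, x % y
    # post-conditions: the result divides both inputs (when nonzero)
    assert a % x == 0 if x else a == 0
    assert b % x == 0 if x else b == 0
    return x

print(gcd(12, 18))  # 6
```

The interesting research step is replacing the runtime asserts with a machine-checked proof that travels with the compiled code.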

7) Trust

Unlike everything else, computational mathematics

can have proven code that gives various guarantees.

I have been working on this aspect for a while.

I refer to it as trust "down to the metal". The idea is

that a proof of the GCD function and the implementation

of the GCD function get packaged into the ELF format.

(proof carrying code). When the GCD algorithm executes

on the CPU, the GCD proof is run through the LEAN

proof checker on an FPGA in parallel.

(I just recently got a PYNQ Xilinx board [1] with a CPU

and FPGA together. I'm trying to implement the LEAN

proof checker on the FPGA).

We are on the cusp of a revolution in computational

mathematics. But the two pillars (proof and computer

algebra) need to get to know each other.

Tim

[0] Lamport, Leslie "Chapter on TLA+"

in "Software Specification Methods"

https://www.springer.com/gp/book/9781852333539

(I no longer have CMU library access or I'd send you

the book PDF)

[1] https://www.tul.com.tw/productspynq-z2.html

[2] https://www.youtube.com/watch?v=dCuZkaaou0Q

[3] "ARTIFICIAL INTELLIGENCE MARKUP LANGUAGE"

https://arxiv.org/pdf/1307.3091.pdf

[4] ALICE Chatbot

http://www.scielo.org.mx/pdf/cys= /v19n4/1405-5546-cys-19-04-00625.pdf

[5] OPS5 User Manual

https://kilthub.cm= u.edu/articles/journal_contribution/OPS5_user_s_manual/6608090/1

[6] Scott Fahlman "SCONE"

http://www.cs.cmu.edu/~sef/scone/

On 9/27/21, Tim Daly <axiomcas@gmail.com> wrote:

> I have tried to maintain a list of names of people who have

> helped Axiom, going all the way back to the pre-Scratchpad

> days. The names are listed at the beginning of each book.

> I also maintain a bibliography of publications I've read or

> that have had an indirect influence on Axiom.

>

> Credit is "the coin of the realm". It is easy to share and w= rong

> to ignore. It is especially damaging to those in Academia who

> are affected by credit and citations in publications.

>

> Apparently I'm not the only person who feels that way. The ACM

> Turing award seems to have ignored a lot of work:

>

> Scientific Integrity, the 2021 Turing Lecture, and the 2018 Turing

> Award for Deep Learning

> https://people.idsia.ch/~juergen/scientific-integrity-turing-award-deep-learning.html

>

> I worked on an AI problem at IBM Research called Ketazolam.

> (https://en.wikipedia.org/wiki/Ketazolam). The idea was to recognize

> and associate 3D chemical drawings with their drug counterparts.

> I used Rumelhart and McClelland's books. These books contained

> quite a few ideas that seem to be "new and innovative" among the

> machine learning crowd... but the books are from 1987. I don't believe

> I've seen these books mentioned in any recent bibliography.

> https://mitpress.mit.edu/books/parallel-distributed-processing-volume-1

>

>

>

>

> On 9/27/21, Tim Daly <axiomcas@gmail.com> wrote:

>> Greg Wilson asked "How Reliable is Scientific Software?"=

>> https://neverworkintheory.org/2021/09/25/how-reliable-is-scientific-software.html

>>

>> which is a really interesting read. For example:

>>

>>  [Hatton1994] is now a quarter of a century old, but its conclusions

>> are still fresh. The authors fed the same data into nine commercial

>> geophysical software packages and compared the results; they found

>> that, "numerical disagreement grows at around the rate of 1% in

>> average absolute difference per 4000 lines of implemented code, and,

>> even worse, the nature of the disagreement is nonrandom" (i.e., the

>> authors of different packages make similar mistakes).

>>

>>

>> On 9/26/21, Tim Daly <axiomcas@gmail.com> wrote:

>>> I should note that the lastest board I've just unboxed

>>> (a PYNQ-Z2) is a Zynq Z-7020 chip from Xilinx (AMD).

>>>

>>> What makes it interesting is that it contains 2 hard

>>> core processors and an FPGA, connected by 9 paths

>>> for communication. The processors can be run

>>> independently so there is the possibility of a parallel

>>> version of some Axiom algorithms (assuming I had

>>> the time, which I don't).

>>>

>>> Previously either the hard (physical) processor was

>>> separate from the FPGA with minimal communication

>>> or the soft core processor had to be created in the FPGA

>>> and was much slower.

>>>

>>> Now the two have been combined in a single chip.

>>> That means that my effort to run a proof checker on

>>> the FPGA and the algorithm on the CPU just got to

>>> the point where coordination is much easier.

>>>

>>> Now all I have to do is figure out how to program this

>>> beast.

>>>

>>> There is no such thing as a simple job.

>>>

>>> Tim

>>>

>>>

>>> On 9/26/21, Tim Daly <axiomcas@gmail.com> wrote:

>>>> I'm familiar with most of the traditional approaches

>>>> like Theorema. The bibliography contains most of the

>>>> more interesting sources. [0]

>>>>

>>>> There is a difference between traditional approaches to

>>>> connecting computer algebra and proofs and my approach.

>>>>

>>>> Proving an algorithm, like the GCD, in Axiom is hard.

>>>> There are many GCDs (e.g. NNI vs POLY) and there

>>>> are theorems and proofs passed at runtime in the

>>>> arguments of the newly constructed domains. This

>>>> involves a lot of dependent type theory and issues of

>>>> compile time / runtime argument evaluation. The issues

>>>> that arise are difficult and still being debated in the type

>>>> theory community.

>>>>

>>>> I am putting the definitions, theorems, and proofs (DTP)

>>>> directly into the category/domain hierarchy. Each category

>>>> will have the DTP specific to it. That way a commutative

>>>> domain will inherit a commutative theorem and a

>>>> non-commutative domain will not.

>>>>

>>>> Each domain will have additional DTPs associated with

>>>> the domain (e.g. NNI vs Integer) as well as any DTPs

>>>> it inherits from the category hierarchy. Functions in the

>>>> domain will have associated DTPs.

>>>>

>>>> A function to be proven will then inherit all of the relev= ant

>>>> DTPs. The proof will be attached to the function and

>>>> both will be sent to the hardware (proof-carrying code).
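As a toy illustration of that inheritance scheme (plain Python, not Axiom code; the class names and theorem labels are invented for this sketch), a category can carry its own DTPs and a domain collects everything it inherits:

```python
# Hypothetical sketch of DTP (definitions/theorems/proofs) inheritance
# through a category hierarchy. Not Axiom code; all names are invented.

class Category:
    theorems = []                      # DTPs attached at this level

    @classmethod
    def inherited_theorems(cls):
        """Collect every theorem attached anywhere up the hierarchy."""
        collected = []
        for klass in cls.__mro__:
            for thm in vars(klass).get("theorems", []):
                if thm not in collected:
                    collected.append(thm)
        return collected

class Monoid(Category):
    theorems = ["associativity", "identity"]

class CommutativeMonoid(Monoid):
    theorems = ["commutativity"]

class NNI(CommutativeMonoid):          # a commutative domain
    pass

class SquareMatrix(Monoid):            # a non-commutative domain
    pass

print("commutativity" in NNI.inherited_theorems())           # True
print("commutativity" in SquareMatrix.inherited_theorems())  # False
```

The commutative domain picks up the commutativity theorem automatically; the non-commutative one never sees it.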

>>>> The proof checker, running on a field programmable

>>>> gate array (FPGA), will be checked at runtime in

>>>> parallel with the algorithm running on the CPU

>>>> (aka "trust down to the metal"). (Note that Intel

>>>> and AMD have built CPU/FPGA combined chips,

>>>> currently only available in the cloud.)

>>>>

>>>>

>>>>

>>>> I am (slowly) making progress on the research.

>>>>

>>>> I have the hardware and nearly have the proof

>>>> checker from LEAN running on my FPGA.

>>>>

>>>> I'm in the process of spreading the DTPs from

>>>> LEAN across the category/domain hierarchy.

>>>>

>>>> The current Axiom build extracts all of the functions

>>>> but does not yet have the DTPs.

>>>>

>>>> I have to restructure the system, including the compiler

>>>> and interpreter to parse and inherit the DTPs. I

>>>> have some of that code but only some of the code

>>>> has been pushed to the repository (volume 15) but

>>>> that is rather trivial, out of date, and incomplete.

>>>>

>>>> I'm clearly not smart enough to prove the Risch

>>>> algorithm and its associated machinery but the needed

>>>> definitions and theorems will be available to someone

>>>> who wants to try.

>>>>

>>>> [0] https://github.com/daly/PDFS/blob/master/bookvolbib.pdf

>>>>

>>>>

>>>> On 8/19/21, Tim Daly <axiomcas@gmail.com> wrote:

>>>>> =========================================

>>>>>

>>>>> REVIEW (Axiom on WSL2 Windows)

>>>>>

>>>>>

>>>>> So the steps to run Axiom from a Windows desktop

>>>>>

>>>>> 1 Windows) install XMing on Windows for X11 server

>>>>>

>>>>> http://www.straightrunning.com/XmingNotes/

>>>>>

>>>>> 2 WSL2) Install Axiom in WSL2

>>>>>

>>>>> sudo apt install axiom

>>>>>

>>>>> 3 WSL2) modify /usr/bin/axiom to fix the bug:

>>>>> (someone changed the axiom startup script.

>>>>> It won't work on WSL2. I don't know who or

>>>>> how to get it fixed).

>>>>>

>>>>> sudo emacs /usr/bin/axiom

>>>>>

>>>>> (split the line into 3 and add quote marks)

>>>>>

>>>>> export SPADDEFAULT=/usr/local/axiom/mnt/linux

>>>>> export AXIOM=/usr/lib/axiom-20170501

>>>>> export "PATH=/usr/lib/axiom-20170501/bin:$PATH"

>>>>>

>>>>> 4 WSL2) create a .axiom.input file to include startup cmds:

>>>>>

>>>>> emacs .axiom.input

>>>>>

>>>>> )cd "/mnt/c/yourpath"

>>>>> )sys pwd

>>>>>

>>>>> 5 WSL2) create a "myaxiom" command that sets the

>>>>>      DISPLAY variable and starts axiom

>>>>> emacs myaxiom

>>>>>

>>>>> #! /bin/bash

>>>>> export DISPLAY=:0.0

>>>>> axiom

>>>>>

>>>>> 6 WSL2) put it in the /usr/bin directory

>>>>>

>>>>> chmod +x myaxiom

>>>>> sudo cp myaxiom /usr/bin/myaxiom

>>>>>

>>>>> 7 WINDOWS) start the X11 server

>>>>>

>>>>> (XMing XLaunch Icon on your desktop)

>>>>>

>>>>> 8 WINDOWS) run myaxiom from PowerShell

>>>>> (this should start axiom with graphics available)

>>>>>

>>>>> wsl myaxiom

>>>>>

>>>>> 9 WINDOWS) make a PowerShell desktop shortcut

>>>>>

>>>>> https://superuser.com/questions/886951/run-powershell-script-when-you-open-powershell

>>>>>

>>>>> Tim

>>>>>

>>>>> On 8/13/21, Tim Daly <axiomcas@gmail.com> wrote:

>>>>>> A great deal of thought is directed toward making the SANE version

>>>>>> of Axiom as flexible as possible, decoupling mechanism from theory.

>>>>>>

>>>>>> An interesting publication by Brian Cantwell Smith [0], "Reflection

>>>>>> and Semantics in LISP" seems to contain interesting ideas related

>>>>>> to our goal. Of particular interest is the ability to reason about

>>>>>> and

>>>>>> perform self-referential manipulations. In a dependently-typed

>>>>>> system it seems interesting to be able to "adapt" code to handle

>>>>>> run-time computed arguments to dependent functions. The abstract:

>>>>>>

>>>>>>    "We show how a computational system can be constructed

>>>>>>    to "reason", effectively and consequentially, about its own

>>>>>>    inferential processes. The analysis proceeds in two parts.

>>>>>>    First, we consider the general question of computational

>>>>>>    semantics, rejecting traditional approaches, and arguing that

>>>>>>    the declarative and procedural aspects of computational

>>>>>>    symbols (what they stand for, and what behaviour they

>>>>>>    engender) should be analysed independently, in order that

>>>>>>    they may be coherently related. Second, we investigate

>>>>>>    self-referential behavior in computational processes, and

>>>>>>    show how to embed an effective procedural model of a

>>>>>>    computational calculus within that calculus (a model not

>>>>>>    unlike a meta-circular interpreter, but connected to the

>>>>>>    fundamental operations of the machine in such a way as to

>>>>>>    provide, at any point in a computation, fully articulated

>>>>>>    descriptions of the state of that computation, for inspection

>>>>>>    and possible modification). In terms of the theories that

>>>>>>    result from these investigations, we present a general

>>>>>>    architecture for procedurally reflective processes, able to

>>>>>>    shift smoothly between dealing with a given subject domain,

>>>>>>    and dealing with their own reasoning processes over that

>>>>>>    domain.

>>>>>>

>>>>>>    An instance of the general solution is worked out in the

>>>>>>    context of an applicative language. Specifically, we present

>>>>>>    three successive dialects of LISP: 1-LISP, a distillation of

>>>>>>    current practice, for comparison purposes; 2-LISP, a dialect

>>>>>>    constructed in terms of our rationalised semantics, in which

>>>>>>    the concept of evaluation is rejected in favour of

>>>>>>    independent notions of simplification and reference, and in

>>>>>>    which the respective categories of notation, structure,

>>>>>>    semantics, and behaviour are strictly aligned; and 3-LISP, an

>>>>>>    extension of 2-LISP endowed with reflective powers."

>>>>>>

>>>>>> Axiom SANE builds dependent types on the fly. The ability to access

>>>>>> both the reflection

>>>>>> of the tower of algebra and the reflection of the tower of proofs at

>>>>>> the time of construction

>>>>>> makes the construction of a new domain or specific algorithm easier

>>>>>> and more general.

>>>>>>

>>>>>> This is of particular interest because one of the efforts is to build

>>>>>> "all the way down to the

>>>>>> metal". If each layer is constructed on top of previous proven layers

>>>>>> and the new layer

>>>>>> can "reach below" to lower layers then the tower of layers can be

>>>>>> built without duplication.

>>>>>>

>>>>>> Tim

>>>>>>

>>>>>> [0] Smith, Brian Cantwell "Reflection and Semantics in LISP"

>>>>>> POPL '84: Proceedings of the 11th ACM SIGACT-SIGPLAN

>>>>>> Symposium on Principles of Programming Languages, January 1984,

>>>>>> Pages 23-35. https://doi.org/10.1145/800017.800513

>>>>>>

>>>>>> On 6/29/21, Tim Daly <axiomcas@gmail.com> wrote:

>>>>>>> Having spent time playing with hardware it is perfectly clear that

>>>>>>> future computational mathematics efforts need to adapt to using

>>>>>>> parallel processing.

>>>>>>>

>>>>>>> I've spent a fair bit of time thinking about structuring Axiom to

>>>>>>> be parallel. Most past efforts have tried to focus on making a

>>>>>>> particular algorithm parallel, such as a matrix multiply.

>>>>>>>

>>>>>>> But I think that it might be more effective to make each domain

>>>>>>> run in parallel. A computation crosses multiple domains so a

>>>>>>> particular computation could involve multiple parallel copies.

>>>>>>>

>>>>>>> For example, computing the Cylindrical Algebraic Decomposition

>>>>>>> could recursively decompose the plane. Indeed, any tree-recursive

>>>>>>> algorithm could be run in parallel "in the large" by creating new

>>>>>>> running copies of the domain for each sub-problem.

>>>>>>>

>>>>>>> So the question becomes, how does one manage this?

>>>>>>>

>>>>>>> A similar problem occurs in robotics where one could have multiple

>>>>>>> wheels, arms, propellers, etc. that need to act independently but

>>>>>>> in coordination.

>>>>>>>

>>>>>>> The robot solution uses ROS2. The three ideas are ROSCORE,

>>>>>>> TOPICS with publish/subscribe, and SERVICES with request/response.

>>>>>>> These are communication paths defined between processes.

>>>>>>>

>>>>>>> ROS2 has a "roscore" which is basically a phonebook of "topics".

>>>>>>> Any process can create or look up the current active topics. e.g.:

>>>>>>>

>>>>>>>    rosnode list

>>>>>>>

>>>>>>> TOPICS:

>>>>>>>

>>>>>>> Any process can PUBLISH a topic (which is basically a typed data

>>>>>>> structure), e.g. the topic /hw with the String data "Hello World".

>>>>>>> e.g.:

>>>>>>>

>>>>>>>    rostopic pub /hw std_msgs/String "Hello, World"

>>>>>>>

>>>>>>> Any process can SUBSCRIBE to a topic, such as /hw, and get a

>>>>>>> copy of the data. e.g.:

>>>>>>>

>>>>>>>    rostopic echo /hw   ==> "Hello, World"

>>>>>>>

>>>>>>> Publishers talk, subscribers listen.

>>>>>>>

>>>>>>>

>>>>>>> SERVICES:

>>>>>>>

>>>>>>> Any process can make a REQUEST of a SERVICE and get a RESPONSE.

>>>>>>> This is basically a remote function call.
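The two communication patterns can be sketched in a few lines of plain Python (a toy single-process registry standing in for roscore; none of this is the actual ROS2 API):

```python
# Toy sketch of the two ROS-style communication patterns:
# TOPICS (publish/subscribe) and SERVICES (request/response).
# Single-process illustration only, not the real ROS2 API.

from collections import defaultdict
from math import gcd

class Core:
    """Plays the role of roscore: a phonebook of topics and services."""
    def __init__(self):
        self.subscribers = defaultdict(list)   # topic -> callbacks
        self.services = {}                     # name  -> handler

    # TOPICS: publishers talk, subscribers listen.
    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        for callback in self.subscribers[topic]:
            callback(message)              # every subscriber gets a copy

    # SERVICES: a remote-function-call style request/response.
    def advertise(self, name, handler):
        self.services[name] = handler

    def request(self, name, *args):
        return self.services[name](*args)

core = Core()
heard = []
core.subscribe("/hw", heard.append)
core.publish("/hw", "Hello, World")
print(heard)                               # ['Hello, World']

core.advertise("/nni/gcd", gcd)            # a "domain" exposing a function
print(core.request("/nni/gcd", 12, 30))    # 6
```

In a distributed version each domain would run in its own process and the phonebook would be shared, but the pattern is the same.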

>>>>>>>

>>>>>>>

>>>>>>>

>>>>>>> Axiom in parallel?

>>>>>>>

>>>>>>> So domains could run, each in its own process. Each could provide

>>>>>>> services, one for each function. Any other process could request

>>>>>>> a computation and get the result as a response. Domains could

>>>>>>> request services from other domains, either waiting for responses

>>>>>>> or continuing while the response is being computed.

>>>>>>>

>>>>>>> The output could be sent anywhere, to a terminal, to a browser,

>>>>>>> to a network, or to another process using the publish/subscribe

>>>>>>> protocol, potentially all at the same time since there can be many

>>>>>>> subscribers to a topic.

>>>>>>>

>>>>>>> Available domains could be dynamically added by announcing

>>>>>>> themselves as new "topics" and could be dynamically looked up

>>>>>>> at runtime.

>>>>>>>

>>>>>>> This structure allows function-level / domain-level parallelism.

>>>>>>> It is very effective in the robot world and I think it might be a

>>>>>>> good structuring mechanism to allow computational mathematics

>>>>>>> to take advantage of multiple processors in a disciplined fashion.

>>>>>>>

>>>>>>> Axiom has a thousand domains and each could run on its own core.

>>>>>>>

>>>>>>> In addition, notice that each domain is independent of the others.

>>>>>>> So if we want to use BLAS Fortran code, it could just be another

>>>>>>> service node. In fact, any "foreign function" could transparently

>>>>>>> cooperate in a distributed Axiom.

>>>>>>>

>>>>>>> Another key feature is that proofs can be "by node".

>>>>>>>

>>>>>>> Tim

>>>>>>>

>>>>>>>

>>>>>>>

>>>>>>>

>>>>>>> On 6/5/21, Tim Daly <axiomcas@gmail.com> wrote:

>>>>>>>> Axiom is based on first-class dependent types. Deciding when

>>>>>>>> two types are equivalent may involve computation. See

>>>>>>>> Christiansen, David Thrane "Checking Dependent Types with

>>>>>>>> Normalization by Evaluation" (2019)

>>>>>>>> This puts an interesting constraint on building types. The

>>>>>>>> constructed types have to export a function to decide if a

>>>>>>>> given type is "equivalent" to itself.

>>>>>>>>

>>>>>>>> The notion of "equivalence" might involve category ideas

>>>>>>>> of natural transformation and univalence. Sigh.
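A toy version of that exported equivalence test (Python; the type constructor and its "normalization" here are invented stand-ins, where evaluating a size expression plays the role of normalization by evaluation):

```python
# Toy sketch: a constructed type exports its own equivalence test.
# Deciding equivalence may require computation -- here, evaluating
# the size expression down to a normal form before comparing.
# Names are invented for illustration; this is not Axiom's machinery.

class SquareMatrixType:
    def __init__(self, size_expr, elem):
        self.size_expr = size_expr   # a deferred computation, e.g. lambda: 2 + 2
        self.elem = elem             # element type name

    def normalize(self):
        # "Normalization by evaluation": run the computation to get
        # a canonical form of the type.
        return ("SquareMatrix", self.size_expr(), self.elem)

    def equivalent(self, other):
        # The exported decision procedure for type equivalence.
        return self.normalize() == other.normalize()

a = SquareMatrixType(lambda: 2 + 2, "Integer")
b = SquareMatrixType(lambda: 4, "Integer")
c = SquareMatrixType(lambda: 5, "Integer")
print(a.equivalent(b))   # True: both normalize to size 4
print(a.equivalent(c))   # False
```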

>>>>>>>>

>>>>>>>> That's an interesting design point.

>>>>>>>>

>>>>>>>> Tim

>>>>>>>>

>>>>>>>>

>>>>>>>> On 5/5/21, Tim Daly <axiomcas@gmail.com> wrote:

>>>>>>>>> It is interesting that programmers' eyes and expectations adapt

>>>>>>>>> to the tools they use. For instance, I use emacs and expect to

>>>>>>>>> work directly in files and multiple buffers. When I try to use one

>>>>>>>>> of the many IDE tools I find they tend to "get in the way". I

>>>>>>>>> already

>>>>>>>>> know or can quickly find whatever they try to tell me. If you use

>>>>>>>>> an

>>>>>>>>> IDE you probably find emacs "too sparse" for programming.

>>>>>>>>>

>>>>>>>>> Recently I've been working in a sparse programming environment.

>>>>>>>>> I'm exploring the question of running a proof checker in an FPGA.

>>>>>>>>> The FPGA development tools are painful at best and not intuitive

>>>>>>>>> since you SEEM to be programming but you're actually describing

>>>>>>>>> hardware gates, connections, and timing. This is an environment

>>>>>>>>> where everything happens all-at-once and all-the-time (like the

>>>>>>>>> circuits in your computer). It is the "assembly language of

>>>>>>>>> circuits".

>>>>>>>>> Naturally, my eyes have adapted to this rather raw level.

>>>>>>>>>

>>>>>>>>> That said, I'm normally doing literate programming all the time.

>>>>>>>>> My typical file is a document which is a mixture of latex and

>>>>>>>>> lisp.

>>>>>>>>> It is something of a shock to return to that world. It is clear

>>>>>>>>> why

>>>>>>>>> people who program in Python find lisp to be a "sea of parens".

>>>>>>>>> Yet as a lisp programmer, I don't even see the parens, just code.

>>>>>>>>>

>>>>>>>>> It takes a few minutes in a literate document to adapt vision to

>>>>>>>>> see the latex / lisp combination as natural. The latex markup,

>>>>>>>>> like the lisp parens, eventually just disappears. What remains

>>>>>>>>> is just lisp and natural language text.

>>>>>>>>>

>>>>>>>>> This seems painful at first but eyes quickly adapt. The upside

>>>>>>>>> is that there is always a "finished" document that describes the

>>>>>>>>> state of the code. The overhead of writing a paragraph to

>>>>>>>>> describe a new function or change a paragraph to describe the

>>>>>>>>> changed function is very small.

>>>>>>>>>

>>>>>>>>> Using a Makefile I latex the document to generate a current PDF

>>>>>>>>> and then I extract, load, and execute the code. This loop catches

>>>>>>>>> errors in both the latex and the source code. Keeping an open file

>>>>>>>>> in

>>>>>>>>> my pdf viewer shows all of the changes in the document after every

>>>>>>>>> run of make. That way I can edit the book as easily as the code.
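The extract step of that loop can be sketched in a few lines of Python (assuming, purely for illustration, that code chunks in the latex source are delimited by \begin{chunk}{name} ... \end{chunk}):

```python
# Minimal sketch of the "extract" step of the literate loop: pull
# named code chunks out of a latex document so they can be loaded
# and executed. The chunk delimiters are an assumption made for
# this illustration.

import re

def extract_chunks(latex_text):
    """Return {chunk_name: code} for every \\begin{chunk}{name} block."""
    pattern = re.compile(
        r"\\begin\{chunk\}\{([^}]*)\}\n(.*?)\\end\{chunk\}",
        re.DOTALL)
    return {name: body for name, body in pattern.findall(latex_text)}

doc = r"""
Some explanatory prose about the function.
\begin{chunk}{gcd}
(defun my-gcd (a b)
  (if (zerop b) a (my-gcd b (mod a b))))
\end{chunk}
More prose.
"""

chunks = extract_chunks(doc)
print(sorted(chunks))             # ['gcd']
print("my-gcd" in chunks["gcd"])  # True
```

A Makefile target can run this extractor, feed the chunks to the lisp image, and rebuild the PDF in the same pass.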

>>>>>>>>>

>>>>>>>>> Ultimately I find that writing the book while writing the code is

>>>>>>>>> more productive. I don't have to remember why I wrote something

>>>>>>>>> since the explanation is already there.

>>>>>>>>>

>>>>>>>>> We all have our own way of programming and our own tools.

>>>>>>>>> But I find literate programming to be a real advance over IDE

>>>>>>>>> style programming and "raw code" programming.

>>>>>>>>>

>>>>>>>>> Tim

>>>>>>>>>

>>>>>>>>>

>>>>>>>>> On 2/27/21, Tim Daly <axiomcas@gmail.com> wrote:

>>>>>>>>>> The systems I use have the interesting property of

>>>>>>>>>> "Living within the compiler".

>>>>>>>>>>

>>>>>>>>>> Lisp, Forth, Emacs, and other systems that present themselves

>>>>>>>>>> through the Read-Eval-Print-Loop (REPL) allow the

>>>>>>>>>> ability to deeply interact with the system, shaping it to your

>>>>>>>>>> need.

>>>>>>>>>>

>>>>>>>>>> My current thread of study is software architecture. See

>>>>>>>>>> https://www.youtube.com/watch?v=W2hagw1VhhI&feature=youtu.be

>>>>>>>>>> and https://www.georgefairbanks.com/videos/

>>>>>>>>>>

>>>>>>>>>> My current thinking on SANE involves the ability to

>>>>>>>>>> dynamically define categories, representations, and functions

>>>>>>>>>> along with "composition functions" that permit choosing a

>>>>>>>>>> combination at the time of use.

>>>>>>>>>>

>>>>>>>>>> You might want a domain for handling polynomials. There are

>>>>>>>>>> a lot of choices, depending on your use case. You might want

>>>>>>>>>> different representations. For example, you might want dense,

>>>>>>>>>> sparse, recursive, or "machine compatible fixnums" (e.g. to

>>>>>>>>>> interface with C code). If these don't exist it ought to be

>>>>>>>>>> possible

>>>>>>>>>> to create them. Such "lego-like" building blocks require careful

>>>>>>>>>> thought about creating "fully factored" objects.
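A toy version of that kind of "composition function" (Python; the representations and names are invented for illustration and are not Axiom's actual polynomial machinery):

```python
# Toy sketch of lego-like composition: pick a polynomial
# representation at the time of use. All names are invented.

class DensePoly:
    """Coefficient list, index = exponent: x^2 + 3 -> [3, 0, 1]."""
    def __init__(self, coeffs):
        self.coeffs = list(coeffs)
    def degree(self):
        return len(self.coeffs) - 1

class SparsePoly:
    """{exponent: coefficient} pairs: x^100 + 3 -> {0: 3, 100: 1}."""
    def __init__(self, terms):
        self.terms = dict(terms)
    def degree(self):
        return max(self.terms)

def make_polynomial(rep="dense"):
    # The "composition function": choose a representation per use case.
    return {"dense": DensePoly, "sparse": SparsePoly}[rep]

P = make_polynomial("sparse")
p = P({0: 3, 100: 1})          # x^100 + 3, wasteful as a dense list
print(p.degree())               # 100
```

The same operations are exported either way; only the representation behind them changes.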

>>>>>>>>>>

>>>>>>>>>> Given that goal, the traditional barrier of "compiler" vs

>>>>>>>>>> "interpreter"

>>>>>>>>>> does not seem useful. It is better to "live within the compiler"

>>>>>>>>>> which

>>>>>>>>>> gives the ability to define new things "on the fly".

>>>>>>>>>>

>>>>>>>>>> Of course, the SANE compiler is going to want an associated

>>>>>>>>>> proof of the functions you create along with the other parts

>>>>>>>>>> such as its category hierarchy and representation properties.

>>>>>>>>>>

>>>>>>>>>> There is no such thing as a simple job. :-)

>>>>>>>>>>

>>>>>>>>>> Tim

>>>>>>>>>>

>>>>>>>>>>

>>>>>>>>>> On 2/18/21, Tim Daly <axiomcas@gmail.com> wrote:

>>>>>>>>>>> The Axiom SANE compiler / interpreter has a few design points.

>>>>>>>>>>>

>>>>>>>>>>> 1) It needs to mix interpreted and compiled code in the same

>>>>>>>>>>> function.

>>>>>>>>>>> SANE allows dynamic construction of code as well as dynamic type

>>>>>>>>>>> construction at runtime. Both of these can occur in a runtime

>>>>>>>>>>> object.

>>>>>>>>>>> So there is potentially a mixture of interpreted and compiled

>>>>>>>>>>> code.

>>>>>>>>>>>

>>>>>>>>>>> 2) It needs to perform type resolution at compile time without

>>>>>>>>>>> overhead

>>>>>>>>>>> where possible. Since this is not always possible there needs to

>>>>>>>>>>> be

>>>>>>>>>>> a "prefix thunk" that will perform the resolution. Trivially,

>>>>>>>>>>> for

>>>>>>>>>>> example,

>>>>>>>>>>> if we have a + function we need to type-resolve the arguments.

>>>>>>>>>>>

>>>>>>>>>>> However, if we can prove at compile time that the types are both

>>>>>>>>>>> bounded-NNI and the result is bounded-NNI (i.e. fixnum in lisp)

>>>>>>>>>>> then we can inline a call to + at runtime. If not, we might have

>>>>>>>>>>> + applied to NNI and POLY(FLOAT), which requires a thunk to

>>>>>>>>>>> resolve types. The thunk could even "specialize and compile"

>>>>>>>>>>> the code before executing it.

>>>>>>>>>>> It turns out that the Forth implementation of

>>>>>>>>>>> "threaded-interpreted"

>>>>>>>>>>> languages model provides an efficient and effective way to do

>>>>>>>>>>> this.[0]

>>>>>>>>>>> Type resolution can be "inserted" in intermediate thunks.

>>>>>>>>>>> The model also supports dynamic overloading and tail recursion.

>>>>>>>>>>>

>>>>>>>>>>> Combining high-level CLOS code with low-level threading gives an

>>>>>>>>>>> easy to understand and robust design.

>>>>>>>>>>>

>>>>>>>>>>> Tim

>>>>>>>>>>>

>>>>>>>>>>> [0] Loeliger, R.G. "Threaded Interpretive Languages" (1981)

>>>>>>>>>>> ISBN 0-07-038360-X

>>>>>>>>>>>

>>>>>>>>>>>

>>>>>>>>>>>

>>>>>>>>>>>

>>>>>>>>>>> On 2/5/21, Tim Daly <axiomcas@gmail.com> wrote:

>>>>>>>>>>>> I've worked hard to make Axiom depend on almost no other

>>>>>>>>>>>> tools so that it would not get caught by "code rot" of

>>>>>>>>>>>> libraries.

>>>>>>>>>>>>

>>>>>>>>>>>> However, I'm also trying to make the new SANE version much

>>>>>>>>>>>> easier to understand and debug. To that end I've been

>>>>>>>>>>>> experimenting

>>>>>>>>>>>> with some ideas.

>>>>>>>>>>>>

>>>>>>>>>>>> It should be possible to view source code, of course. But the

>>>>>>>>>>>> source

>>>>>>>>>>>> code is not the only, nor possibly the best, representation of

>>>>>>>>>>>> the

>>>>>>>>>>>> ideas.

>>>>>>>>>>>> In particular, source code gets compiled into data structures.

>>>>>>>>>>>> In

>>>>>>>>>>>> Axiom

>>>>>>>>>>>> these data structures really are a graph of related structures.

>>>>>>>>>>>>

>>>>>>>>>>>> For example, looking at the gcd function from NNI, there is the

>>>>>>>>>>>> representation of the gcd function itself. But there is also a

>>>>>>>>>>>> structure

>>>>>>>>>>>> that is the REP (and, in the new system, is separate from the

>>>>>>>>>>>> domain).

>>>>>>>>>>>>

>>>>>>>>>>>> Further, there are associated specification and proof

>>>>>>>>>>>> structures. Even

>>>>>>>>>>>> further, the domain inherits the category structures, and from

>>>>>>>>>>>> those it

>>>>>>>>>>>> inherits logical axioms and definitions through the proof

>>>>>>>>>>>> structure.

>>>>>>>>>>>>

>>>>>>>>>>>> Clearly the gcd function is a node in a much larger graph

>>>>>>>>>>>> structure.

>>>>>>>>>>>>

>>>>>>>>>>>> When trying to decide why code won't compile it would be useful

>>>>>>>>>>>> to

>>>>>>>>>>>> be able to see and walk these structures. I've thought about

>>>>>>>>>>>> using

>>>>>>>>>>>> the

>>>>>>>>>>>> browser but browsers are too weak. Either everything has to be

>>>>>>>>>>>> "in

>>>>>>>>>>>> a

>>>>>>>>>>>> single tab to show the graph" or "the nodes of the graph are in

>>>>>>>>>>>> different

>>>>>>>>>>>> tabs". Plus, constructing dynamic graphs that change as the

>>>>>>>>>>>> software

>>>>>>>>>>>> changes (e.g. by loading a new spad file or creating a new

>>>>>>>>>>>> function)

>>>>>>>>>>>> represents the huge problem of keeping the browser "in sync

>>>>>>>>>>>> with

>>>>>>>>>>>> the

>>>>>>>>>>>> Axiom workspace". So something more dynamic and embedded is

>>>>>>>>>>>> needed.

>>>>>>>>>>>>

>>>>>>>>>>>> Axiom source gets compiled into CLOS data structures. Each of

>>>>>>>>>>>> these

>>>>>>>>>>>> new SANE structures has an associated surface representation,

>>>>>>>>>>>> so

>>>>>>>>>>>> they

>>>>>>>>>>>> can be presented in user-friendly form.

>>>>>>>>>>>>

>>>>>>>>>>>> Also, since Axiom is literate software, it should be possible

>>>>>>>>>>>> to

>>>>>>>>>>>> look

>>>>>>>>>>>> at

>>>>>>>>>>>> the code in its literate form with the surrounding explanation.

>>>>>>>>>>>>

>>>>>>>>>>>> Essentially we'd like to have the ability to "deep dive" into

>>>>>>>>>>>> the

>>>>>>>>>>>> Axiom

>>>>>>>>>>>> workspace, not only for debugging, but also for understanding

>>>>>>>>>>>> what

>>>>>>>>>>>> functions are used, where they come from, what they inherit,

>>>>>>>>>>>> and

>>>>>>>>>>>> how they are used in a computation.

>>>>>>>>>>>>

>>>>>>>>>>>> To that end I'm looking at using McClim, a lisp windowing

>>>>>>>>>>>> system.

>>>>>>>>>>>> Since the McClim windows would be part of the lisp image, they

>>>>>>>>>>>> have

>>>>>>>>>>>> access to display (and modify) the Axiom workspace at all

>>>>>>>>>>>> times.

>>>>>>>>>>>>

>>>>>>>>>>>> The only hesitation is that McClim uses quicklisp and drags in

>>>>>>>>>>>> a

>>>>>>>>>>>> lot

>>>>>>>>>>>> of other subsystems. It's all lisp, of course.

>>>>>>>>>>>>

>>>>>>>>>>>> These ideas aren't new. They were available on Symbolics

>>>>>>>>>>>> machines,

>>>>>>>>>>>> a truly productive platform and one I sorely miss.

>>>>>>>>>>>>

>>>>>>>>>>>> Tim

>>>>>>>>>>>>

>>>>>>>>>>>>

>>>>>>>>>>>>

>>>>>>>>>>>> On 1/19/21, Tim Daly <axiomcas@gmail.com> wrote:

>>>>>>>>>>>>> Also of interest is the talk

>>>>>>>>>>>>> "The Unreasonable Effectiveness of Dynamic Typing for

>>>>>>>>>>>>> Practical

>>>>>>>>>>>>> Programs"

>>>>>>>>>>>>> https://vimeo.com/74354480

>>>>>>>>>>>>> which questions whether static typing really has any benefit.

>>>>>>>>>>>>>

>>>>>>>>>>>>> Tim

>>>>>>>>>>>>>

>>>>>>>>>>>>>

>>>>>>>>>>>>> On 1/19/21, Tim Daly <axiomcas@gmail.com> wrote:

>>>>>>>>>>>>>> Peter Naur wrote an article of interest:

>>>>>>>>>>>>>> http://pages.cs.wisc.edu/~remzi/Naur.pdf

>>>>>>>>>>>>>>

>>>>>>>>>>>>>> In particular, it mirrors my notion that Axiom needs

>>>>>>>>>>>>>> to embrace literate programming so that the "theory

>>>>>>>>>>>>>> of the problem" is presented as well as the "theory

>>>>>>>>>>>>>> of the solution". I quote the introduction:

>>>>>>>>>>>>>>

>>>>>>>>>>>>>>

>>>>>>>>>>>>>>

>>>>>>>>>>>>>> This article is, to my mind, the most accurate account

>>>>>>>>>>>>>> of what goes on in designing and coding a program.

>>>>>>>>>>>>>> I refer to it regularly when discussing how much

>>>>>>>>>>>>>> documentation to create, how to pass along tacit

>>>>>>>>>>>>>> knowledge, and the value of the XP's metaphor-setting

>>>>>>>>>>>>>> exercise. It also provides a way to examine a methodology's

>>>>>>>>>>>>>> economic structure.

>>>>>>>>>>>>>>

>>>>>>>>>>>>>> In the article, which follows, note that the quality of the

>>>>>>>>>>>>>> designing programmer's work is related to the quality of

>>>>>>>>>>>>>> the match between his theory of the problem and his theory

>>>>>>>>>>>>>> of the solution. Note that the quality of a later

>>>>>>>>>>>>>> programmer's

>>>>>>>>>>>>>> work is related to the match between his theories and the

>>>>>>>>>>>>>> previous programmer's theories.

>>>>>>>>>>>>>>

>>>>>>>>>>>>>> Using Naur's ideas, the designer's job is not to pass along

>>>>>>>>>>>>>> "the design" but to pass along "the theories" driving the

>>>>>>>>>>>>>> design.

>>>>>>>>>>>>>> The latter goal is more useful and more appropriate. It also

>>>>>>>>>>>>>> highlights that knowledge of the theory is tacit in the

>>>>>>>>>>>>>> owning,

>>>>>>>>>>>>>> and

>>>>>>>>>>>>>> so passing along the theory requires passing along both

>>>>>>>>>>>>>> explicit

>>>>>>>>>>>>>> and tacit knowledge.

>>>>>>>>>>>>>>

>>>>>>>>>>>>>> Tim

>>>>>>>>>>>>>>

>>>>>>>>>>>>>

>>>>>>>>>>>>

>>>>>>>>>>>

>>>>>>>>>>

>>>>>>>>>

>>>>>>>>

>>>>>>>

>>>>>>

>>>>>

>>>>

>>>

>>

>