From: Tim Daly <axiomcas@gmail.com>
Date: Fri, 11 Mar 2022 09:46:48 -0500
Subject: Re: Axiom musings...
To: axiom-dev <axiom-developer@nongnu.org>, Tim Daly

The github lockout continues...

I'm spending some time adding examples to source code.
Any function can have ++X comments added.
These will appear as examples when the function is shown
with )display. For example, in PermutationGroup there is a
function 'strongGenerators' defined as:

strongGenerators : % -> L PERM S
  ++ strongGenerators(gp) returns strong generators for
  ++ the group gp.
  ++
  ++X S:List(Integer) := [1,2,3,4]
  ++X G := symmetricGroup(S)
  ++X strongGenerators(G)

Later, in the interpreter we see:

)d op strongGenerators

There is one exposed function called strongGenerators :
   [1] PermutationGroup(D2) -> List(Permutation(D2))
            from PermutationGroup(D2)
            if D2 has SETCAT

Examples of strongGenerators from PermutationGroup

S:List(Integer) := [1,2,3,4]
G := symmetricGroup(S)
strongGenerators(G)

This shows a working example that the user can copy and
use. It is especially useful for showing how to construct
working arguments.

These "example" functions are run at build time when the
make command looks like

make TESTSET=alltests

I hope to add this documentation to all Axiom functions.

In addition, the plan is to add these function calls to the
usual test documentation. That means that all of these
examples will be run and show their output in the final
distribution (mnt/ubuntu/doc/src/input/*.dvi files) so the
user can view the expected output.

Tim

On Fri, Feb 25, 2022 at 6:05 PM Tim Daly wrote:

> It turns out that creating SPAD-looking output is trivial
> in Common Lisp. Each class can have a custom print
> routine so signatures and ++ comments can each be
> printed with their own format.
>
> To ensure that I maintain compatibility I'll be printing
> the categories and domains so they look like SPAD code,
> at least until I get the proof technology integrated. I will
> probably specialize the proof printers to look like the
> original LEAN proof syntax.
>
> Internally, however, it will all be Common Lisp.
>
> Common Lisp makes so many desirable features so easy.
> It is possible to trace dynamically at any level.
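The custom print-routine idea quoted above can be sketched with CLOS's standard print-object mechanism. This is a minimal sketch with illustrative names (it is not actual SANE code): a Lisp-level signature object is given its own printer so it displays as SPAD-looking source.

```lisp
;; Sketch: a CLOS print-object method makes a Lisp-level signature
;; object display as SPAD-looking text (names are illustrative).
(defclass signature ()
  ((name   :initarg :name   :reader sig-name)
   (source :initarg :source :reader sig-source)
   (target :initarg :target :reader sig-target)))

(defmethod print-object ((sig signature) stream)
  ;; prints e.g.  strongGenerators : % -> List(Permutation(S))
  (format stream "~a : ~a -> ~a"
          (sig-name sig) (sig-source sig) (sig-target sig)))
```

Because print-object is consulted everywhere the object is printed, the SPAD-looking form appears in the REPL, in traces, and in error messages without any further work.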
> One could even write a trace that showed how Axiom
> arrived at the solution. Any domain could have special
> case output syntax without affecting any other domain,
> so one could write a tree-like output for proofs. Using
> Greek characters is trivial, so the input and output
> notation is more mathematical.
>
> Tim
>
> On Thu, Feb 24, 2022 at 10:24 AM Tim Daly wrote:
>
>> Axiom's SPAD code compiles to Common Lisp.
>> The AKCL version of Common Lisp compiles to C.
>> Three languages and two compilers are a lot to maintain.
>> Further, there are very few people able to write SPAD
>> and even fewer people able to maintain it.
>>
>> I've decided that the SANE version of Axiom will be
>> implemented in pure Common Lisp. I've outlined Axiom's
>> category / type hierarchy in the Common Lisp Object
>> System (CLOS). I am now experimenting with re-writing
>> the functions into Common Lisp.
>>
>> This will have several long-term effects. It simplifies
>> the implementation issues. SPAD code blocks a lot of
>> actions and optimizations that Common Lisp provides.
>> The Common Lisp language has many more people
>> who can read, modify, and maintain code. It provides
>> interoperability with other Common Lisp projects with
>> no effort. Common Lisp is an international standard,
>> which ensures that the code will continue to run.
>>
>> The input / output mathematics will remain the same.
>> Indeed, with the new generalizations for first-class
>> dependent types it will be more general.
>>
>> This is a big change, similar to eliminating BOOT code
>> and moving to Literate Programming. This will provide a
>> better platform for future research work. Current research
>> is focused on merging Axiom's computer algebra mathematics
>> with Lean's proof language. The goal is to create a system for
>> "computational mathematics".
>>
>> Research is the whole point of Axiom.
>>
>> Tim
>>
>> On Sat, Jan 22, 2022 at 9:16 PM Tim Daly wrote:
>>
>>> I can't stress enough how important it is to listen to Hamming's talk
>>> https://www.youtube.com/watch?v=a1zDuOPkMSw
>>>
>>> Axiom will begin to die the day I stop working on it.
>>>
>>> However, proving Axiom correct "down to the metal" is fundamental.
>>> It will merge computer algebra and logic, spawning years of new
>>> research.
>>>
>>> Work on fundamental problems.
>>>
>>> Tim
>>>
>>> On Thu, Dec 30, 2021 at 6:46 PM Tim Daly wrote:
>>>
>>>> One of the interesting questions when obtaining a result
>>>> is "what functions were called and what was their return value?"
>>>> Otherwise known as the "show your work" idea.
>>>>
>>>> There is an idea called the "writer monad" [0], usually
>>>> implemented to facilitate logging. We can exploit this
>>>> idea to provide "show your work" capability. Each function
>>>> can provide this information inside the monad, enabling the
>>>> question to be answered at any time.
>>>>
>>>> For those unfamiliar with the monad idea, the best explanation
>>>> I've found is this video [1].
>>>>
>>>> Tim
>>>>
>>>> [0] Deriving the writer monad from first principles
>>>> https://williamyaoh.com/posts/2020-07-26-deriving-writer-monad.html
>>>>
>>>> [1] The Absolute Best Intro to Monads for Software Engineers
>>>> https://www.youtube.com/watch?v=C2w45qRc3aU
>>>>
>>>> On Mon, Dec 13, 2021 at 12:30 AM Tim Daly wrote:
>>>>
>>>>> ...(snip)...
>>>>>
>>>>> Common Lisp has an "open compiler". That allows the ability
>>>>> to deeply modify compiler behavior using compiler macros,
>>>>> and macros in general. CLOS takes advantage of this to add
>>>>> typed behavior into the compiler in a way that ALLOWS strict
>>>>> typing such as found in constructive type theory and ML.
>>>>> Judgments, a la Crary, are front-and-center.
>>>>>
>>>>> Whether you USE the discipline afforded is the real question.
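The writer monad mentioned above can be sketched in a few lines of Common Lisp (all names here are illustrative, not Axiom code). Each step returns a pair of a value and a log of work done, and `bind` threads the logs through, so the final result carries its own "shown work":

```lisp
;; Minimal writer-monad sketch (hypothetical names, not Axiom code).
;; A "monadic value" is a cons of (value . list-of-log-lines).

(defun unit (x)
  "Wrap X with an empty log."
  (cons x nil))

(defun bind (mv f)
  "Apply F to the value in MV; F returns another (value . log) pair.
The logs are appended, so every step's work is retained."
  (let ((result (funcall f (car mv))))
    (cons (car result) (append (cdr mv) (cdr result)))))

(defun logged-square (x)
  (cons (* x x) (list (format nil "squared ~a" x))))

(defun logged-incr (x)
  (cons (+ x 1) (list (format nil "incremented ~a" x))))

;; (bind (bind (unit 3) #'logged-square) #'logged-incr)
;; => (10 "squared 3" "incremented 9")
```

The value and its derivation travel together, so "show your work" is just a matter of printing the cdr of the result at any point.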
>>>>> >>>>> Indeed, the Axiom research struggle is essentially one of how >>>>> to have a disciplined use of first-class dependent types. The >>>>> struggle raises issues of, for example, compiling a dependent >>>>> type whose argument is recursive in the compiled type. Since >>>>> the new type is first-class it can be constructed at what you >>>>> improperly call "run-time". However, it appears that the recursive >>>>> type may have to call the compiler at each recursion to generate >>>>> the next step since in some cases it cannot generate "closed code". >>>>> >>>>> I am embedding proofs (in LEAN language) into the type >>>>> hierarchy so that theorems, which depend on the type hierarchy, >>>>> are correctly inherited. The compiler has to check the proofs of >>>>> functions >>>>> at compile time using these. Hacking up nonsense just won't cut it. >>>>> Think >>>>> of the problem of embedding LEAN proofs in ML or ML in LEAN. >>>>> (Actually, Jeremy Avigad might find that research interesting.) >>>>> >>>>> So Matrix(3,3,Float) has inverses (assuming Float is a >>>>> field (cough)). The type inherits this theorem and proofs of >>>>> functions can use this. But Matrix(3,4,Integer) does not have >>>>> inverses so the proofs cannot use this. The type hierarchy has >>>>> to ensure that the proper theorems get inherited. >>>>> >>>>> Making proof technology work at compile time is hard. >>>>> (Worse yet, LEAN is a moving target. Sigh.) >>>>> >>>>> >>>>> >>>>> On Thu, Nov 25, 2021 at 9:43 AM Tim Daly wrote: >>>>> >>>>>> >>>>>> As you know I've been re-architecting Axiom to use first class >>>>>> dependent types and proving the algorithms correct. For example, >>>>>> the GCD of natural numbers or the GCD of polynomials. >>>>>> >>>>>> The idea involves "boxing up" the proof with the algorithm (aka >>>>>> proof carrying code) in the ELF file (under a crypto hash so it >>>>>> can't be changed). 
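The inheritance constraint above (Matrix(3,3,Float) inherits the inverse theorem, Matrix(3,4,Integer) does not) can be pictured in Lean-style pseudocode. This is an illustrative sketch, not real LEAN or mathlib syntax: the point is only that the theorem's hypotheses (square shape, field entries) control which types in the hierarchy can inherit it.

```lean
-- Illustrative sketch only (not mathlib): the inverse theorem is
-- stated for square matrices over a field, so a 3x4 integer matrix
-- can never inherit it -- its type does not satisfy the hypotheses.
theorem inverse_exists {n : Nat} {K : Type} [Field K]
    (A : Matrix (Fin n) (Fin n) K) (h : A.det ≠ 0) :
    ∃ B, A * B = 1 := by
  sorry
```

A type hierarchy that tracks these hypotheses is exactly what lets proofs of functions in one domain use the theorem while proofs in another domain cannot.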
>>>>>>
>>>>>> Once the code is running on the CPU, the proof is run in parallel
>>>>>> on the field programmable gate array (FPGA). Intel data center
>>>>>> servers have CPUs with built-in FPGAs these days.
>>>>>>
>>>>>> There is a bit of a disconnect, though. The GCD code is compiled
>>>>>> machine code but the proof is LEAN-level.
>>>>>>
>>>>>> What would be ideal is if the compiler not only compiled the GCD
>>>>>> code to machine code, it also compiled the proof to "machine code".
>>>>>> That is, for each machine instruction, the FPGA proof checker
>>>>>> would ensure that the proof was not violated at the individual
>>>>>> instruction level.
>>>>>>
>>>>>> What does it mean to "compile a proof to the machine code level"?
>>>>>>
>>>>>> The Milawa effort (Myre14.pdf) does incremental proofs in layers.
>>>>>> To quote from the article [0]:
>>>>>>
>>>>>>   We begin with a simple proof checker, call it A, which is short
>>>>>>   enough to verify by the ``social process'' of mathematics -- and
>>>>>>   more recently with a theorem prover for a more expressive logic.
>>>>>>
>>>>>>   We then develop a series of increasingly powerful proof checkers,
>>>>>>   call them B, C, D, and so on. We show each of these programs only
>>>>>>   accepts the same formulas as A, using A to verify B, and B to
>>>>>>   verify C, and so on. Then, since we trust A, and A says B is
>>>>>>   trustworthy, we can trust B. Then, since we trust B, and B says
>>>>>>   C is trustworthy, we can trust C.
>>>>>>
>>>>>> This gives a technique for "compiling the proof" down to the machine
>>>>>> code level. Ideally, the compiler would have judgments for each step
>>>>>> of the compilation so that each compile step has a justification. I
>>>>>> don't know of any compiler that does this yet. (References welcome.)
>>>>>>
>>>>>> At the machine code level, there are techniques that would allow
>>>>>> the FPGA proof to "step in sequence" with the executing code.
>>>>>> Some work has been done on using "Hoare Logic for Realistically
>>>>>> Modelled Machine Code" (paper attached, Myre07a.pdf) and
>>>>>> "Decompilation into Logic -- Improved" (Myre12a.pdf).
>>>>>>
>>>>>> So the game is to construct a GCD over some type (Nats, Polys, etc.
>>>>>> Axiom has 22), then compile the dependent type GCD to machine code.
>>>>>> In parallel, the proof of the code is compiled to machine code. The
>>>>>> pair is sent to the CPU/FPGA and, while the algorithm runs, the FPGA
>>>>>> ensures the proof is not violated, instruction by instruction.
>>>>>>
>>>>>> (I'm ignoring machine architecture issues such as pipelining,
>>>>>> out-of-order execution, branch prediction, and other machine-level
>>>>>> things to ponder. I'm looking at the RISC-V Verilog details by
>>>>>> various people to understand better, but it is still a "misty fog"
>>>>>> for me.)
>>>>>>
>>>>>> The result is proven code "down to the metal".
>>>>>>
>>>>>> Tim
>>>>>>
>>>>>> [0]
>>>>>> https://www.cs.utexas.edu/users/moore/acl2/manuals/current/manual/index-seo.php/ACL2____MILAWA
>>>>>>
>>>>>> On Thu, Nov 25, 2021 at 6:05 AM Tim Daly wrote:
>>>>>>
>>>>>>> On Sat, Nov 13, 2021 at 5:28 PM Tim Daly wrote:
>>>>>>>
>>>>>>>> Full support for general, first-class dependent types requires
>>>>>>>> some changes to the Axiom design. That implies some language
>>>>>>>> design questions.
>>>>>>>>
>>>>>>>> Given that mathematics is such a general subject with a lot of
>>>>>>>> "local" notation and ideas (witness logical judgment notation),
>>>>>>>> careful thought is needed to design a language that is able to
>>>>>>>> handle a wide range.
>>>>>>>>
>>>>>>>> Normally language design is a two-level process. The language
>>>>>>>> designer creates a language and then an implementation. Various
>>>>>>>> design choices affect the final language.
>>>>>>>>
>>>>>>>> There is "The Metaobject Protocol" (MOP)
>>>>>>>> https://www.amazon.com/Art-Metaobject-Protocol-Gregor-Kiczales/dp/0262610744
>>>>>>>> which encourages a three-level process. The language designer
>>>>>>>> works at a Metalevel to design a family of languages, then the
>>>>>>>> language specializations, then the implementation. A MOP design
>>>>>>>> allows the language user to optimize the language to their problem.
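As a concrete taste of the MOP's three-level idea, a metaclass lets the "language designer" level change what classes themselves mean. A minimal sketch with illustrative names follows; real code would also need an implementation-specific validate-superclass method (e.g. via closer-mop), which is omitted here:

```lisp
;; Sketch: a metaclass customizes the object system itself.
;; Every instantiation of a class whose metaclass is COUNTED-CLASS
;; is tallied on the class object.
(defclass counted-class (standard-class)
  ((count :initform 0 :accessor instance-count)))

(defmethod make-instance :after ((class counted-class)
                                 &rest initargs &key &allow-other-keys)
  (declare (ignore initargs))
  (incf (instance-count class)))

;; Usage (needs validate-superclass approval in most implementations):
;; (defclass point () ((x) (y)) (:metaclass counted-class))
;; (make-instance 'point)  ; tallied on (find-class 'point)
```

The user's ordinary defclass forms are unchanged; the behavior of the language itself was adjusted one level up.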
>>>>>>>>
>>>>>>>> A simple paper on the subject is "Metaobject Protocols"
>>>>>>>> https://users.cs.duke.edu/~vahdat/ps/mop.pdf
>>>>>>>>
>>>>>>>> Tim
>>>>>>>>
>>>>>>>> On Mon, Oct 25, 2021 at 7:42 PM Tim Daly wrote:
>>>>>>>>
>>>>>>>>> I have a separate thread of research on Self-Replicating Systems
>>>>>>>>> (ref: Kinematics of Self Reproducing Machines
>>>>>>>>> http://www.molecularassembler.com/KSRM.htm),
>>>>>>>>> which led to watching "Strange Dreams of Stranger Loops" by
>>>>>>>>> Will Byrd
>>>>>>>>> https://www.youtube.com/watch?v=AffW-7ika0E
>>>>>>>>>
>>>>>>>>> Will referenced a PhD thesis by Jon Doyle,
>>>>>>>>> "A Model for Deliberation, Action, and Introspection".
>>>>>>>>>
>>>>>>>>> I also read the thesis by J.C.G. Sturdy,
>>>>>>>>> "A Lisp through the Looking Glass".
>>>>>>>>>
>>>>>>>>> Self-replication requires the ability to manipulate your own
>>>>>>>>> representation in such a way that changes to that representation
>>>>>>>>> will change behavior.
>>>>>>>>>
>>>>>>>>> This leads to two thoughts in the SANE research.
>>>>>>>>>
>>>>>>>>> First, "Declarative Representation". That is, most of the things
>>>>>>>>> about the representation should be declarative rather than
>>>>>>>>> procedural. Applying this idea as much as possible makes it
>>>>>>>>> easier to understand and manipulate.
>>>>>>>>>
>>>>>>>>> Second, "Explicit Call Stack". Function calls form an implicit
>>>>>>>>> call stack. This can usually be displayed in a running lisp system.
>>>>>>>>> However, having the call stack explicitly available would mean
>>>>>>>>> that a system could "introspect" at the first-class level.
>>>>>>>>>
>>>>>>>>> These two ideas would make it easy, for example, to let the
>>>>>>>>> system "show the work". One of the normal complaints is that
>>>>>>>>> a system presents an answer but there is no way to know how
>>>>>>>>> that answer was derived.
These two ideas make it possible to >>>>>>>>> understand, display, and even post-answer manipulate >>>>>>>>> the intermediate steps. >>>>>>>>> >>>>>>>>> Having the intermediate steps also allows proofs to be >>>>>>>>> inserted in a step-by-step fashion. This aids the effort to >>>>>>>>> have proofs run in parallel with computation at the hardware >>>>>>>>> level. >>>>>>>>> >>>>>>>>> Tim >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> On Thu, Oct 21, 2021 at 9:50 AM Tim Daly >>>>>>>>> wrote: >>>>>>>>> >>>>>>>>>> So the current struggle involves the categories in Axiom. >>>>>>>>>> >>>>>>>>>> The categories and domains constructed using categories >>>>>>>>>> are dependent types. When are dependent types "equal"? >>>>>>>>>> Well, hummmm, that depends on the arguments to the >>>>>>>>>> constructor. >>>>>>>>>> >>>>>>>>>> But in order to decide(?) equality we have to evaluate >>>>>>>>>> the arguments (which themselves can be dependent types). >>>>>>>>>> Indeed, we may, and in general, we must evaluate the >>>>>>>>>> arguments at compile time (well, "construction time" as >>>>>>>>>> there isn't really a compiler / interpreter separation anymore.) >>>>>>>>>> >>>>>>>>>> That raises the question of what "equality" means. This >>>>>>>>>> is not simply a "set equality" relation. It falls into the >>>>>>>>>> infinite-groupoid of homotopy type theory. In general >>>>>>>>>> it appears that deciding category / domain equivalence >>>>>>>>>> might force us to climb the type hierarchy. >>>>>>>>>> >>>>>>>>>> Beyond that, there is the question of "which proof" >>>>>>>>>> applies to the resulting object. Proofs depend on their >>>>>>>>>> assumptions which might be different for different >>>>>>>>>> constructions. As yet I have no clue how to "index" >>>>>>>>>> proofs based on their assumptions, nor how to >>>>>>>>>> connect these assumptions to the groupoid structure. >>>>>>>>>> >>>>>>>>>> My brain hurts. 
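The "Explicit Call Stack" idea above can be sketched in a few lines of Common Lisp (names are illustrative, not SANE code): each call pushes a first-class frame onto a special variable, and completed frames are retained so the derivation can be displayed afterward.

```lisp
;; Sketch of an "explicit call stack" (illustrative names): each call
;; pushes a first-class frame that the system can inspect afterward.
(defvar *call-stack* nil
  "Explicit, introspectable stack of (function-name . arguments) frames.")

(defvar *trace* nil
  "Completed frames, retained so the system can 'show the work'.")

(defmacro with-frame ((name &rest args) &body body)
  "Run BODY with a frame for NAME pushed onto *CALL-STACK*."
  `(let ((*call-stack* (cons (list ',name ,@args) *call-stack*)))
     (push (first *call-stack*) *trace*)
     ,@body))

(defun my-gcd (a b)
  (with-frame (my-gcd a b)
    (if (zerop b) a (my-gcd b (mod a b)))))

;; After (my-gcd 12 8), and assuming *trace* started empty,
;; (reverse *trace*) holds the derivation:
;; ((MY-GCD 12 8) (MY-GCD 8 4) (MY-GCD 4 0))
```

Because the frames are ordinary lists, the same data could drive a tree-like proof display or be manipulated after the answer is produced.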
>>>>>>>>>>
>>>>>>>>>> Tim
>>>>>>>>>>
>>>>>>>>>> On Mon, Oct 18, 2021 at 2:00 AM Tim Daly wrote:
>>>>>>>>>>
>>>>>>>>>>> "Birthing Computational Mathematics"
>>>>>>>>>>>
>>>>>>>>>>> The Axiom SANE project is difficult at a very fundamental
>>>>>>>>>>> level. The title "SANE" was chosen due to the various
>>>>>>>>>>> words found in a thesaurus... "rational", "coherent",
>>>>>>>>>>> "judicious" and "sound".
>>>>>>>>>>>
>>>>>>>>>>> These are very high level, amorphous ideas. But so is
>>>>>>>>>>> the design of SANE. Breaking away from tradition in
>>>>>>>>>>> computer algebra, type theory, and proof assistants
>>>>>>>>>>> is very difficult. Ideas tend to fall into standard jargon,
>>>>>>>>>>> which limits both the frame of thinking (e.g. dependent
>>>>>>>>>>> types) and the content (e.g. notation).
>>>>>>>>>>>
>>>>>>>>>>> Questioning both frame and content is very difficult.
>>>>>>>>>>> It is hard to even recognize when they are accepted
>>>>>>>>>>> "by default" rather than "by choice". What does the idea
>>>>>>>>>>> "power tools" mean in a primitive, hand labor culture?
>>>>>>>>>>>
>>>>>>>>>>> Christopher Alexander [0] addresses this problem in
>>>>>>>>>>> a lot of his writing. Specifically, in his book "Notes on
>>>>>>>>>>> the Synthesis of Form", in chapter 5, "The Selfconscious
>>>>>>>>>>> Process", he addresses this problem directly. This is a
>>>>>>>>>>> "must read" book.
>>>>>>>>>>>
>>>>>>>>>>> Unlike building design and construction, however, there
>>>>>>>>>>> are almost no constraints to use as guides. Alexander
>>>>>>>>>>> quotes Plato's Phaedrus:
>>>>>>>>>>>
>>>>>>>>>>>   "First, the taking in of scattered particulars under
>>>>>>>>>>>   one Idea, so that everyone understands what is being
>>>>>>>>>>>   talked about ... Second, the separation of the Idea
>>>>>>>>>>>   into parts, by dividing it at the joints, as nature
>>>>>>>>>>>   directs, not breaking any limb in half as a bad
>>>>>>>>>>>   carver might."
>>>>>>>>>>>
>>>>>>>>>>> Lisp, which has been called "clay for the mind", can
>>>>>>>>>>> build virtually anything that can be thought. The
>>>>>>>>>>> "joints" are also "of one's choosing", so one is
>>>>>>>>>>> both carver and "nature".
>>>>>>>>>>>
>>>>>>>>>>> Clearly the problem is no longer "the tools".
>>>>>>>>>>> *I* am the problem constraining the solution.
>>>>>>>>>>> Birthing this "new thing" is slow, difficult, and
>>>>>>>>>>> uncertain at best.
>>>>>>>>>>>
>>>>>>>>>>> Tim
>>>>>>>>>>>
>>>>>>>>>>> [0] Alexander, Christopher "Notes on the Synthesis
>>>>>>>>>>> of Form" Harvard University Press 1964
>>>>>>>>>>> ISBN 0-674-62751-2
>>>>>>>>>>>
>>>>>>>>>>> On Sun, Oct 10, 2021 at 4:40 PM Tim Daly wrote:
>>>>>>>>>>>
>>>>>>>>>>>> Re: writing a paper... I'm not connected to Academia,
>>>>>>>>>>>> so anything I'd write would never make it into print.
>>>>>>>>>>>>
>>>>>>>>>>>> "Language level parsing" is still a long way off. The talk
>>>>>>>>>>>> by Guy Steele [2] highlights some of the problems we
>>>>>>>>>>>> currently face using mathematical metanotation.
>>>>>>>>>>>>
>>>>>>>>>>>> For example, a professor I know at CCNY (City College
>>>>>>>>>>>> of New York) didn't understand Platzer's "funny
>>>>>>>>>>>> fraction notation" (proof judgements) despite being
>>>>>>>>>>>> an expert in Platzer's differential equations area.
>>>>>>>>>>>>
>>>>>>>>>>>> Notation matters and is not widely common.
>>>>>>>>>>>>
>>>>>>>>>>>> I spoke to Professor Black (in LTI) about using natural
>>>>>>>>>>>> language in the limited task of a human-robot cooperation
>>>>>>>>>>>> in changing a car tire. I looked at the current machine
>>>>>>>>>>>> learning efforts. They are nowhere near anything but
>>>>>>>>>>>> toy systems; they take too long to train and are too fragile.
>>>>>>>>>>>> >>>>>>>>>>>> Instead I ended up using a combination of AIML [3] >>>>>>>>>>>> (Artificial Intelligence Markup Language), the ALICE >>>>>>>>>>>> Chatbot [4], Forgy's OPS5 rule based program [5], >>>>>>>>>>>> and Fahlman's SCONE [6] knowledge base. It was >>>>>>>>>>>> much less fragile in my limited domain problem. >>>>>>>>>>>> >>>>>>>>>>>> I have no idea how to extend any system to deal with >>>>>>>>>>>> even undergraduate mathematics parsing. >>>>>>>>>>>> >>>>>>>>>>>> Nor do I have any idea how I would embed LEAN >>>>>>>>>>>> knowledge into a SCONE database, although I >>>>>>>>>>>> think the combination would be useful and interesting. >>>>>>>>>>>> >>>>>>>>>>>> I do believe that, in the limited area of computational >>>>>>>>>>>> mathematics, we are capable of building robust, proven >>>>>>>>>>>> systems that are quite general and extensible. As you >>>>>>>>>>>> might have guessed I've given it a lot of thought over >>>>>>>>>>>> the years :-) >>>>>>>>>>>> >>>>>>>>>>>> A mathematical language seems to need >6 components >>>>>>>>>>>> >>>>>>>>>>>> 1) We need some sort of a specification language, possibly >>>>>>>>>>>> somewhat 'propositional' that introduces the assumptions >>>>>>>>>>>> you mentioned (ref. your discussion of numbers being >>>>>>>>>>>> abstract and ref. your discussion of relevant choice of >>>>>>>>>>>> assumptions related to a problem). >>>>>>>>>>>> >>>>>>>>>>>> This is starting to show up in the hardware area (e.g. >>>>>>>>>>>> Lamport's TLC[0]) >>>>>>>>>>>> >>>>>>>>>>>> Of course, specifications relate to proving programs >>>>>>>>>>>> and, as you recall, I got a cold reception from the >>>>>>>>>>>> LEAN community about using LEAN for program proofs. >>>>>>>>>>>> >>>>>>>>>>>> 2) We need "scaffolding". That is, we need a theory >>>>>>>>>>>> that can be reduced to some implementable form >>>>>>>>>>>> that provides concept-level structure. >>>>>>>>>>>> >>>>>>>>>>>> Axiom uses group theory for this. 
Axiom's "category" >>>>>>>>>>>> structure has "Category" things like Ring. Claiming >>>>>>>>>>>> to be a Ring brings in a lot of "Signatures" of functions >>>>>>>>>>>> you have to implement to properly be a Ring. >>>>>>>>>>>> >>>>>>>>>>>> Scaffolding provides a firm mathematical basis for >>>>>>>>>>>> design. It provides a link between the concept of a >>>>>>>>>>>> Ring and the expectations you can assume when >>>>>>>>>>>> you claim your "Domain" "is a Ring". Category >>>>>>>>>>>> theory might provide similar structural scaffolding >>>>>>>>>>>> (eventually... I'm still working on that thought garden) >>>>>>>>>>>> >>>>>>>>>>>> LEAN ought to have a textbook(s?) that structures >>>>>>>>>>>> the world around some form of mathematics. It isn't >>>>>>>>>>>> sufficient to say "undergraduate math" is the goal. >>>>>>>>>>>> There needs to be some coherent organization so >>>>>>>>>>>> people can bring ideas like Group Theory to the >>>>>>>>>>>> organization. Which brings me to ... >>>>>>>>>>>> >>>>>>>>>>>> 3) We need "spreading". That is, we need to take >>>>>>>>>>>> the various definitions and theorems in LEAN and >>>>>>>>>>>> place them in their proper place in the scaffold. >>>>>>>>>>>> >>>>>>>>>>>> For example, the Ring category needs the definitions >>>>>>>>>>>> and theorems for a Ring included in the code for the >>>>>>>>>>>> Ring category. Similarly, the Commutative category >>>>>>>>>>>> needs the definitions and theorems that underlie >>>>>>>>>>>> "commutative" included in the code. >>>>>>>>>>>> >>>>>>>>>>>> That way, when you claim to be a "Commutative Ring" >>>>>>>>>>>> you get both sets of definitions and theorems. That is, >>>>>>>>>>>> the inheritance mechanism will collect up all of the >>>>>>>>>>>> definitions and theorems and make them available >>>>>>>>>>>> for proofs. >>>>>>>>>>>> >>>>>>>>>>>> I am looking at LEAN's definitions and theorems with >>>>>>>>>>>> an eye to "spreading" them into the group scaffold of >>>>>>>>>>>> Axiom. 
>>>>>>>>>>>> >>>>>>>>>>>> 4) We need "carriers" (Axiom calls them representations, >>>>>>>>>>>> aka "REP"). REPs allow data structures to be defined >>>>>>>>>>>> independent of the implementation. >>>>>>>>>>>> >>>>>>>>>>>> For example, Axiom can construct Polynomials that >>>>>>>>>>>> have their coefficients in various forms of representation. >>>>>>>>>>>> You can define "dense" (all coefficients in a list), >>>>>>>>>>>> "sparse" (only non-zero coefficients), "recursive", etc. >>>>>>>>>>>> >>>>>>>>>>>> A "dense polynomial" and a "sparse polynomial" work >>>>>>>>>>>> exactly the same way as far as the user is concerned. >>>>>>>>>>>> They both implement the same set of functions. There >>>>>>>>>>>> is only a difference of representation for efficiency and >>>>>>>>>>>> this only affects the implementation of the functions, >>>>>>>>>>>> not their use. >>>>>>>>>>>> >>>>>>>>>>>> Axiom "got this wrong" because it didn't sufficiently >>>>>>>>>>>> separate the REP from the "Domain". I plan to fix this. >>>>>>>>>>>> >>>>>>>>>>>> LEAN ought to have a "data structures" subtree that >>>>>>>>>>>> has all of the definitions and axioms for all of the >>>>>>>>>>>> existing data structures (e.g. Red-Black trees). This >>>>>>>>>>>> would be a good undergraduate project. >>>>>>>>>>>> >>>>>>>>>>>> 5) We need "Domains" (in Axiom speak). That is, we >>>>>>>>>>>> need a box that holds all of the functions that implement >>>>>>>>>>>> a "Domain". For example, a "Polynomial Domain" would >>>>>>>>>>>> hold all of the functions for manipulating polynomials >>>>>>>>>>>> (e.g polynomial multiplication). The "Domain" box >>>>>>>>>>>> is a dependent type that: >>>>>>>>>>>> >>>>>>>>>>>> A) has an argument list of "Categories" that this "Domain" >>>>>>>>>>>> box inherits. Thus, the "Integer Domain" inherits >>>>>>>>>>>> the definitions and axioms from "Commutative" >>>>>>>>>>>> >>>>>>>>>>>> Functions in the "Domain" box can now assume >>>>>>>>>>>> and use the properties of being commutative. 
Proofs of functions in this domain can use the definitions and proofs about being commutative.

B) contains an argument that specifies the "REP" (aka, the carrier). That way you get all of the functions associated with the data structure available for use in the implementation.

Functions in the Domain box can use all of the definitions and axioms about the representation (e.g. NonNegativeIntegers are always positive).

C) contains local "spread" definitions and axioms that can be used in function proofs.

For example, a "Square Matrix" domain would have local axioms that state that the matrix is always square. Thus, functions in that box could use these additional definitions and axioms in function proofs.

D) contains local state. A "Square Matrix" domain would be constructed as a dependent type that specifies the size of the square (e.g. a 2x2 matrix would have '2' as a dependent parameter).

E) contains implementations of inherited functions.

A "Category" could have a signature for a GCD function and the "Category" could have a default implementation. However, the "Domain" could have a locally more efficient implementation which overrides the inherited implementation.

Axiom has about 20 GCD implementations that differ locally from the default in the category. They use properties known locally to be more efficient.

F) contains local function signatures.

A "Domain" gives the user more and more unique functions.
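Points A through E can be sketched as a toy in Python (all names invented for illustration; this is not SPAD and elides the proof machinery):

```python
# Toy "Domain" box: a dependent parameter (the matrix size), a default
# gcd inherited from a category, and a locally smarter override.

class GcdCategory:
    @staticmethod
    def gcd(a, b):                    # category-level default implementation
        while b:
            a, b = b, a % b
        return abs(a)                 # must handle negative inputs

def SquareMatrix(n):
    # n acts like a dependent-type parameter: each call constructs a
    # distinct domain whose local "axiom" is squareness.
    class SqMat(GcdCategory):
        size = n
        def __init__(self, rows):
            assert len(rows) == n and all(len(r) == n for r in rows)
            self.rows = rows
    return SqMat

class NonNegativeInteger(GcdCategory):
    @staticmethod
    def gcd(a, b):                    # override: may assume a, b >= 0
        while b:
            a, b = b, a % b
        return a                      # no sign fix-up needed locally

M2 = SquareMatrix(2)
m = M2([[1, 2], [3, 4]])
assert M2.size == 2 and GcdCategory.gcd(-12, 18) == 6
assert NonNegativeInteger.gcd(12, 18) == 6
```

The override mirrors the Axiom situation described above: the domain knows a local property (non-negativity) that lets it drop work the generic category implementation must do.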
The signatures have associated "pre- and post-conditions" that can be used as assumptions in the function proofs.

Some of the user-available functions are only visible if the dependent type would allow them to exist. For example, a general Matrix domain would have fewer user functions than a Square Matrix domain.

In addition, local "helper" functions need their own signatures that are not user visible.

G) the function implementation for each signature.

This is obviously where all the magic happens.

H) the proof of each function.

This is where I'm using LEAN.

Every function has a proof. That proof can use all of the definitions and axioms inherited from the "Category", "Representation", the "Domain Local", and the signature pre- and post-conditions.

I) literature links. Algorithms must contain a link to at least one literature reference. Of course, since everything I do is a Literate Program this is obviously required. Knuth said so :-)

LEAN ought to have "books" or "pamphlets" that bring together all of this information for a domain such as Square Matrices. That way a user can find all of the related ideas, available functions, and their corresponding proofs in one place.

6) User level presentation.

This is where the systems can differ significantly. Axiom and LEAN both have GCD but they use that for different purposes.
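Pre- and post-conditions attached to a signature can be sketched in Python (a toy that checks them at runtime; in the SANE design they would instead be handed to the prover as assumptions and obligations):

```python
# Attach pre/post conditions to a function signature. A prover would
# take `pre` as an assumption and `post` as the proof obligation.

def contract(pre, post):
    def wrap(f):
        def checked(*args):
            assert pre(*args), "precondition violated"
            result = f(*args)
            assert post(result, *args), "postcondition violated"
            return result
        return checked
    return wrap

@contract(pre=lambda a, b: a >= 0 and b >= 0,
          post=lambda g, a, b: a % g == 0 and b % g == 0)
def gcd(a, b):
    while b:
        a, b = b, a % b
    return a

assert gcd(12, 18) == 6
```

The same annotations also document visibility: a helper that is not user-visible still needs its own contract so proofs of callers can rely on it.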
I'm trying to connect LEAN's GCD and Axiom's GCD so there is a "computational mathematics" idea that allows the user to connect proofs and implementations.

7) Trust

Unlike everything else, computational mathematics can have proven code that gives various guarantees.

I have been working on this aspect for a while. I refer to it as trust "down to the metal". The idea is that a proof of the GCD function and the implementation of the GCD function get packaged into the ELF format (proof-carrying code). When the GCD algorithm executes on the CPU, the GCD proof is run through the LEAN proof checker on an FPGA in parallel.

(I just recently got a PYNQ Xilinx board [1] with a CPU and FPGA together. I'm trying to implement the LEAN proof checker on the FPGA.)

We are on the cusp of a revolution in computational mathematics. But the two pillars (proof and computer algebra) need to get to know each other.
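The proof-carrying-code idea, reduced to a toy Python model (no ELF packaging or FPGA here; the "checker" is a stand-in for the LEAN kernel, and every name is illustrative):

```python
# Toy proof-carrying code: ship (implementation, proof) together and
# refuse to run the implementation unless the checker accepts the proof.

def gcd(a, b):
    while b:
        a, b = b, a % b
    return a

# Stand-in for a serialized LEAN proof term and its checker.
gcd_proof = "theorem gcd_divides : ..."        # opaque proof payload
def check_proof(proof):
    return proof.startswith("theorem")         # real check runs on the FPGA

package = {"code": gcd, "proof": gcd_proof}    # what ELF would carry

def trusted_call(pkg, *args):
    if not check_proof(pkg["proof"]):
        raise RuntimeError("proof rejected; refusing to execute")
    return pkg["code"](*args)

assert trusted_call(package, 21, 14) == 7
```

In the hardware version the check and the computation run in parallel rather than sequentially, but the trust property is the same: code only counts if its proof checks.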
Tim

[0] Lamport, Leslie "Chapter on TLA+" in "Software Specification Methods"
https://www.springer.com/gp/book/9781852333539
(I no longer have CMU library access or I'd send you the book PDF)

[1] https://www.tul.com.tw/productspynq-z2.html

[2] https://www.youtube.com/watch?v=dCuZkaaou0Q

[3] "ARTIFICIAL INTELLIGENCE MARKUP LANGUAGE"
https://arxiv.org/pdf/1307.3091.pdf

[4] ALICE Chatbot
http://www.scielo.org.mx/pdf/cys/v19n4/1405-5546-cys-19-04-00625.pdf

[5] OPS5 User Manual
https://kilthub.cmu.edu/articles/journal_contribution/OPS5_user_s_manual/6608090/1

[6] Scott Fahlman "SCONE"
http://www.cs.cmu.edu/~sef/scone/

On 9/27/21, Tim Daly wrote:

I have tried to maintain a list of names of people who have helped Axiom, going all the way back to the pre-Scratchpad days. The names are listed at the beginning of each book. I also maintain a bibliography of publications I've read or that have had an indirect influence on Axiom.

Credit is "the coin of the realm". It is easy to share and wrong to ignore. It is especially damaging to those in Academia who are affected by credit and citations in publications.

Apparently I'm not the only person who feels that way.
The ACM Turing award seems to have ignored a lot of work:

Scientific Integrity, the 2021 Turing Lecture, and the 2018 Turing Award for Deep Learning
https://people.idsia.ch/~juergen/scientific-integrity-turing-award-deep-learning.html

I worked on an AI problem at IBM Research called Ketazolam (https://en.wikipedia.org/wiki/Ketazolam). The idea was to recognize and associate 3D chemical drawings with their drug counterparts. I used Rumelhart and McClelland's books. These books contained quite a few ideas that seem to be "new and innovative" among the machine learning crowd... but the books are from 1987. I don't believe I've seen these books mentioned in any recent bibliography.

https://mitpress.mit.edu/books/parallel-distributed-processing-volume-1

On 9/27/21, Tim Daly wrote:

Greg Wilson asked "How Reliable is Scientific Software?"
https://neverworkintheory.org/2021/09/25/how-reliable-is-scientific-software.html

which is a really interesting read. For example:

[Hatton1994] is now a quarter of a century old, but its conclusions are still fresh.
The authors fed the same data into nine commercial geophysical software packages and compared the results; they found that, "numerical disagreement grows at around the rate of 1% in average absolute difference per 4000 lines of implemented code, and, even worse, the nature of the disagreement is nonrandom" (i.e., the authors of different packages make similar mistakes).

On 9/26/21, Tim Daly wrote:

I should note that the latest board I've just unboxed (a PYNQ-Z2) is a Zynq Z-7020 chip from Xilinx (AMD).

What makes it interesting is that it contains 2 hard core processors and an FPGA, connected by 9 paths for communication. The processors can be run independently so there is the possibility of a parallel version of some Axiom algorithms (assuming I had the time, which I don't).

Previously either the hard (physical) processor was separate from the FPGA with minimal communication or the soft core processor had to be created in the FPGA and was much slower.

Now the two have been combined in a single chip. That means that my effort to run a proof checker on the FPGA and the algorithm on the CPU just got to the point where coordination is much easier.

Now all I have to do is figure out how to program this beast.

There is no such thing as a simple job.
Tim

On 9/26/21, Tim Daly wrote:

I'm familiar with most of the traditional approaches like Theorema. The bibliography contains most of the more interesting sources. [0]

There is a difference between traditional approaches to connecting computer algebra and proofs and my approach.

Proving an algorithm, like the GCD, in Axiom is hard. There are many GCDs (e.g. NNI vs POLY) and there are theorems and proofs passed at runtime in the arguments of the newly constructed domains. This involves a lot of dependent type theory and issues of compile time / runtime argument evaluation. The issues that arise are difficult and still being debated in the type theory community.

I am putting the definitions, theorems, and proofs (DTP) directly into the category/domain hierarchy. Each category will have the DTP specific to it. That way a commutative domain will inherit a commutative theorem and a non-commutative domain will not.

Each domain will have additional DTPs associated with the domain (e.g. NNI vs Integer) as well as any DTPs it inherits from the category hierarchy. Functions in the domain will have associated DTPs.

A function to be proven will then inherit all of the relevant DTPs. The proof will be attached to the function and both will be sent to the hardware (proof-carrying code).
The proof will be checked at runtime by a proof checker running on a field programmable gate array (FPGA), in parallel with the algorithm running on the CPU (aka "trust down to the metal"). (Note that Intel and AMD have built CPU/FPGA combined chips, currently only available in the cloud.)

I am (slowly) making progress on the research.

I have the hardware and nearly have the proof checker from LEAN running on my FPGA.

I'm in the process of spreading the DTPs from LEAN across the category/domain hierarchy.

The current Axiom build extracts all of the functions but does not yet have the DTPs.

I have to restructure the system, including the compiler and interpreter, to parse and inherit the DTPs. I have some of that code but only some of it has been pushed to the repository (volume 15), and that is rather trivial, out of date, and incomplete.

I'm clearly not smart enough to prove the Risch algorithm and its associated machinery but the needed definitions and theorems will be available to someone who wants to try.
[0] https://github.com/daly/PDFS/blob/master/bookvolbib.pdf

On 8/19/21, Tim Daly wrote:

==========================================

REVIEW (Axiom on WSL2 Windows)

So the steps to run Axiom from a Windows desktop:

1 Windows) install XMing on Windows for X11 server

    http://www.straightrunning.com/XmingNotes/

2 WSL2) Install Axiom in WSL2

    sudo apt install axiom

3 WSL2) modify /usr/bin/axiom to fix the bug:
(someone changed the axiom startup script. It won't work on WSL2. I don't know who or how to get it fixed).

    sudo emacs /usr/bin/axiom

(split the line into 3 and add quote marks)

    export SPADDEFAULT=/usr/local/axiom/mnt/linux
    export AXIOM=/usr/lib/axiom-20170501
    export "PATH=/usr/lib/axiom-20170501/bin:$PATH"

4 WSL2) create a .axiom.input file to include startup cmds:

    emacs .axiom.input

    )cd "/mnt/c/yourpath"
    )sys pwd

5 WSL2) create a "myaxiom" command that sets the DISPLAY variable and starts axiom

    emacs myaxiom

    #!/bin/bash
    export DISPLAY=:0.0
    axiom

6 WSL2) put it in the /usr/bin directory

    chmod +x myaxiom
    sudo cp myaxiom /usr/bin/myaxiom

7 WINDOWS) start the X11 server

    (XMing XLaunch Icon on your desktop)

8 WINDOWS) run myaxiom from PowerShell (this should start axiom with graphics available)

    wsl myaxiom

9 WINDOWS) make a PowerShell desktop

    https://superuser.com/questions/886951/run-powershell-script-when-you-open-powershell

Tim

On 8/13/21, Tim Daly wrote:

A great deal of thought is directed toward making the SANE version of Axiom as flexible as possible, decoupling mechanism from theory.

An interesting publication by Brian Cantwell Smith [0], "Reflection and Semantics in LISP" seems to contain interesting ideas related to our goal. Of particular interest is the ability to reason about and perform self-referential manipulations. In a dependently-typed system it seems interesting to be able to "adapt" code to handle run-time computed arguments to dependent functions.
The abstract:

"We show how a computational system can be constructed to 'reason', effectively and consequentially, about its own inferential processes. The analysis proceeds in two parts. First, we consider the general question of computational semantics, rejecting traditional approaches, and arguing that the declarative and procedural aspects of computational symbols (what they stand for, and what behaviour they engender) should be analysed independently, in order that they may be coherently related. Second, we investigate self-referential behavior in computational processes, and show how to embed an effective procedural model of a computational calculus within that calculus (a model not unlike a meta-circular interpreter, but connected to the fundamental operations of the machine in such a way as to provide, at any point in a computation, fully articulated descriptions of the state of that computation, for inspection and possible modification).
In terms of the theories that result from these investigations, we present a general architecture for procedurally reflective processes, able to shift smoothly between dealing with a given subject domain, and dealing with their own reasoning processes over that domain.

An instance of the general solution is worked out in the context of an applicative language. Specifically, we present three successive dialects of LISP: 1-LISP, a distillation of current practice, for comparison purposes; 2-LISP, a dialect constructed in terms of our rationalised semantics, in which the concept of evaluation is rejected in favour of independent notions of simplification and reference, and in which the respective categories of notation, structure, semantics, and behaviour are strictly aligned; and 3-LISP, an extension of 2-LISP endowed with reflective powers."

Axiom SANE builds dependent types on the fly. The ability to access both the reflection of the tower of algebra and the reflection of the tower of proofs at the time of construction makes the construction of a new domain or specific algorithm easier and more general.
This is of particular interest because one of the efforts is to build "all the way down to the metal". If each layer is constructed on top of previous proven layers and the new layer can "reach below" to lower layers then the tower of layers can be built without duplication.

Tim

[0] Smith, Brian Cantwell "Reflection and Semantics in LISP"
POPL '84: Proceedings of the 11th ACM SIGACT-SIGPLAN Symposium on Principles of Programming Languages, January 1984, Pages 23–35.
https://doi.org/10.1145/800017.800513

On 6/29/21, Tim Daly wrote:

Having spent time playing with hardware it is perfectly clear that future computational mathematics efforts need to adapt to using parallel processing.

I've spent a fair bit of time thinking about structuring Axiom to be parallel. Most past efforts have tried to focus on making a particular algorithm parallel, such as a matrix multiply.

But I think that it might be more effective to make each domain run in parallel. A computation crosses multiple domains so a particular computation could involve multiple parallel copies.

For example, computing the Cylindrical Algebraic Decomposition could recursively decompose the plane.
Indeed, any tree-recursive algorithm could be run in parallel "in the large" by creating new running copies of the domain for each sub-problem.

So the question becomes, how does one manage this?

A similar problem occurs in robotics where one could have multiple wheels, arms, propellers, etc. that need to act independently but in coordination.

The robot solution uses ROS2. The three ideas are ROSCORE, TOPICS with publish/subscribe, and SERVICES with request/response. These are communication paths defined between processes.

ROS2 has a "roscore" which is basically a phonebook of "topics". Any process can create or look up the current active topics, e.g.:

    rosnode list

TOPICS:

Any process can PUBLISH a topic (which is basically a typed data structure), e.g. the topic /hw with the String data "Hello World":

    rostopic pub /hw std_msgs/String "Hello, World"

Any process can SUBSCRIBE to a topic, such as /hw, and get a copy of the data, e.g.:

    rostopic echo /hw  ==> "Hello, World"

Publishers talk, subscribers listen.
SERVICES:

Any process can make a REQUEST of a SERVICE and get a RESPONSE. This is basically a remote function call.

Axiom in parallel?

So domains could run, each in its own process. Each could provide services, one for each function. Any other process could request a computation and get the result as a response. Domains could request services from other domains, either waiting for responses or continuing while the response is being computed.

The output could be sent anywhere, to a terminal, to a browser, to a network, or to another process using the publish/subscribe protocol, potentially all at the same time since there can be many subscribers to a topic.

Available domains could be dynamically added by announcing themselves as new "topics" and could be dynamically looked-up at runtime.

This structure allows function-level / domain-level parallelism. It is very effective in the robot world and I think it might be a good structuring mechanism to allow computational mathematics to take advantage of multiple processors in a disciplined fashion.

Axiom has a thousand domains and each could run on its own core.
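The topic/service pattern can be sketched as a toy in-process Python model (this is not the ROS2 API; the "roscore" here is just a dict, and the polynomial service is a placeholder operation):

```python
# Toy model of ROS2-style coordination: a "roscore" phonebook of topics,
# publish/subscribe for broadcast, and services for request/response.

roscore = {}          # topic name -> list of subscriber callbacks
services = {}         # service name -> function (one per domain function)

def subscribe(topic, callback):
    roscore.setdefault(topic, []).append(callback)

def publish(topic, data):
    for cb in roscore.get(topic, []):
        cb(data)

def advertise(name, fn):      # a "domain" announces one of its functions
    services[name] = fn

def request(name, *args):     # remote-function-call style computation
    return services[name](*args)

# A Polynomial "domain" node advertises a (toy) coefficient-wise product...
advertise("poly/mul", lambda p, q: [a * b for a, b in zip(p, q)])

# ...and any number of nodes can subscribe to published results.
received = []
subscribe("/results", received.append)
publish("/results", request("poly/mul", [1, 2], [3, 4]))
assert received == [[3, 8]]
```

Because the phonebook is dynamic, a new domain (or a BLAS foreign-function node) joins simply by advertising itself; nothing else has to change.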
In addition, notice that each domain is independent of the others. So if we want to use BLAS Fortran code, it could just be another service node. In fact, any "foreign function" could transparently cooperate in a distributed Axiom.

Another key feature is that proofs can be "by node".

Tim

On 6/5/21, Tim Daly wrote:

Axiom is based on first-class dependent types. Deciding when two types are equivalent may involve computation. See Christiansen, David Thrane "Checking Dependent Types with Normalization by Evaluation" (2019).

This puts an interesting constraint on building types. The constructed type has to export a function to decide if a given type is "equivalent" to itself.

The notion of "equivalence" might involve category ideas of natural transformation and univalence. Sigh.

That's an interesting design point.

Tim

On 5/5/21, Tim Daly wrote:

It is interesting that programmer's eyes and expectations adapt to the tools they use. For instance, I use emacs and expect to work directly in files and multiple buffers.
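The point that "deciding equivalence may involve computation" can be illustrated with a toy Python sketch (an invented representation; real normalization-by-evaluation checkers are far more involved):

```python
# Toy dependent types: a type may carry an unevaluated size expression,
# so deciding equivalence requires normalizing (i.e. computing) first.

def normalize(ty):
    kind, size = ty
    # The size component may be a thunk (a computation) rather than a
    # literal; equivalence is only decidable after evaluating it.
    return (kind, size() if callable(size) else size)

def equivalent(t1, t2):
    return normalize(t1) == normalize(t2)

Vec4a = ("Vector", 4)
Vec4b = ("Vector", lambda: 2 + 2)   # the same type, stated as a computation
Vec5  = ("Vector", 5)

assert equivalent(Vec4a, Vec4b)
assert not equivalent(Vec4a, Vec5)
```

This is the constraint mentioned above in miniature: every constructed type must come with a decision procedure, and that procedure may have to run code.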
When I try to use one of the many IDE tools I find they tend to "get in the way". I already know or can quickly find whatever they try to tell me. If you use an IDE you probably find emacs "too sparse" for programming.

Recently I've been working in a sparse programming environment. I'm exploring the question of running a proof checker in an FPGA. The FPGA development tools are painful at best and not intuitive since you SEEM to be programming but you're actually describing hardware gates, connections, and timing. This is an environment where everything happens all-at-once and all-the-time (like the circuits in your computer). It is the "assembly language of circuits". Naturally, my eyes have adapted to this rather raw level.

That said, I'm normally doing literate programming all the time. My typical file is a document which is a mixture of latex and lisp. It is something of a shock to return to that world. It is clear why people who program in Python find lisp to be a "sea of parens". Yet as a lisp programmer, I don't even see the parens, just code.
It takes a few minutes in a literate document to adapt vision to see the latex / lisp combination as natural. The latex markup, like the lisp parens, eventually just disappears. What remains is just lisp and natural language text.

This seems painful at first but eyes quickly adapt. The upside is that there is always a "finished" document that describes the state of the code. The overhead of writing a paragraph to describe a new function or change a paragraph to describe the changed function is very small.

Using a Makefile I latex the document to generate a current PDF and then I extract, load, and execute the code. This loop catches errors in both the latex and the source code. Keeping an open file in my pdf viewer shows all of the changes in the document after every run of make. That way I can edit the book as easily as the code.

Ultimately I find that writing the book while writing the code is more productive. I don't have to remember why I wrote something since the explanation is already there.

We all have our own way of programming and our own tools. But I find literate programming to be a real advance over IDE style programming and "raw code" programming.
Tim

On 2/27/21, Tim Daly wrote:

The systems I use have the interesting property of "living within the
compiler".

Lisp, Forth, Emacs, and other systems that present themselves through the
Read-Eval-Print-Loop (REPL) allow the ability to deeply interact with the
system, shaping it to your need.

My current thread of study is software architecture. See
https://www.youtube.com/watch?v=W2hagw1VhhI&feature=youtu.be
and https://www.georgefairbanks.com/videos/

My current thinking on SANE involves the ability to dynamically define
categories, representations, and functions along with "composition
functions" that permit choosing a combination at the time of use.

You might want a domain for handling polynomials. There are a lot of
choices, depending on your use case. You might want different
representations. For example, you might want dense, sparse, recursive, or
"machine compatible fixnums" (e.g. to interface with C code). If these
don't exist it ought to be possible to create them. Such "lego-like"
building blocks require careful thought about creating "fully factored"
objects.
Given that goal, the traditional barrier of "compiler" vs "interpreter"
does not seem useful. It is better to "live within the compiler", which
gives the ability to define new things "on the fly".

Of course, the SANE compiler is going to want an associated proof of the
functions you create along with the other parts such as its category
hierarchy and representation properties.

There is no such thing as a simple job. :-)

Tim

On 2/18/21, Tim Daly wrote:

The Axiom SANE compiler / interpreter has a few design points.

1) It needs to mix interpreted and compiled code in the same function.
SANE allows dynamic construction of code as well as dynamic type
construction at runtime. Both of these can occur in a runtime object.
So there is potentially a mixture of interpreted and compiled code.

2) It needs to perform type resolution at compile time without overhead
where possible. Since this is not always possible there needs to be
a "prefix thunk" that will perform the resolution.
Trivially, for example, if we have a + function we need to type-resolve
the arguments.

However, if we can prove at compile time that the types are both
bounded-NNI and the result is bounded-NNI (i.e. fixnum in lisp) then we
can inline a call to + at runtime. If not, we might have + applied to NNI
and POLY(FLOAT), which requires a thunk to resolve types. The thunk could
even "specialize and compile" the code before executing it.

It turns out that the Forth "threaded-interpreted" language model provides
an efficient and effective way to do this.[0] Type resolution can be
"inserted" in intermediate thunks. The model also supports dynamic
overloading and tail recursion.

Combining high-level CLOS code with low-level threading gives an easy to
understand and robust design.

Tim

[0] Loeliger, R.G.
"Threaded Interpretive Languages" (1981) ISBN 0-07-038360-X

On 2/5/21, Tim Daly wrote:

I've worked hard to make Axiom depend on almost no other tools so that it
would not get caught by "code rot" of libraries.

However, I'm also trying to make the new SANE version much easier to
understand and debug. To that end I've been experimenting with some ideas.

It should be possible to view source code, of course. But the source code
is not the only, nor possibly the best, representation of the ideas. In
particular, source code gets compiled into data structures. In Axiom these
data structures really are a graph of related structures.

For example, looking at the gcd function from NNI, there is the
representation of the gcd function itself. But there is also a structure
that is the REP (and, in the new system, is separate from the domain).

Further, there are associated specification and proof structures.
Even further, the domain inherits the category structures, and from those
it inherits logical axioms and definitions through the proof structure.

Clearly the gcd function is a node in a much larger graph structure.

When trying to decide why code won't compile it would be useful to be able
to see and walk these structures. I've thought about using the browser but
browsers are too weak. Either everything has to be "in a single tab to
show the graph" or "the nodes of the graph are in different tabs". Plus,
constructing dynamic graphs that change as the software changes (e.g. by
loading a new spad file or creating a new function) represents the huge
problem of keeping the browser "in sync with the Axiom workspace". So
something more dynamic and embedded is needed.

Axiom source gets compiled into CLOS data structures.
Each of these new SANE structures has an associated surface
representation, so they can be presented in user-friendly form.

Also, since Axiom is literate software, it should be possible to look at
the code in its literate form with the surrounding explanation.

Essentially we'd like to have the ability to "deep dive" into the Axiom
workspace, not only for debugging, but also for understanding what
functions are used, where they come from, what they inherit, and how they
are used in a computation.

To that end I'm looking at using McClim, a lisp windowing system. Since
the McClim windows would be part of the lisp image, they have access to
display (and modify) the Axiom workspace at all times.

The only hesitation is that McClim uses quicklisp and drags in a lot of
other subsystems. It's all lisp, of course.

These ideas aren't new.
They were available on Symbolics machines, a truly productive platform and
one I sorely miss.

Tim

On 1/19/21, Tim Daly wrote:

Also of interest is the talk
"The Unreasonable Effectiveness of Dynamic Typing for Practical Programs"
https://vimeo.com/74354480
which questions whether static typing really has any benefit.

Tim

On 1/19/21, Tim Daly wrote:

Peter Naur wrote an article of interest:
http://pages.cs.wisc.edu/~remzi/Naur.pdf

In particular, it mirrors my notion that Axiom needs to embrace literate
programming so that the "theory of the problem" is presented as well as
the "theory of the solution". I quote the introduction:

This article is, to my mind, the most accurate account of what goes on in
designing and coding a program. I refer to it regularly when discussing
how much documentation to create, how to pass along tacit knowledge, and
the value of the XP's metaphor-setting exercise.
It also provides a way to examine a methodology's economic structure.

In the article, which follows, note that the quality of the designing
programmer's work is related to the quality of the match between his
theory of the problem and his theory of the solution. Note that the
quality of a later programmer's work is related to the match between his
theories and the previous programmer's theories.

Using Naur's ideas, the designer's job is not to pass along "the design"
but to pass along "the theories" driving the design. The latter goal is
more useful and more appropriate. It also highlights that knowledge of the
theory is tacit in the owning, and so passing along the theory requires
passing along both explicit and tacit knowledge.

Tim
The github lockout continues...

I'm spending some time adding examples to source code.

Any function can have ++X comments added. These will
appear as examples when the function is shown with )display.
For example, in PermutationGroup there is a function
'strongGenerators' defined as:

  strongGenerators : % -> L PERM S
    ++ strongGenerators(gp) returns strong generators for
    ++ the group gp.
    ++
    ++X S:List(Integer) := [1,2,3,4]
    ++X G := symmetricGroup(S)
    ++X strongGenerators(G)


Later, in the interpreter we see:

)d op strongGenerators

  There is one exposed function called strongGenerators :
      [1] PermutationGroup(D2) -> List(Permutation(D2)) from
              PermutationGroup(D2)
              if D2 has SETCAT

  Examples of strongGenerators from PermutationGroup

  S:List(Integer) := [1,2,3,4]
  G := symmetricGroup(S)
  strongGenerators(G)




This will show a working example for functions that the
user can copy and use. It is especially useful to show how
to construct working arguments.

These "example" functions are run at build time when
the make command looks like:

    make TESTSET=alltests

I hope to add this documentation to all Axiom functions.

In addition, the plan is to add these function calls to the
usual test documentation. That means that all of these examples
will be run and show their output in the final distribution
(mnt/ubuntu/doc/src/input/*.dvi files) so the user can view
the expected output.

Tim



On Fri, Feb 25, 2022 at 6:05 PM Tim Daly <axiomcas@gmail.com> wrote:

It turns out that creating SPAD-looking output is trivial
in Common Lisp. Each class can have a custom print
routine, so signatures and ++ comments can each be
printed with their own format.

To ensure that I maintain compatibility I'll be printing
the categories and domains so they look like SPAD code,
at least until I get the proof technology integrated. I will
probably specialize the proof printers to look like the
original LEAN proof syntax.

Internally, however, it will all be Common Lisp.

Common Lisp makes so many desirable features easy.
It is possible to trace dynamically at any level. One could
even write a trace that showed how Axiom arrived at the
solution. Any domain could have special case output syntax
without affecting any other domain, so one could write a
tree-like output for proofs. Using greek characters is trivial,
so the input and output notation is more mathematical.
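As a small sketch of the custom-print idea (the class and slot names here
are hypothetical, not actual SANE code), a CLOS print-object method can
render a signature object in SPAD-style syntax:

```lisp
;; Hypothetical sketch: a class holding a SPAD-style signature and a
;; print-object method that renders it in SPAD syntax.
(defclass sane-signature ()
  ((name   :initarg :name   :reader sig-name)
   (source :initarg :source :reader sig-source)
   (target :initarg :target :reader sig-target)))

(defmethod print-object ((sig sane-signature) stream)
  ;; Print as e.g.  gcd : (%, %) -> %
  (format stream "~a : (~{~a~^, ~}) -> ~a"
          (sig-name sig) (sig-source sig) (sig-target sig)))

;; (print (make-instance 'sane-signature
;;          :name "gcd" :source '("%" "%") :target "%"))
;; prints: gcd : (%, %) -> %
```

A similar method on a proof class could render LEAN-style syntax, keeping
the surface forms independent of the internal Common Lisp representation.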

T= im


On Thu, Feb 24, 2022 at 10:24 AM Tim Daly <axiomcas@gmail.com> wrote:

Axiom's SPAD code compiles to Common Lisp.
The AKCL version of Common Lisp compiles to C.
Three languages and two compilers is a lot to maintain.
Further, there are very few people able to write SPAD
and even fewer people able to maintain it.

I've decided that the SANE version of Axiom will be
implemented in pure Common Lisp. I've outlined Axiom's
category / type hierarchy in the Common Lisp Object
System (CLOS). I am now experimenting with re-writing
the functions in Common Lisp.
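A minimal sketch of what outlining the hierarchy in CLOS could look like
(all class names here are invented for illustration; the real SANE
hierarchy is far richer): categories become classes, domains inherit from
the categories they claim, and operations become generic functions.

```lisp
;; Hypothetical sketch of a category hierarchy in CLOS.
;; Categories are classes; a domain inherits from its categories.
(defclass set-category () ())
(defclass abelian-group (set-category) ())
(defclass ring (abelian-group) ())

;; A domain carries its representation as a slot.
(defclass integer-domain (ring)
  ((rep :initform 'fixnum :reader rep)))

;; Operations are generic functions dispatched on the domain.
(defgeneric plus (domain a b))
(defmethod plus ((d integer-domain) a b) (+ a b))

;; Category membership is then just CLOS type membership:
;; (typep (make-instance 'integer-domain) 'ring)  => T
```

This makes inheritance of operations (and, later, of theorems attached to
categories) fall out of ordinary CLOS dispatch.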

This will have several long-term effects. It simplifies
the implementation issues. SPAD code blocks a lot of
actions and optimizations that Common Lisp provides.
The Common Lisp language has many more people
who can read, modify, and maintain code. It provides for
interoperability with other Common Lisp projects with
no effort. Common Lisp is an international standard,
which ensures that the code will continue to run.

The input / output mathematics will remain the same.
Indeed, with the new generalizations for first-class
dependent types it will be more general.

This is a big change, similar to eliminating BOOT code
and moving to Literate Programming. This will provide a
better platform for future research work. Current research
is focused on merging Axiom's computer algebra mathematics
with Lean's proof language. The goal is to create a system for
"computational mathematics".

Research is the whole point of Axiom.

Tim



On Sat, Jan 22, 2022 at 9:16 PM Tim Daly <axiomcas@gmail.com> wrote:

I can't stress enough how important it is to listen to Hamming's talk.

Axiom will begin to die the day I stop working on it.

However, proving Axiom correct "down to the metal" is fundamental.
It will merge computer algebra and logic, spawning years of new
research.

Work on fundamental problems.

Tim


On Thu, Dec 30, 2021 at 6:46 PM Tim Daly <axiomcas@gmail.com> wrote:

One of the interesting questions when obtaining a result
is "what functions were called and what was their return value?"
Otherwise known as the "show your work" idea.

There is an idea called the "writer monad" [0], usually
implemented to facilitate logging. We can exploit this
idea to provide "show your work" capability. Each function
can provide this information inside the monad, enabling the
question to be answered at any time.
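A minimal sketch of the idea in Common Lisp (just the behavior the writer
monad provides, without the full monadic machinery): each function returns
its value paired with an accumulated work log.

```lisp
;; Sketch: every "traced" function returns (values result log), where
;; log is a list of (fn args => result) records. This is the writer
;; monad idea: a computation paired with accumulated output.
(defun logged-gcd (a b)
  (if (zerop b)
      (values a (list (list 'gcd a b '=> a)))
      (multiple-value-bind (g log) (logged-gcd b (mod a b))
        ;; prepend this call's record to the accumulated log
        (values g (cons (list 'gcd a b '=> g) log)))))

;; (logged-gcd 12 8)
;; => 4, with log ((GCD 12 8 => 4) (GCD 8 4 => 4) (GCD 4 0 => 4))
```

The caller can ignore the log entirely or display it as the "work shown"
for the computation.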

For those unfamiliar with the monad idea, the best explanation
I've found is this video [1].

Tim

[0] Deriving the writer monad from first principles
https://williamyaoh.com/posts/2020-07-26-deriving-writer-monad.html

[1] The Absolute Best Intro to Monads for Software Engineers
On Mon, Dec 13, 2021 at 12:30 AM Tim Daly <axiomcas@gmail.com> wrote:

...(snip)...

Common Lisp has an "open compiler". That allows the ability
to deeply modify compiler behavior using compiler macros
and macros in general. CLOS takes advantage of this to add
typed behavior into the compiler in a way that ALLOWS strict
typing such as found in constructive type theory and ML.
Judgments, a la Crary, are front-and-center.
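For example (a generic sketch, not SANE code), a compiler macro can
rewrite a call when the compiler can see enough type information, leaving
the slow dispatch path for everything else:

```lisp
;; Sketch: a "plus" that normally resolves types at runtime, with a
;; compiler macro that rewrites the call to a raw fixnum + when both
;; arguments are literal integers at compile time.
(defun typed-plus (a b)
  (+ a b))  ; stands in for full runtime type resolution

(define-compiler-macro typed-plus (&whole form a b)
  (if (and (integerp a) (integerp b))
      `(the fixnum (+ ,a ,b))   ; resolved at compile time
      form))                    ; fall back to the runtime path
```

This is the "open compiler" at work: the rewrite is ordinary user code,
not a change to the compiler itself.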

Whether you USE the discipline afforded is the real question.

Indeed, the Axiom research struggle is essentially one of how
to have a disciplined use of first-class dependent types. The
struggle raises issues of, for example, compiling a dependent
type whose argument is recursive in the compiled type. Since
the new type is first-class it can be constructed at what you
improperly call "run-time". However, it appears that the recursive
type may have to call the compiler at each recursion to generate
the next step, since in some cases it cannot generate "closed code".

I am embedding proofs (in the LEAN language) into the type
hierarchy so that theorems, which depend on the type hierarchy,
are correctly inherited. The compiler has to check the proofs of
functions at compile time using these. Hacking up nonsense just
won't cut it. Think of the problem of embedding LEAN proofs in
ML or ML in LEAN. (Actually, Jeremy Avigad might find that
research interesting.)

So Matrix(3,3,Float) has inverses (assuming Float is a
field (cough)). The type inherits this theorem and proofs of
functions can use this. But Matrix(3,4,Integer) does not have
inverses, so the proofs cannot use this. The type hierarchy has
to ensure that the proper theorems get inherited.

Making proof technology work at compile time is hard.
(Worse yet, LEAN is a moving target. Sigh.)



On Thu, Nov 25, 2021 at 9:43 AM Tim Daly <axiomcas@gmail.com> wrote:

As you know, I've been re-architecting Axiom to use first-class
dependent types and proving the algorithms correct. For example,
the GCD of natural numbers or the GCD of polynomials.

The idea involves "boxing up" the proof with the algorithm (aka
proof carrying code) in the ELF file (under a crypto hash so it
can't be changed).

Once the code is running on the CPU, the proof is run in parallel
on the field programmable gate array (FPGA). Intel data center
servers have CPUs with built-in FPGAs these days.

There is a bit of a disconnect, though. The GCD code is compiled
machine code but the proof is LEAN-level.

What would be ideal is if the compiler not only compiled the GCD
code to machine code, it also compiled the proof to "machine code".
That is, for each machine instruction, the FPGA proof checker
would ensure that the proof was not violated at the individual
instruction level.

What does it mean to "compile a proof to the machine code level"?

The Milawa effort (Myre14.pdf) does incremental proofs in layers.
To quote from the article [0]:

   We begin with a simple proof checker, call it A, which is short
   enough to verify by the ``social process'' of mathematics -- and
   more recently with a theorem prover for a more expressive logic.

   We then develop a series of increasingly powerful proof checkers,
   call them B, C, D, and so on. We show each of these programs only
   accepts the same formulas as A, using A to verify B, and B to verify
   C, and so on. Then, since we trust A, and A says B is trustworthy, we
   can trust B. Then, since we trust B, and B says C is trustworthy, we
   can trust C.

This gives a technique for "compiling the proof" down to the machine
code level. Ideally, the compiler would have judgments for each step of
the compilation so that each compile step has a justification. I don't
know of any compiler that does this yet. (References welcome.)

At the machine code level, there are techniques that would allow
the FPGA proof to "step in sequence" with the executing code.
Some work has been done on using "Hoare Logic for Realistically
Modelled Machine Code" (paper attached, Myre07a.pdf) and
"Decompilation into Logic -- Improved" (Myre12a.pdf).

So the game is to construct a GCD over some type (Nats, Polys, etc.;
Axiom has 22), compile the dependent type GCD to machine code.
In parallel, the proof of the code is compiled to machine code. The
pair is sent to the CPU/FPGA and, while the algorithm runs, the FPGA
ensures the proof is not violated, instruction by instruction.

(I'm ignoring machine architecture issues such as pipelining,
out-of-order execution, branch prediction, and other machine-level
things to ponder. I'm looking at the RISC-V Verilog details by various
people to understand better but it is still a "misty fog" for me.)

On Thu, Nov 25, 2021 at 6:05 AM Tim Daly <axiomcas@gmail.com> wrote:

The result is proven code "down to the metal".

Tim




On Sat, Nov 13, 2021 at 5:28 PM Tim Daly <axiomcas@gmail.com> wrote:

Full support for general, first-class dependent types requires
some changes to the Axiom design. That implies some language
design questions.

Given that mathematics is such a general subject with a lot of
"local" notation and ideas (witness logical judgment notation),
careful thought is needed to design a language that is able to
handle a wide range.

Normally language design is a two-level process. The language
designer creates a language and then an implementation. Various
design choices affect the final language.

There is "The Metaobject Protocol" (MOP),
which encourages a three-level process. The language designer
works at a metalevel to design a family of languages, then the
language specializations, then the implementation. A MOP design
allows the language user to optimize the language to their problem.
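As a tiny illustration of the metalevel (hypothetical names; note that
some Common Lisp implementations additionally require a
validate-superclass method for user metaclasses to compile), a metaclass
lets the designer change how a whole family of classes behaves without
touching the classes themselves:

```lisp
;; Sketch of metalevel control: a metaclass that counts how many
;; instances of its classes are created. (Portability caveat: some
;; Lisps require a validate-superclass method as well.)
(defclass counted-class (standard-class)
  ((count :initform 0 :accessor instance-count)))

(defmethod make-instance :after ((class counted-class) &rest initargs)
  (declare (ignore initargs))
  (incf (instance-count class)))

;; An ordinary class opts in by naming the metaclass.
(defclass point ()
  ((x :initarg :x) (y :initarg :y))
  (:metaclass counted-class))

;; (make-instance 'point :x 1 :y 2)
;; (instance-count (find-class 'point))  => 1
```

The same three-level move lets a language family share one metalevel
while each specialization tunes its own behavior.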
A simple paper on the subject is "Metaobject Protocols".

Tim


On Mon, Oct 25, 2021 at 7:42 PM Tim Daly <axiomcas@gmail.com> wrote:

I have a separate thread of research on Self-Replicating Systems
(ref: Kinematics of Self Reproducing Machines),

which led to watching "Strange Dreams of Stranger Loops" by Will Byrd
https://www.youtube.com/watch?v=AffW-7ika0E

Will referenced a PhD thesis by Jon Doyle,
"A Model for Deliberation, Action, and Introspection".

I also read the thesis by J.C.G. Sturdy,
"A Lisp through the Looking Glass".

Self-replication requires the ability to manipulate your own
representation in such a way that changes to that representation
will change behavior.

This leads to two thoughts in the SANE research.

First, "Declarative Representation". That is, most of the things
about the representation should be declarative rather than
procedural. Applying this idea as much as possible makes it
easier to understand and manipulate.

Second, "Explicit Call Stack". Function calls form an implicit
call stack. This can usually be displayed in a running lisp system.
However, having the call stack explicitly available would mean
that a system could "introspect" at the first-class level.

These two ideas would make it easy, for example, to let the
system "show the work". One of the normal complaints is that
a system presents an answer but there is no way to know how
that answer was derived. These two ideas make it possible to
understand, display, and even post-answer manipulate
the intermediate steps.
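A minimal sketch of the "explicit call stack" idea (names invented for
illustration): instead of relying only on the implicit Lisp stack, each
call pushes a first-class record that the system can inspect afterwards.

```lisp
;; Sketch: an explicit, first-class call record list. Each traced
;; call pushes a record, so the "work" is available for introspection
;; after (or during) the computation.
(defvar *call-stack* '())

(defmacro with-call (name args &body body)
  ;; record the call before evaluating its body
  `(progn
     (push (list ,name ,@args) *call-stack*)
     ,@body))

(defun fact (n)
  (with-call 'fact (n)
    (if (<= n 1) 1 (* n (fact (- n 1))))))

;; After (fact 3), *call-stack* holds
;; ((FACT 1) (FACT 2) (FACT 3)) -- the steps, as first-class data.
```

A post-answer tool can then walk, display, or manipulate those records,
which is exactly the "show the work" capability.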

Having the intermediate steps also allows proofs to be
inserted in a step-by-step fashion. This aids the effort to
have proofs run in parallel with computation at the hardware
level.

Tim


On Thu, Oct 21, 2021 at 9:50 AM Tim Daly <axiomcas@gmail.com&= gt; wrote:
So the current struggle involves the categories in Axiom.<= /div>

The categories and domains constructed using categ= ories
are dependent types. When are dependent types "equal&q= uot;?
Well, hummmm, that depends on the arguments to the
constructor.

But in order to decide(?) equality = we have to evaluate
the arguments (which themselves can be depend= ent types).
Indeed, we may, and in general, we must evaluate the =
arguments at compile time (well, "construction time&quo= t; as
there isn't really a compiler / interpreter separation = anymore.)

That raises the question of what "equality" means. This
is not simply a "set equality" relation. It falls into the
infinite-groupoid of homotopy type theory. In general
it appears that deciding category / domain equivalence
might force us to climb the type hierarchy.
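A toy Python sketch of the first step of this problem (hypothetical names, nothing like Axiom's actual mechanism): even asking whether two constructed types are "equal" forces evaluation of the constructor arguments.

```python
# Sketch: equality of dependent types forces "construction time"
# evaluation of the constructor arguments.

class SquareMatrixType:
    def __init__(self, size_expr):
        # the argument may be a value or an unevaluated expression
        self.size_expr = size_expr

    def size(self):
        # evaluate the dependent argument on demand
        if callable(self.size_expr):
            return self.size_expr()
        return self.size_expr

    def __eq__(self, other):
        # deciding type "equality" requires evaluating both arguments
        return self.size() == other.size()

a = SquareMatrixType(3)
b = SquareMatrixType(lambda: 1 + 2)  # argument must be computed
print(a == b)  # True: evaluation decides the equality
```

This only captures the computational cost; it says nothing about the deeper groupoid / univalence questions, which is exactly where the difficulty lies.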

Beyond that, there is the question of "which proof"
applies to the resulting object. Proofs depend on their
assumptions which might be different for different
constructions. As yet I have no clue how to "index"
proofs based on their assumptions, nor how to
connect these assumptions to the groupoid structure.

My brain hurts.

Tim


On Mon, Oct 18, 2021 at 2:00 AM Tim Daly <axiomcas@gmail.com> wrote:
"Birthing Computational Mathematics"

The Axiom SANE project is difficult at a very fundamental
level. The title "SANE" was chosen due to the various
words found in a thesaurus... "rational", "coherent",
"judicious" and "sound".

These are very high level, amorphous ideas. But so is
the design of SANE. Breaking away from tradition in
computer algebra, type theory, and proof assistants
is very difficult. Ideas tend to fall into standard jargon
which limits both the frame of thinking (e.g. dependent
types) and the content (e.g. notation).

Questioning both frame and content is very difficult.
It is hard to even recognize when they are accepted
"by default" rather than "by choice". What does the idea
of "power tools" mean in a primitive, hand labor culture?

Christopher Alexander [0] addresses this problem in
a lot of his writing. Specifically, in his book "Notes on
the Synthesis of Form", in his chapter 5 "The Selfconscious
Process", he addresses this problem directly. This is a
"must read" book.

Unlike building design and construction, however, there
are almost no constraints to use as guides. Alexander
quotes Plato's Phaedrus:

   "First, the taking in of scattered particulars under
    one Idea, so that everyone understands what is being
    talked about ... Second, the separation of the Idea
    into parts, by dividing it at the joints, as nature
    directs, not breaking any limb in half as a bad
    carver might."

Lisp, which has been called "clay for the mind", can
build virtually anything that can be thought. The
"joints" are also "of one's choosing" so one is
both carver and "nature".

Clearly the problem is no longer "the tools".
*I* am the problem constraining the solution.
Birthing this "new thing" is slow, difficult, and
uncertain at best.

Tim

[0] Alexander, Christopher "Notes on the Synthesis
of Form" Harvard University Press 1964
ISBN 0-674-62751-2


On Sun, Oct 10, 2021 at 4:40 PM Tim Daly <axiomcas@gmail.com> wrote:
Re: writing a paper... I'm not connected to Academia
so anything I'd write would never make it into print.

"Language level parsing" is still a long way off. The talk
by Guy Steele [2] highlights some of the problems we
currently face using mathematical metanotation.

For example, a professor I know at CCNY (City College
of New York) didn't understand Platzer's "funny
fraction notation" (proof judgements) despite being
an expert in Platzer's differential equations area.

Notation matters and is not widely shared.

I spoke to Professor Black (in LTI) about using natural
language in the limited task of a human-robot cooperation
in changing a car tire. I looked at the current machine
learning efforts. They are nowhere near anything but
toy systems; they take too long to train and are too fragile.

Instead I ended up using a combination of AIML [3]
(Artificial Intelligence Markup Language), the ALICE
Chatbot [4], Forgy's OPS5 rule based program [5],
and Fahlman's SCONE [6] knowledge base. It was
much less fragile in my limited domain problem.

I have no idea how to extend any system to deal with
even undergraduate mathematics parsing.

Nor do I have any idea how I would embed LEAN
knowledge into a SCONE database, although I
think the combination would be useful and interesting.

I do believe that, in the limited area of computational
mathematics, we are capable of building robust, proven
systems that are quite general and extensible. As you
might have guessed I've given it a lot of thought over
the years :-)

A mathematical language seems to need at least these seven components:

1) We need some sort of a specification language, possibly
somewhat 'propositional' that introduces the assumptions
you mentioned (ref. your discussion of numbers being
abstract and ref. your discussion of relevant choice of
assumptions related to a problem).

This is starting to show up in the hardware area (e.g.
Lamport's TLA+ [0]).

Of course, specifications relate to proving programs
and, as you recall, I got a cold reception from the
LEAN community about using LEAN for program proofs.

2) We need "scaffolding". That is, we need a theory
that can be reduced to some implementable form
that provides concept-level structure.

Axiom uses group theory for this. Axiom's "category"
structure has "Category" things like Ring. Claiming
to be a Ring brings in a lot of "Signatures" of functions
you have to implement to properly be a Ring.

Scaffolding provides a firm mathematical basis for
design. It provides a link between the concept of a
Ring and the expectations you can assume when
you claim your "Domain" "is a Ring". Category
theory might provide similar structural scaffolding
(eventually... I'm still working on that thought garden).

LEAN ought to have a textbook(s?) that structures
the world around some form of mathematics. It isn't
sufficient to say "undergraduate math" is the goal.
There needs to be some coherent organization so
people can bring ideas like Group Theory to the
organization. Which brings me to ...

3) We need "spreading". That is, we need to take
the various definitions and theorems in LEAN and
place them in their proper place in the scaffold.

For example, the Ring category needs the definitions
and theorems for a Ring included in the code for the
Ring category. Similarly, the Commutative category
needs the definitions and theorems that underlie
"commutative" included in the code.

That way, when you claim to be a "Commutative Ring"
you get both sets of definitions and theorems. That is,
the inheritance mechanism will collect up all of the
definitions and theorems and make them available
for proofs.
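As a toy sketch of this collection-by-inheritance idea (Python, with made-up theorem names; Axiom's actual category mechanism is far richer), claiming "CommutativeRing" gathers the theorems of both parents:

```python
# Sketch: categories carry their own definitions and theorems;
# inheritance collects everything up the hierarchy for use in proofs.

class Ring:
    theorems = ["distributivity", "additive_inverse"]

class Commutative:
    theorems = ["mul_comm"]

class CommutativeRing(Ring, Commutative):
    theorems = []          # nothing local; everything is inherited

def collected_theorems(cls):
    # walk the category (class) hierarchy, gathering all theorems
    seen = []
    for base in cls.__mro__:
        for t in base.__dict__.get("theorems", []):
            if t not in seen:
                seen.append(t)
    return seen

print(collected_theorems(CommutativeRing))
# ['distributivity', 'additive_inverse', 'mul_comm']
```

A proof attached to a CommutativeRing function could then draw on the whole collected set as assumptions.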

I am looking at LEAN's definitions and theorems with
an eye to "spreading" them into the group scaffold of
Axiom.

4) We need "carriers" (Axiom calls them representations,
aka "REP"). REPs allow data structures to be defined
independent of the implementation.

For example, Axiom can construct Polynomials that
have their coefficients in various forms of representation.
You can define "dense" (all coefficients in a list),
"sparse" (only non-zero coefficients), "recursive", etc.

A "dense polynomial" and a "sparse polynomial" work
exactly the same way as far as the user is concerned.
They both implement the same set of functions. There
is only a difference of representation for efficiency and
this only affects the implementation of the functions,
not their use.
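The dense/sparse idea can be sketched in Python (illustrative only; Axiom's actual REP mechanism differs): two representations export the same interface, so client code cannot tell them apart.

```python
# Sketch: two representations (REPs) behind one interface.

class DensePoly:
    def __init__(self, coeffs):       # all coefficients, degree 0 first
        self.coeffs = list(coeffs)
    def coefficient(self, n):
        return self.coeffs[n] if n < len(self.coeffs) else 0

class SparsePoly:
    def __init__(self, terms):        # {degree: coefficient}, nonzero only
        self.terms = dict(terms)
    def coefficient(self, n):
        return self.terms.get(n, 0)

def evaluate(p, degree, x):
    # client code: works identically for either representation
    return sum(p.coefficient(n) * x**n for n in range(degree + 1))

d = DensePoly([1, 0, 3])              # 1 + 3x^2, dense REP
s = SparsePoly({0: 1, 2: 3})          # same polynomial, sparse REP
print(evaluate(d, 2, 2), evaluate(s, 2, 2))  # 13 13
```

Only efficiency differs: the sparse form wins when most coefficients are zero, while the interface, and therefore the user's view, is unchanged.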

Axiom "got this wrong" because it didn't sufficiently
separate the REP from the "Domain". I plan to fix this.

LEAN ought to have a "data structures" subtree that
has all of the definitions and axioms for all of the
existing data structures (e.g. Red-Black trees). This
would be a good undergraduate project.

5) We need "Domains" (in Axiom speak). That is, we
need a box that holds all of the functions that implement
a "Domain". For example, a "Polynomial Domain" would
hold all of the functions for manipulating polynomials
(e.g. polynomial multiplication). The "Domain" box
is a dependent type that:

  A) has an argument list of "Categories" that this "Domain"
     box inherits. Thus, the "Integer Domain" inherits
     the definitions and axioms from "Commutative".

     Functions in the "Domain" box can now assume
     and use the properties of being commutative. Proofs
     of functions in this domain can use the definitions
     and proofs about being commutative.

  B) contains an argument that specifies the "REP"
     (aka, the carrier). That way you get all of the
     functions associated with the data structure
     available for use in the implementation.

     Functions in the Domain box can use all of
     the definitions and axioms about the representation
     (e.g. NonNegativeIntegers are always positive).

  C) contains local "spread" definitions and axioms
     that can be used in function proofs.

     For example, a "Square Matrix" domain would
     have local axioms that state that the matrix is
     always square. Thus, functions in that box could
     use these additional definitions and axioms in
     function proofs.

  D) contains local state. A "Square Matrix" domain
     would be constructed as a dependent type that
     specified the size of the square (e.g. a 2x2
     matrix would have '2' as a dependent parameter).

  E) contains implementations of inherited functions.

     A "Category" could have a signature for a GCD
     function and the "Category" could have a default
     implementation. However, the "Domain" could
     have a locally more efficient implementation which
     overrides the inherited implementation.

     Axiom has about 20 GCD implementations that
     differ locally from the default in the category. They
     use properties known locally to be more efficient.

  F) contains local function signatures.

     A "Domain" gives the user more and more unique
     functions. The signatures have associated
     "pre- and post-conditions" that can be used
     as assumptions in the function proofs.

     Some of the user-available functions are only
     visible if the dependent type would allow them
     to exist. For example, a general Matrix domain
     would have fewer user functions than a Square
     Matrix domain.

     In addition, local "helper" functions need their
     own signatures that are not user visible.

  G) the function implementation for each signature.

     This is obviously where all the magic happens.

  H) the proof of each function.

     This is where I'm using LEAN.

     Every function has a proof. That proof can use
     all of the definitions and axioms inherited from
     the "Category", "Representation", the "Domain
     Local", and the signature pre- and post-
     conditions.

  I) literature links. Algorithms must contain a link
     to at least one literature reference. Of course,
     since everything I do is a Literate Program
     this is obviously required. Knuth said so :-)
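Points A) and E) above can be sketched in Python (hypothetical names only; Axiom expresses this with categories and domains, not classes): a category provides a signature with a default GCD, and a domain may override it with a locally better implementation.

```python
# Sketch: a "Category" carries a signature plus a default
# implementation; a "Domain" inherits it and may override it
# with a locally more efficient routine.
import math

class GcdCategory:
    def gcd(self, a, b):
        # default Euclidean implementation, inherited by any domain
        while b:
            a, b = b, a % b
        return a

class IntegerDomain(GcdCategory):
    pass                         # uses the inherited default

class FastIntegerDomain(GcdCategory):
    def gcd(self, a, b):
        # locally overriding the default with a library routine
        return math.gcd(a, b)

print(IntegerDomain().gcd(12, 18), FastIntegerDomain().gcd(12, 18))
# 6 6
```

Both domains satisfy the same signature and the same theorems about GCD; only the local implementation differs, which mirrors Axiom's roughly 20 domain-specific GCDs.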


LEAN ought to have "books" or "pamphlets" that
bring together all of this information for a domain
such as Square Matrices. That way a user can
find all of the related ideas, available functions,
and their corresponding proofs in one place.

6) User level presentation.

    This is where the systems can differ significantly.
    Axiom and LEAN both have GCD but they use
    that for different purposes.

    I'm trying to connect LEAN's GCD and Axiom's GCD
    so there is a "computational mathematics" idea that
    allows the user to connect proofs and implementations.

7) Trust

Unlike everything else, computational mathematics
can have proven code that gives various guarantees.

I have been working on this aspect for a while.
I refer to it as trust "down to the metal". The idea is
that a proof of the GCD function and the implementation
of the GCD function get packaged into the ELF format.
(proof carrying code). When the GCD algorithm executes
on the CPU, the GCD proof is run through the LEAN
proof checker on an FPGA in parallel.
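In plain software the idea can be mimicked (a greatly simplified stand-in; the real design runs LEAN's checker on an FPGA alongside the CPU): compute the GCD, and independently verify the result against the defining property of a greatest common divisor.

```python
# Simplified stand-in for "trust down to the metal": the algorithm
# runs, and an independent checker validates the result against
# the GCD specification.

def gcd(a, b):
    # the algorithm under scrutiny (Euclid)
    while b:
        a, b = b, a % b
    return a

def check_gcd(a, b, g):
    # the "checker": g divides both a and b, and every common
    # divisor of a and b divides g (checked exhaustively here)
    if a % g or b % g:
        return False
    return all(g % d == 0
               for d in range(1, min(a, b) + 1)
               if a % d == 0 and b % d == 0)

g = gcd(48, 18)
print(g, check_gcd(48, 18, g))   # 6 True
```

The exhaustive checker is hopelessly slow compared to a real proof checker, but it shows the shape of the design: the computation and its independent validation proceed as separate processes over the same result.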

(I just recently got a PYNQ Xilinx board [1] with a CPU
and FPGA together. I'm trying to implement the LEAN
proof checker on the FPGA).

We are on the cusp of a revolution in computational
mathematics. But the two pillars (proof and computer
algebra) need to get to know each other.

Tim



[0] Lamport, Leslie "Chapter on TLA+"
in "Software Specification Methods"
https://www.springer.com/gp/book/9781852333539
(I no longer have CMU library access or I'd send you
the book PDF)

[1] https://www.tul.com.tw/productspynq-z2.html

[2] https://www.youtube.com/watch?v=dCuZkaaou0Q
[3] "ARTIFICIAL INTELLIGENCE MARKUP LANGUAGE"
https://arxiv.org/pdf/1307.3091.pdf

[4] ALICE Chatbot
http://www.scielo.org.mx/pdf/cys/v19n4/1405-5546-cys-19-04-00625.pdf

[5] OPS5 User Manual
https://kilthub.cmu.edu/articles/journal_contribution/OPS5_user_s_manual/6608090/1

[6] Scott Fahlman "SCONE"
http://www.cs.cmu.edu/~sef/scone/

On 9/27/21, Tim Daly <axiomcas@gmail.com> wrote:
> I have tried to maintain a list of names of people who have
> helped Axiom, going all the way back to the pre-Scratchpad
> days. The names are listed at the beginning of each book.
> I also maintain a bibliography of publications I've read or
> that have had an indirect influence on Axiom.
>
> Credit is "the coin of the realm". It is easy to share and wrong
> to ignore. It is especially damaging to those in Academia who
> are affected by credit and citations in publications.
>
> Apparently I'm not the only person who feels that way. The ACM
> Turing award seems to have ignored a lot of work:
>
> Scientific Integrity, the 2021 Turing Lecture, and the 2018 Turing
> Award for Deep Learning
> https://people.idsia.ch/~juergen/scientific-integrity-turing-award-deep-learning.html
>
> I worked on an AI problem at IBM Research called Ketazolam.
> (https://en.wikipedia.org/wiki/Ketazolam). The idea was to recognize
> and associate 3D chemical drawings with their drug counterparts.
> I used Rumelhart and McClelland's books. These books contained
> quite a few ideas that seem to be "new and innovative" among the
> machine learning crowd... but the books are from 1987. I don't believe
> I've seen these books mentioned in any recent bibliography.
> https://mitpress.mit.edu/books/parallel-distributed-processing-volume-1
>
>
>
>
> On 9/27/21, Tim Daly <axiomcas@gmail.com> wrote:
>> Greg Wilson asked "How Reliable is Scientific Software?"
>> https://neverworkintheory.org/2021/09/25/how-reliable-is-scientific-software.html
>>
>> which is a really interesting read. For example:
>>
>> [Hatton1994] is now a quarter of a century old, but its conclusions
>> are still fresh. The authors fed the same data into nine commercial
>> geophysical software packages and compared the results; they found
>> that, "numerical disagreement grows at around the rate of 1% in
>> average absolute difference per 4000 lines of implemented code, and,
>> even worse, the nature of the disagreement is nonrandom" (i.e., the
>> authors of different packages make similar mistakes).
>>
>>
>> On 9/26/21, Tim Daly <axiomcas@gmail.com> wrote:
>>> I should note that the latest board I've just unboxed
>>> (a PYNQ-Z2) is a Zynq Z-7020 chip from Xilinx (AMD).
>>>
>>> What makes it interesting is that it contains 2 hard
>>> core processors and an FPGA, connected by 9 paths
>>> for communication. The processors can be run
>>> independently so there is the possibility of a parallel
>>> version of some Axiom algorithms (assuming I had
>>> the time, which I don't).
>>>
>>> Previously either the hard (physical) processor was
>>> separate from the FPGA with minimal communication
>>> or the soft core processor had to be created in the FPGA
>>> and was much slower.
>>>
>>> Now the two have been combined in a single chip.
>>> That means that my effort to run a proof checker on
>>> the FPGA and the algorithm on the CPU just got to
>>> the point where coordination is much easier.
>>>
>>> Now all I have to do is figure out how to program this
>>> beast.
>>>
>>> There is no such thing as a simple job.
>>>
>>> Tim
>>>
>>>
>>> On 9/26/21, Tim Daly <axiomcas@gmail.com> wrote:
>>>> I'm familiar with most of the traditional approaches
>>>> like Theorema. The bibliography contains most of the
>>>> more interesting sources. [0]
>>>>
>>>> There is a difference between traditional approaches to
>>>> connecting computer algebra and proofs and my approach.
>>>>
>>>> Proving an algorithm, like the GCD, in Axiom is hard.
>>>> There are many GCDs (e.g. NNI vs POLY) and there
>>>> are theorems and proofs passed at runtime in the
>>>> arguments of the newly constructed domains. This
>>>> involves a lot of dependent type theory and issues of
>>>> compile time / runtime argument evaluation. The issues
>>>> that arise are difficult and still being debated in the type
>>>> theory community.
>>>>
>>>> I am putting the definitions, theorems, and proofs (DTP)
>>>> directly into the category/domain hierarchy. Each category
>>>> will have the DTP specific to it. That way a commutative
>>>> domain will inherit a commutative theorem and a
>>>> non-commutative domain will not.
>>>>
>>>> Each domain will have additional DTPs associated with
>>>> the domain (e.g. NNI vs Integer) as well as any DTPs
>>>> it inherits from the category hierarchy. Functions in the
>>>> domain will have associated DTPs.
>>>>
>>>> A function to be proven will then inherit all of the relevant
>>>> DTPs. The proof will be attached to the function and
>>>> both will be sent to the hardware (proof-carrying code).
>>>>
>>>> The proof checker, running on a field programmable
>>>> gate array (FPGA), will be checked at runtime in
>>>> parallel with the algorithm running on the CPU
>>>> (aka "trust down to the metal"). (Note that Intel
>>>> and AMD have built CPU/FPGA combined chips,
>>>> currently only available in the cloud.)
>>>>
>>>>
>>>>
>>>> I am (slowly) making progress on the research.
>>>>
>>>> I have the hardware and nearly have the proof
>>>> checker from LEAN running on my FPGA.
>>>>
>>>> I'm in the process of spreading the DTPs from
>>>> LEAN across the category/domain hierarchy.
>>>>
>>>> The current Axiom build extracts all of the functions
>>>> but does not yet have the DTPs.
>>>>
>>>> I have to restructure the system, including the compiler
>>>> and interpreter to parse and inherit the DTPs. I
>>>> have some of that code but only some of the code
>>>> has been pushed to the repository (volume 15) but
>>>> that is rather trivial, out of date, and incomplete.
>>>>
>>>> I'm clearly not smart enough to prove the Risch
>>>> algorithm and its associated machinery but the needed
>>>> definitions and theorems will be available to someone
>>>> who wants to try.
>>>>
>>>> [0] https://github.com/daly/PDFS/blob/master/bookvolbib.pdf
>>>>
>>>>
>>>> On 8/19/21, Tim Daly <axiomcas@gmail.com> wrote:
>>>>> =========================================
>>>>>
>>>>> REVIEW (Axiom on WSL2 Windows)
>>>>>
>>>>>
>>>>> So the steps to run Axiom from a Windows desktop
>>>>>
>>>>> 1 Windows) install XMing on Windows for X11 server
>>>>>
>>>>> http://www.straightrunning.com/XmingNotes/
>>>>>
>>>>> 2 WSL2) Install Axiom in WSL2
>>>>>
>>>>> sudo apt install axiom
>>>>>
>>>>> 3 WSL2) modify /usr/bin/axiom to fix the bug:
>>>>> (someone changed the axiom startup script.
>>>>> It won't work on WSL2. I don't know who or
>>>>> how to get it fixed).
>>>>>
>>>>> sudo emacs /usr/bin/axiom
>>>>>
>>>>> (split the line into 3 and add quote marks)
>>>>>
>>>>> export SPADDEFAULT=/usr/local/axiom/mnt/linux
>>>>> export AXIOM=/usr/lib/axiom-20170501
>>>>> export "PATH=/usr/lib/axiom-20170501/bin:$PATH"
>>>>>
>>>>> 4 WSL2) create a .axiom.input file to include startup cmds:
>>>>>
>>>>> emacs .axiom.input
>>>>>
>>>>> )cd "/mnt/c/yourpath"
>>>>> )sys pwd
>>>>>
>>>>> 5 WSL2) create a "myaxiom" command that sets the
>>>>>      DISPLAY variable and starts axiom
>>>>>
>>>>> emacs myaxiom
>>>>>
>>>>> #! /bin/bash
>>>>> export DISPLAY=:0.0
>>>>> axiom
>>>>>
>>>>> 6 WSL2) put it in the /usr/bin directory
>>>>>
>>>>> chmod +x myaxiom
>>>>> sudo cp myaxiom /usr/bin/myaxiom
>>>>>
>>>>> 7 WINDOWS) start the X11 server
>>>>>
>>>>> (XMing XLaunch Icon on your desktop)
>>>>>
>>>>> 8 WINDOWS) run myaxiom from PowerShell
>>>>> (this should start axiom with graphics available)
>>>>>
>>>>> wsl myaxiom
>>>>>
>>>>> 9 WINDOWS) make a PowerShell desktop
>>>>>
>>>>> https://superuser.com/questions/886951/run-powershell-script-when-you-open-powershell
>>>>>
>>>>> Tim
>>>>>
>>>>> On 8/13/21, Tim Daly <axiomcas@gmail.com> wrote:
>>>>>> A great deal of thought is directed toward making the SANE version
>>>>>> of Axiom as flexible as possible, decoupling mechanism from theory.
>>>>>>
>>>>>> An interesting publication by Brian Cantwell Smith [0], "Reflection
>>>>>> and Semantics in LISP" seems to contain interesting ideas related
>>>>>> to our goal. Of particular interest is the ability to reason about
>>>>>> and perform self-referential manipulations. In a dependently-typed
>>>>>> system it seems interesting to be able to "adapt" code to handle
>>>>>> run-time computed arguments to dependent functions. The abstract:
>>>>>>
>>>>>>    "We show how a computational system can be constructed to
>>>>>> "reason", effectively and consequentially, about its own inferential
>>>>>> processes. The analysis proceeds in two parts. First, we consider
>>>>>> the general question of computational semantics, rejecting
>>>>>> traditional approaches, and arguing that the declarative and
>>>>>> procedural aspects of computational symbols (what they stand for,
>>>>>> and what behaviour they engender) should be analysed independently,
>>>>>> in order that they may be coherently related. Second, we
>>>>>> investigate self-referential behavior in computational processes,
>>>>>> and show how to embed an effective procedural model of a
>>>>>> computational calculus within that calculus (a model not unlike a
>>>>>> meta-circular interpreter, but connected to the fundamental
>>>>>> operations of the machine in such a way as to provide, at any point
>>>>>> in a computation, fully articulated descriptions of the state of
>>>>>> that computation, for inspection and possible modification). In
>>>>>> terms of the theories that result from these investigations, we
>>>>>> present a general architecture for procedurally reflective
>>>>>> processes, able to shift smoothly between dealing with a given
>>>>>> subject domain, and dealing with their own reasoning processes over
>>>>>> that domain.
>>>>>>
>>>>>>    An instance of the general solution is worked out in the context
>>>>>> of an applicative language. Specifically, we present three
>>>>>> successive dialects of LISP: 1-LISP, a distillation of current
>>>>>> practice, for comparison purposes; 2-LISP, a dialect constructed in
>>>>>> terms of our rationalised semantics, in which the concept of
>>>>>> evaluation is rejected in favour of independent notions of
>>>>>> simplification and reference, and in which the respective
>>>>>> categories of notation, structure, semantics, and behaviour are
>>>>>> strictly aligned; and 3-LISP, an extension of 2-LISP endowed with
>>>>>> reflective powers."
>>>>>>
>>>>>> Axiom SANE builds dependent types on the fly. The ability to access
>>>>>> both the reflection of the tower of algebra and the reflection of
>>>>>> the tower of proofs at the time of construction makes the
>>>>>> construction of a new domain or specific algorithm easier and more
>>>>>> general.
>>>>>>
>>>>>> This is of particular interest because one of the efforts is to build
>>>>>> "all the way down to the metal". If each layer is constructed on
>>>>>> top of previous proven layers and the new layer can "reach below"
>>>>>> to lower layers then the tower of layers can be built without
>>>>>> duplication.
>>>>>>
>>>>>> Tim
>>>>>>
>>>>>> [0] Smith, Brian Cantwell "Reflection and Semantics in LISP"
>>>>>> POPL '84: Proceedings of the 11th ACM SIGACT-SIGPLAN
>>>>>> Symposium on Principles of Programming Languages, January
>>>>>> 1984, Pages 23-35 https://doi.org/10.1145/800017.800513
>>>>>>
>>>>>> On 6/29/21, Tim Daly <axiomcas@gmail.com> wrote:
>>>>>>> Having spent time playing with hardware it is perfectly clear that
>>>>>>> future computational mathematics efforts need to adapt to using
>>>>>>> parallel processing.
>>>>>>>
>>>>>>> I've spent a fair bit of time thinking about structuring Axiom to
>>>>>>> be parallel. Most past efforts have tried to focus on making a
>>>>>>> particular algorithm parallel, such as a matrix multiply.
>>>>>>>
>>>>>>> But I think that it might be more effective to make each domain
>>>>>>> run in parallel. A computation crosses multiple domains so a
>>>>>>> particular computation could involve multiple parallel copies.
>>>>>>>
>>>>>>> For example, computing the Cylindrical Algebraic Decomposition
>>>>>>> could recursively decompose the plane. Indeed, any tree-recursive
>>>>>>> algorithm could be run in parallel "in the large" by creating new
>>>>>>> running copies of the domain for each sub-problem.
>>>>>>>
>>>>>>> So the question becomes, how does one manage this?
>>>>>>>
>>>>>>> A similar problem occurs in robotics where one could have multiple
>>>>>>> wheels, arms, propellers, etc. that need to act independently but
>>>>>>> in coordination.
>>>>>>>
>>>>>>> The robot solution uses ROS2. The three ideas are ROSCORE,
>>>>>>> TOPICS with publish/subscribe, and SERVICES with request/response.
>>>>>>> These are communication paths defined between processes.
>>>>>>>
>>>>>>> ROS2 has a "roscore" which is basically a phonebook of "topics".
>>>>>>> Any process can create or look up the current active topics, e.g.:
>>>>>>>
>>>>>>>    rosnode list
>>>>>>>
>>>>>>> TOPICS:
>>>>>>>
>>>>>>> Any process can PUBLISH a topic (which is basically a typed data
>>>>>>> structure), e.g. the topic /hw with the String data "Hello World":
>>>>>>>
>>>>>>>    rostopic pub /hw std_msgs/String "Hello, World"
>>>>>>>
>>>>>>> Any process can SUBSCRIBE to a topic, such as /hw, and get a
>>>>>>> copy of the data, e.g.:
>>>>>>>
>>>>>>>    rostopic echo /hw   ==> "Hello, World"
>>>>>>>
>>>>>>> Publishers talk, subscribers listen.
>>>>>>>
>>>>>>>
>>>>>>> SERVICES:
>>>>>>>
>>>>>>> Any process can make a REQUEST of a SERVICE and get a RESPONSE.
>>>>>>> This is basically a remote function call.
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> Axiom in parallel?
>>>>>>>
>>>>>>> So domains could run, each in its own process. Each could provide
>>>>>>> services, one for each function. Any other process could request
>>>>>>> a computation and get the result as a response. Domains could
>>>>>>> request services from other domains, either waiting for responses
>>>>>>> or continuing while the response is being computed.
>>>>>>>
>>>>>>> The output could be sent anywhere, to a terminal, to a browser,
>>>>>>> to a network, or to another process using the publish/subscribe
>>>>>>> protocol, potentially all at the same time since there can be many
>>>>>>> subscribers to a topic.
>>>>>>>
>>>>>>> Available domains could be dynamically added by announcing
>>>>>>> themselves as new "topics" and could be dynamically looked-up
>>>>>>> at runtime.
>>>>>>>
>>>>>>> This structure allows function-level / domain-level parallelism.
>>>>>>> It is very effective in the robot world and I think it might be a
>>>>>>> good structuring mechanism to allow computational mathematics
>>>>>>> to take advantage of multiple processors in a disciplined fashion.
>>>>>>>
>>>>>>> Axiom has a thousand domains and each could run on its own core.
>>>>>>>
>>>>>>> In addition, notice that each domain is independent of the others.
>>>>>>> So if we want to use BLAS Fortran code, it could just be another
>>>>>>> service node. In fact, any "foreign function" could transparently
>>>>>>> cooperate in a distributed Axiom.
>>>>>>>
>>>>>>> Another key feature is that proofs can be "by node".
>>>>>>>
>>>>>>> Tim
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On 6/5/21, Tim Daly <axiomcas@gmail.com> wrote:
>>>>>>>> Axiom is based on first-class dependent types. Deciding when
>>>>>>>> two types are equivalent may involve computation. See
>>>>>>>> Christiansen, David Thrane "Checking Dependent Types with
>>>>>>>> Normalization by Evaluation" (2019)
>>>>>>>>
>>>>>>>> This puts an interesting constraint on building types. The
>>>>>>>> constructed type has to export a function to decide if a
>>>>>>>> given type is "equivalent" to itself.
>>>>>>>>
>>>>>>>> The notion of "equivalence" might involve category ideas
>>>>>>>> of natural transformation and univalence. Sigh.
>>>>>>>>
>>>>>>>> That's an interesting design point.
>>>>>>>>
>>>>>>>> Tim
>>>>>>>>
>>>>>>>>
>>>>>>>> On 5/5/21, Tim Daly <axiomcas@gmail.com> wrote:
>>>>>>>>> It is interesting that programmers' eyes and expectations adapt
>>>>>>>>> to the tools they use. For instance, I use emacs and expect to
>>>>>>>>> work directly in files and multiple buffers. When I try to use one
>>>>>>>>> of the many IDE tools I find they tend to "get in the way". I
>>>>>>>>> already know or can quickly find whatever they try to tell me.
>>>>>>>>> If you use an IDE you probably find emacs "too sparse" for
>>>>>>>>> programming.
>>>>>>>>>
>>>>>>>>> Recently I've been working in a sparse programming environment.
>>>>>>>>> I'm exploring the question of running a proof checker in an FPGA.
>>>>>>>>> The FPGA development tools are painful at best and not intuitive
>>>>>>>>> since you SEEM to be programming but you're actually describing
>>>>>>>>> hardware gates, connections, and timing. This is an environment
>>>>>>>>> where everything happens all-at-once and all-the-time (like the
>>>>>>>>> circuits in your computer). It is the "assembly language of
>>>>>>>>> circuits". Naturally, my eyes have adapted to this rather raw
>>>>>>>>> level.
>>>>>>>>>
>>>>>>>>> That said, I'm normally doing literate programming all the time.
>>>>>>>>> My typical file is a document which is a mixture of latex and
>>>>>>>>> lisp. It is something of a shock to return to that world. It is
>>>>>>>>> clear why people who program in Python find lisp to be a "sea of
>>>>>>>>> parens". Yet as a lisp programmer, I don't even see the parens,
>>>>>>>>> just code.
>>>>>>>>>
>>>>>>>>> It takes a few minutes in a literate document to adapt vision to
>>>>>>>>> see the latex / lisp combination as natural. The latex markup,
>>>>>>>>> like the lisp parens, eventually just disappears. What remains
>>>>>>>>> is just lisp and natural language text.
>>>>>>>>>
>>>>>>>>> This seems painful at first but eyes q= uickly adapt. The upside
>>>>>>>>> is that there is always a "finish= ed" document that describes the
>>>>>>>>> state of the code. The overhead of wri= ting a paragraph to
>>>>>>>>> describe a new function or change a pa= ragraph to describe the
>>>>>>>>> changed function is very small.
>>>>>>>>>
>>>>>>>>> Using a Makefile I latex the document = to generate a current PDF
>>>>>>>>> and then I extract, load, and execute = the code. This loop catches
>>>>>>>>> errors in both the latex and the sourc= e code. Keeping an open file
>>>>>>>>> in
>>>>>>>>> my pdf viewer shows all of the changes= in the document after every
>>>>>>>>> run of make. That way I can edit the b= ook as easily as the code.
>>>>>>>>>
>>>>>>>>> Ultimately I find that writing the boo= k while writing the code is
>>>>>>>>> more productive. I don't have to r= emember why I wrote something
>>>>>>>>> since the explanation is already there= .
>>>>>>>>>
>>>>>>>>> We all have our own way of programming= and our own tools.
>>>>>>>>> But I find literate programming to be = a real advance over IDE
>>>>>>>>> style programming and "raw code&q= uot; programming.
>>>>>>>>>
>>>>>>>>> Tim
>>>>>>>>>
>>>>>>>>>
>>>>>>>> On 2/27/21, Tim Daly <axiomcas@gmail.com> wrote:
>>>>>>>>> The systems I use have the interesting property of
>>>>>>>>> "Living within the compiler".
>>>>>>>>>
>>>>>>>>> Lisp, Forth, Emacs, and other systems that present themselves
>>>>>>>>> through the Read-Eval-Print-Loop (REPL) allow the
>>>>>>>>> ability to deeply interact with the system, shaping it to your
>>>>>>>>> need.
>>>>>>>>>
>>>>>>>>> My current thread of study is software architecture. See
>>>>>>>>> https://www.youtube.com/watch?v=W2hagw1VhhI&feature=youtu.be
>>>>>>>>> and https://www.georgefairbanks.com/videos/
>>>>>>>>>
>>>>>>>>> My current thinking on SANE involves the ability to
>>>>>>>>> dynamically define categories, representations, and functions
>>>>>>>>> along with "composition functions" that permit choosing a
>>>>>>>>> combination at the time of use.
>>>>>>>>>
>>>>>>>>> You might want a domain for handling polynomials. There are
>>>>>>>>> a lot of choices, depending on your use case. You might want
>>>>>>>>> different representations. For example, you might want dense,
>>>>>>>>> sparse, recursive, or "machine compatible fixnums" (e.g. to
>>>>>>>>> interface with C code). If these don't exist it ought to be
>>>>>>>>> possible to create them. Such "lego-like" building blocks
>>>>>>>>> require careful thought about creating "fully factored" objects.
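[Editor's note: the "composition function" idea above can be sketched in CLOS,
the language the thread proposes for SANE. All names here (make-polynomial-domain,
dense-rep, sparse-rep, poly-add) are invented for illustration; they are not
existing Axiom or SANE code.]

```lisp
;; Hypothetical sketch: composing a polynomial domain from a coefficient
;; domain and a representation chosen at the time of use.
(defclass dense-rep () ())   ; coefficients in a vector, index = exponent
(defclass sparse-rep () ())  ; list of (exponent . coefficient) pairs

(defclass polynomial-domain ()
  ((coefficient-domain :initarg :coeffs :reader coeffs)
   (representation     :initarg :rep    :reader rep)))

(defun make-polynomial-domain (coeffs &key (rep 'sparse-rep))
  "Compose a polynomial domain; the caller picks the representation."
  (make-instance 'polynomial-domain
                 :coeffs coeffs
                 :rep (make-instance rep)))

;; One generic operation, dispatched on the chosen representation.
(defgeneric poly-add (rep p q))

(defmethod poly-add ((rep dense-rep) p q)
  ;; assumes aligned coefficient vectors of equal length
  (map 'vector #'+ p q))

(defmethod poly-add ((rep sparse-rep) p q)
  ;; merge (exponent . coefficient) association lists
  (let ((table (copy-alist p)))
    (dolist (term q (sort table #'> :key #'car))
      (let ((hit (assoc (car term) table)))
        (if hit
            (incf (cdr hit) (cdr term))
            (push (cons (car term) (cdr term)) table))))))
```

The point of the sketch is only that the representation is a first-class
argument to the domain constructor, so a "dense" and a "sparse" polynomial
domain can coexist and be selected per use case.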
>>>>>>>>>>
>>>>>>>>>> Given that goal, the traditional barrier of "compiler" vs
>>>>>>>>>> "interpreter" does not seem useful. It is better to "live
>>>>>>>>>> within the compiler", which gives the ability to define new
>>>>>>>>>> things "on the fly".
>>>>>>>>>>
>>>>>>>>>> Of course, the SANE compiler is going to want an associated
>>>>>>>>>> proof of the functions you create along with the other parts
>>>>>>>>>> such as its category hierarchy and representation properties.
>>>>>>>>>>
>>>>>>>>>> There is no such thing as a simple job. :-)
>>>>>>>>>>
>>>>>>>>>> Tim
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On 2/18/21, Tim Daly <axiomcas@gmail.com> wrote:
>>>>>>>>>>> The Axiom SANE compiler / interpreter has a few design points.
>>>>>>>>>>>
>>>>>>>>>>> 1) It needs to mix interpreted and compiled code in the same
>>>>>>>>>>> function. SANE allows dynamic construction of code as well as
>>>>>>>>>>> dynamic type construction at runtime. Both of these can occur
>>>>>>>>>>> in a runtime object. So there is potentially a mixture of
>>>>>>>>>>> interpreted and compiled code.
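[Editor's note: Common Lisp supports exactly this mixing natively, which is
presumably why it fits the SANE design. A minimal illustration of the
underlying mechanism (not SANE code): a function built from data at runtime
can stay interpreted or be handed to the standard `compile` function on the
spot.]

```lisp
;; Build a function at runtime from a data structure, then compile it.
(defun make-adder (n)
  ;; COERCE of a lambda expression yields a callable (possibly
  ;; interpreted) function object.
  (coerce `(lambda (x) (+ x ,n)) 'function))

(let* ((interpreted (make-adder 5))
       ;; COMPILE with a NIL name returns a compiled version of the
       ;; same function.
       (compiled (compile nil interpreted)))
  ;; Both are funcallable and interchangeable in the same computation.
  (list (funcall interpreted 1)    ; => 6
        (funcall compiled 1)))     ; => 6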
>>>>>>>>>>>
>>>>>>>>>>> 2) It needs to perform type resolution at compile time without
>>>>>>>>>>> overhead where possible. Since this is not always possible
>>>>>>>>>>> there needs to be a "prefix thunk" that will perform the
>>>>>>>>>>> resolution. Trivially, for example, if we have a + function we
>>>>>>>>>>> need to type-resolve the arguments.
>>>>>>>>>>>
>>>>>>>>>>> However, if we can prove at compile time that the types are
>>>>>>>>>>> both bounded-NNI and the result is bounded-NNI (i.e. fixnum in
>>>>>>>>>>> lisp) then we can inline a call to + at runtime. If not, we
>>>>>>>>>>> might have + applied to NNI and POLY(FLOAT), which requires a
>>>>>>>>>>> thunk to resolve types. The thunk could even "specialize and
>>>>>>>>>>> compile" the code before executing it.
>>>>>>>>>>>
>>>>>>>>>>> It turns out that the Forth model of "threaded-interpreted"
>>>>>>>>>>> languages provides an efficient and effective way to do
>>>>>>>>>>> this.[0] Type resolution can be "inserted" in intermediate
>>>>>>>>>>> thunks. The model also supports dynamic overloading and tail
>>>>>>>>>>> recursion.
>>>>>>>>>>>
>>>>>>>>>>> Combining high-level CLOS code with low-level threading gives
>>>>>>>>>>> an easy to understand and robust design.
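[Editor's note: a minimal sketch of the "prefix thunk" idea in Common Lisp.
The names (*generic-+*, plus-thunk) are invented; real SANE dispatch would
consult the category hierarchy rather than `typep`. The sketch shows a thunk
that resolves argument types at first call and, when it finds the fixnum
case, installs a specialized compiled function in its place.]

```lisp
;; Hypothetical "prefix thunk" sketch.  *GENERIC-+* starts as a thunk
;; that resolves types; on seeing the fixnum/fixnum case it replaces
;; itself with a specialized, compiled addition.
(defvar *generic-+*)

(defun plus-thunk (a b)
  (if (and (typep a 'fixnum) (typep b 'fixnum))
      ;; "Specialize and compile" the code, then install it in place
      ;; of the thunk so later calls skip resolution entirely.
      (let ((fast (compile nil
                           '(lambda (a b)
                              (declare (fixnum a b)
                                       (optimize (speed 3)))
                              (+ a b)))))
        (setf *generic-+* fast)
        (funcall fast a b))
      ;; Fall back to the general (slow) resolution path.
      (+ a b)))

(setf *generic-+* #'plus-thunk)

(funcall *generic-+* 2 3)  ; first call: resolve, specialize, => 5
(funcall *generic-+* 2 3)  ; later calls take the fast path, => 5
```

The same shape works for the NNI / POLY(FLOAT) case: the thunk would look
up the right method instead of falling back to the built-in `+`.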
>>>>>>>>>>>
>>>>>>>>>>> Tim
>>>>>>>>>>>
>>>>>>>>>>> [0] Loeliger, R.G. "Threaded Interpretive Languages" (1981)
>>>>>>>>>>> ISBN 0-07-038360-X
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On 2/5/21, Tim Daly <axiomcas@gmail.com> wrote:
>>>>>>>>>>>> I've worked hard to make Axiom depend on almost no other
>>>>>>>>>>>> tools so that it would not get caught by "code rot" of
>>>>>>>>>>>> libraries.
>>>>>>>>>>>>
>>>>>>>>>>>> However, I'm also trying to make the new SANE version much
>>>>>>>>>>>> easier to understand and debug. To that end I've been
>>>>>>>>>>>> experimenting with some ideas.
>>>>>>>>>>>>
>>>>>>>>>>>> It should be possible to view source code, of course. But the
>>>>>>>>>>>> source code is not the only, nor possibly the best,
>>>>>>>>>>>> representation of the ideas. In particular, source code gets
>>>>>>>>>>>> compiled into data structures. In Axiom these data structures
>>>>>>>>>>>> really are a graph of related structures.
>>>>>>>>>>>>
>>>>>>>>>>>> For example, looking at the gcd function from NNI, there is
>>>>>>>>>>>> the representation of the gcd function itself. But there is
>>>>>>>>>>>> also a structure that is the REP (and, in the new system, is
>>>>>>>>>>>> separate from the domain).
>>>>>>>>>>>>
>>>>>>>>>>>> Further, there are associated specification and proof
>>>>>>>>>>>> structures. Even further, the domain inherits the category
>>>>>>>>>>>> structures, and from those it inherits logical axioms and
>>>>>>>>>>>> definitions through the proof structure.
>>>>>>>>>>>>
>>>>>>>>>>>> Clearly the gcd function is a node in a much larger graph
>>>>>>>>>>>> structure.
>>>>>>>>>>>>
>>>>>>>>>>>> When trying to decide why code won't compile it would be
>>>>>>>>>>>> useful to be able to see and walk these structures. I've
>>>>>>>>>>>> thought about using the browser but browsers are too weak.
>>>>>>>>>>>> Either everything has to be "in a single tab to show the
>>>>>>>>>>>> graph" or "the nodes of the graph are in different tabs".
>>>>>>>>>>>> Plus, constructing dynamic graphs that change as the software
>>>>>>>>>>>> changes (e.g. by loading a new spad file or creating a new
>>>>>>>>>>>> function) represents the huge problem of keeping the browser
>>>>>>>>>>>> "in sync with the Axiom workspace". So something more dynamic
>>>>>>>>>>>> and embedded is needed.
>>>>>>>>>>>>
>>>>>>>>>>>> Axiom source gets compiled into CLOS data structures. Each of
>>>>>>>>>>>> these new SANE structures has an associated surface
>>>>>>>>>>>> representation, so they can be presented in user-friendly
>>>>>>>>>>>> form.
>>>>>>>>>>>>
>>>>>>>>>>>> Also, since Axiom is literate software, it should be possible
>>>>>>>>>>>> to look at the code in its literate form with the surrounding
>>>>>>>>>>>> explanation.
>>>>>>>>>>>>
>>>>>>>>>>>> Essentially we'd like to have the ability to "deep dive" into
>>>>>>>>>>>> the Axiom workspace, not only for debugging, but also for
>>>>>>>>>>>> understanding what functions are used, where they come from,
>>>>>>>>>>>> what they inherit, and how they are used in a computation.
>>>>>>>>>>>>
>>>>>>>>>>>> To that end I'm looking at using McClim, a lisp windowing
>>>>>>>>>>>> system. Since the McClim windows would be part of the lisp
>>>>>>>>>>>> image, they have access to display (and modify) the Axiom
>>>>>>>>>>>> workspace at all times.
>>>>>>>>>>>>
>>>>>>>>>>>> The only hesitation is that McClim uses quicklisp and drags
>>>>>>>>>>>> in a lot of other subsystems. It's all lisp, of course.
>>>>>>>>>>>>
>>>>>>>>>>>> These ideas aren't new. They were available on Symbolics
>>>>>>>>>>>> machines, a truly productive platform and one I sorely miss.
>>>>>>>>>>>>
>>>>>>>>>>>> Tim
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> On 1/19/21, Tim Daly <axiomcas@gmail.com> wrote:
>>>>>>>>>>>>> Also of interest is the talk
>>>>>>>>>>>>> "The Unreasonable Effectiveness of Dynamic Typing for
>>>>>>>>>>>>> Practical Programs"
>>>>>>>>>>>>> https://vimeo.com/74354480
>>>>>>>>>>>>> which questions whether static typing really has any benefit.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Tim
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> On 1/19/21, Tim Daly <axiomcas@gmail.com> wrote:
>>>>>>>>>>>>>> Peter Naur wrote an article of interest:
>>>>>>>>>>>>>> http://pages.cs.wisc.edu/~remzi/Naur.pdf
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> In particular, it mirrors my notion that Axiom needs
>>>>>>>>>>>>>> to embrace literate programming so that the "theory
>>>>>>>>>>>>>> of the problem" is presented as well as the "theory
>>>>>>>>>>>>>> of the solution". I quote the introduction:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> This article is, to my mind, the most accurate account
>>>>>>>>>>>>>> of what goes on in designing and coding a program.
>>>>>>>>>>>>>> I refer to it regularly when discussing how much
>>>>>>>>>>>>>> documentation to create, how to pass along tacit
>>>>>>>>>>>>>> knowledge, and the value of the XP's metaphor-setting
>>>>>>>>>>>>>> exercise. It also provides a way to examine a
>>>>>>>>>>>>>> methodology's economic structure.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> In the article, which follows, note that the quality of the
>>>>>>>>>>>>>> designing programmer's work is related to the quality of
>>>>>>>>>>>>>> the match between his theory of the problem and his theory
>>>>>>>>>>>>>> of the solution. Note that the quality of a later
>>>>>>>>>>>>>> programmer's work is related to the match between his
>>>>>>>>>>>>>> theories and the previous programmer's theories.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Using Naur's ideas, the designer's job is not to pass along
>>>>>>>>>>>>>> "the design" but to pass along "the theories" driving the
>>>>>>>>>>>>>> design. The latter goal is more useful and more
>>>>>>>>>>>>>> appropriate. It also highlights that knowledge of the
>>>>>>>>>>>>>> theory is tacit in the owning, and so passing along the
>>>>>>>>>>>>>> theory requires passing along both explicit and tacit
>>>>>>>>>>>>>> knowledge.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Tim
>>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>
From MAILER-DAEMON Sun Mar 13 04:01:45 2022
From: Tim Daly <axiomcas@gmail.com>
Date: Sun, 13 Mar 2022 04:01:19 -0400
Subject: Re: Axiom musings...
To: axiom-dev <axiom-developer@nongnu.org>

Axiom has an awkward 'attributes' category structure. In the SANE
version it is clear that these attributes are much closer to logic
'definitions'.
As a result one of the changes is to create a new 'category'-type
structure for definitions. There will be a new keyword, like the
category keyword, 'definition'.

Tim

On Fri, Mar 11, 2022 at 9:46 AM Tim Daly wrote:
> The github lockout continues...
>
> I'm spending some time adding examples to source code.
>
> Any function can have ++X comments added. These will
> appear as examples when the function is )display'ed. For example,
> in PermutationGroup there is a function 'strongGenerators'
> defined as:
>
> strongGenerators : % -> L PERM S
> ++ strongGenerators(gp) returns strong generators for
> ++ the group gp.
> ++
> ++X S:List(Integer) := [1,2,3,4]
> ++X G := symmetricGroup(S)
> ++X strongGenerators(G)
>
> Later, in the interpreter we see:
>
> )d op strongGenerators
>
> There is one exposed function called strongGenerators :
> [1] PermutationGroup(D2) -> List(Permutation(D2)) from
> PermutationGroup(D2)
> if D2 has SETCAT
>
> Examples of strongGenerators from PermutationGroup
>
> S:List(Integer) := [1,2,3,4]
> G := symmetricGroup(S)
> strongGenerators(G)
>
> This will show a working example for functions that the
> user can copy and use. It is especially useful to show how
> to construct working arguments.
>
> These "example" functions are run at build time when
> the make command looks like
> make TESTSET=alltests
>
> I hope to add this documentation to all Axiom functions.
>
> In addition, the plan is to add these function calls to the
> usual test documentation. That means that all of these examples
> will be run and show their output in the final distribution
> (mnt/ubuntu/doc/src/input/*.dvi files) so the user can view
> the expected output.
>
> Tim
>
> On Fri, Feb 25, 2022 at 6:05 PM Tim Daly wrote:
>
>> It turns out that creating SPAD-looking output is trivial
>> in Common Lisp. Each class can have a custom print
>> routine so signatures and ++ comments can each be
>> printed with their own format.
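[Editor's note: the "custom print routine" mechanism referred to here is
CLOS's `print-object` generic function. A hedged sketch, assuming an invented
`spad-signature` class (not actual SANE code), of making a Lisp-level object
print in SPAD surface syntax:]

```lisp
;; Sketch: a class whose instances print like a SPAD signature with its
;; ++ comments, instead of the default #<SPAD-SIGNATURE ...> form.
;; The class and slot names are illustrative only.
(defclass spad-signature ()
  ((name   :initarg :name   :reader sig-name)
   (args   :initarg :args   :reader sig-args)
   (result :initarg :result :reader sig-result)
   (doc    :initarg :doc    :reader sig-doc)))

(defmethod print-object ((sig spad-signature) stream)
  ;; ~{~a~^ ~} prints the argument list space-separated;
  ;; the second ~{...~} prints each doc line as a ++ comment.
  (format stream "~a : ~{~a~^ ~} -> ~a~%~{  ++ ~a~%~}"
          (sig-name sig) (sig-args sig) (sig-result sig)
          (sig-doc sig)))

(make-instance 'spad-signature
               :name "strongGenerators" :args '("%")
               :result "L PERM S"
               :doc '("strongGenerators(gp) returns strong generators for"
                      "the group gp."))
;; at the REPL this prints something like:
;;   strongGenerators : % -> L PERM S
;;     ++ strongGenerators(gp) returns strong generators for
;;     ++ the group gp.
```

Because `print-object` is a generic function, each category, domain, or
proof class can carry its own surface syntax without affecting any other.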
>>
>> To ensure that I maintain compatibility I'll be printing
>> the categories and domains so they look like SPAD code,
>> at least until I get the proof technology integrated. I will
>> probably specialize the proof printers to look like the
>> original LEAN proof syntax.
>>
>> Internally, however, it will all be Common Lisp.
>>
>> Common Lisp makes so many desirable features so easy.
>> It is possible to trace dynamically at any level. One could
>> even write a trace that showed how Axiom arrived at the
>> solution. Any domain could have special case output syntax
>> without affecting any other domain so one could write a
>> tree-like output for proofs. Using greek characters is trivial
>> so the input and output notation is more mathematical.
>>
>> Tim
>>
>> On Thu, Feb 24, 2022 at 10:24 AM Tim Daly wrote:
>>
>>> Axiom's SPAD code compiles to Common Lisp.
>>> The AKCL version of Common Lisp compiles to C.
>>> Three languages and 2 compilers is a lot to maintain.
>>> Further, there are very few people able to write SPAD
>>> and even fewer people able to maintain it.
>>>
>>> I've decided that the SANE version of Axiom will be
>>> implemented in pure Common Lisp. I've outlined Axiom's
>>> category / type hierarchy in the Common Lisp Object
>>> System (CLOS). I am now experimenting with re-writing
>>> the functions into Common Lisp.
>>>
>>> This will have several long-term effects. It simplifies
>>> the implementation issues. SPAD code blocks a lot of
>>> actions and optimizations that Common Lisp provides.
>>> The Common Lisp language has many more people
>>> who can read, modify, and maintain code. It provides for
>>> interoperability with other Common Lisp projects with
>>> no effort. Common Lisp is an international standard
>>> which ensures that the code will continue to run.
>>>
>>> The input / output mathematics will remain the same.
>>> Indeed, with the new generalizations for first-class
>>> dependent types it will be more general.
>>>
>>> This is a big change, similar to eliminating BOOT code
>>> and moving to Literate Programming. This will provide a
>>> better platform for future research work. Current research
>>> is focused on merging Axiom's computer algebra mathematics
>>> with Lean's proof language. The goal is to create a system for
>>> "computational mathematics".
>>>
>>> Research is the whole point of Axiom.
>>>
>>> Tim
>>>
>>> On Sat, Jan 22, 2022 at 9:16 PM Tim Daly wrote:
>>>
>>>> I can't stress enough how important it is to listen to Hamming's talk
>>>> https://www.youtube.com/watch?v=a1zDuOPkMSw
>>>>
>>>> Axiom will begin to die the day I stop working on it.
>>>>
>>>> However, proving Axiom correct "down to the metal" is fundamental.
>>>> It will merge computer algebra and logic, spawning years of new
>>>> research.
>>>>
>>>> Work on fundamental problems.
>>>>
>>>> Tim
>>>>
>>>> On Thu, Dec 30, 2021 at 6:46 PM Tim Daly wrote:
>>>>
>>>>> One of the interesting questions when obtaining a result
>>>>> is "what functions were called and what was their return value?"
>>>>> Otherwise known as the "show your work" idea.
>>>>>
>>>>> There is an idea called the "writer monad" [0], usually
>>>>> implemented to facilitate logging. We can exploit this
>>>>> idea to provide "show your work" capability. Each function
>>>>> can provide this information inside the monad, enabling the
>>>>> question to be answered at any time.
>>>>>
>>>>> For those unfamiliar with the monad idea, the best explanation
>>>>> I've found is this video [1].
>>>>>
>>>>> Tim
>>>>>
>>>>> [0] Deriving the writer monad from first principles
>>>>> https://williamyaoh.com/posts/2020-07-26-deriving-writer-monad.html
>>>>>
>>>>> [1] The Absolute Best Intro to Monads for Software Engineers
>>>>> https://www.youtube.com/watch?v=C2w45qRc3aU
>>>>>
>>>>> On Mon, Dec 13, 2021 at 12:30 AM Tim Daly wrote:
>>>>>
>>>>>> ...(snip)...
>>>>>>
>>>>>> Common Lisp has an "open compiler".
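[Editor's note: a minimal Common Lisp rendering of the writer-monad idea
mentioned above, for illustration only. Each step returns its value paired
with a log of the work performed, so the final result carries its own
"show your work" trail. The function names are invented.]

```lisp
;; Writer-monad sketch: a computation's value paired with a log.
(defun unit (x)
  "Wrap X with an empty log."
  (cons x '()))

(defun bind (mv f)
  "Apply F (which returns a (value . log) pair) to the value in MV,
   appending F's log to the accumulated log."
  (destructuring-bind (x . log) mv
    (destructuring-bind (y . log2) (funcall f x)
      (cons y (append log log2)))))

;; Two logged steps: each records what it did.
(defun logged-square (x)
  (cons (* x x) (list (format nil "square ~a" x))))

(defun logged-double (x)
  (cons (* 2 x) (list (format nil "double ~a" x))))

;; "Show your work": the answer arrives with the steps that produced it.
(bind (bind (unit 3) #'logged-square) #'logged-double)
;; => (18 . ("square 3" "double 9"))
```

Threading every function's result through `bind` is what lets the question
"what functions were called and what did they return?" be answered at any
point in the computation.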
>>>>>> That allows the ability
>>>>>> to deeply modify compiler behavior using compiler macros
>>>>>> and macros in general. CLOS takes advantage of this to add
>>>>>> typed behavior into the compiler in a way that ALLOWS strict
>>>>>> typing such as found in constructive type theory and ML.
>>>>>> Judgments, a la Crary, are front-and-center.
>>>>>>
>>>>>> Whether you USE the discipline afforded is the real question.
>>>>>>
>>>>>> Indeed, the Axiom research struggle is essentially one of how
>>>>>> to have a disciplined use of first-class dependent types. The
>>>>>> struggle raises issues of, for example, compiling a dependent
>>>>>> type whose argument is recursive in the compiled type. Since
>>>>>> the new type is first-class it can be constructed at what you
>>>>>> improperly call "run-time". However, it appears that the recursive
>>>>>> type may have to call the compiler at each recursion to generate
>>>>>> the next step since in some cases it cannot generate "closed code".
>>>>>>
>>>>>> I am embedding proofs (in the LEAN language) into the type
>>>>>> hierarchy so that theorems, which depend on the type hierarchy,
>>>>>> are correctly inherited. The compiler has to check the proofs of
>>>>>> functions at compile time using these. Hacking up nonsense just
>>>>>> won't cut it. Think of the problem of embedding LEAN proofs in ML
>>>>>> or ML in LEAN. (Actually, Jeremy Avigad might find that research
>>>>>> interesting.)
>>>>>>
>>>>>> So Matrix(3,3,Float) has inverses (assuming Float is a
>>>>>> field (cough)). The type inherits this theorem and proofs of
>>>>>> functions can use this. But Matrix(3,4,Integer) does not have
>>>>>> inverses so the proofs cannot use this. The type hierarchy has
>>>>>> to ensure that the proper theorems get inherited.
>>>>>>
>>>>>> Making proof technology work at compile time is hard.
>>>>>> (Worse yet, LEAN is a moving target. Sigh.)
>>>>>>
>>>>>> On Thu, Nov 25, 2021 at 9:43 AM Tim Daly wrote:
>>>>>>
>>>>>>> As you know I've been re-architecting Axiom to use first class
>>>>>>> dependent types and proving the algorithms correct. For example,
>>>>>>> the GCD of natural numbers or the GCD of polynomials.
>>>>>>>
>>>>>>> The idea involves "boxing up" the proof with the algorithm (aka
>>>>>>> proof carrying code) in the ELF file (under a crypto hash so it
>>>>>>> can't be changed).
>>>>>>>
>>>>>>> Once the code is running on the CPU, the proof is run in parallel
>>>>>>> on the field programmable gate array (FPGA). Intel data center
>>>>>>> servers have CPUs with built-in FPGAs these days.
>>>>>>>
>>>>>>> There is a bit of a disconnect, though. The GCD code is compiled
>>>>>>> machine code but the proof is LEAN-level.
>>>>>>>
>>>>>>> What would be ideal is if the compiler not only compiled the GCD
>>>>>>> code to machine code, it also compiled the proof to "machine code".
>>>>>>> That is, for each machine instruction, the FPGA proof checker
>>>>>>> would ensure that the proof was not violated at the individual
>>>>>>> instruction level.
>>>>>>>
>>>>>>> What does it mean to "compile a proof to the machine code level"?
>>>>>>>
>>>>>>> The Milawa effort (Myre14.pdf) does incremental proofs in layers.
>>>>>>> To quote from the article [0]:
>>>>>>>
>>>>>>>   We begin with a simple proof checker, call it A, which is short
>>>>>>>   enough to verify by the ``social process'' of mathematics -- and
>>>>>>>   more recently with a theorem prover for a more expressive logic.
>>>>>>>
>>>>>>>   We then develop a series of increasingly powerful proof checkers,
>>>>>>>   call them B, C, D, and so on. We show each of these programs only
>>>>>>>   accepts the same formulas as A, using A to verify B, and B to
>>>>>>>   verify C, and so on. Then, since we trust A, and A says B is
>>>>>>>   trustworthy, we can trust B.
>>>>>>>   Then, since we trust B, and B says C is trustworthy, we can
>>>>>>>   trust C.
>>>>>>>
>>>>>>> This gives a technique for "compiling the proof" down to the
>>>>>>> machine code level. Ideally, the compiler would have judgments for
>>>>>>> each step of the compilation so that each compile step has a
>>>>>>> justification. I don't know of any compiler that does this yet.
>>>>>>> (References welcome).
>>>>>>>
>>>>>>> At the machine code level, there are techniques that would allow
>>>>>>> the FPGA proof to "step in sequence" with the executing code.
>>>>>>> Some work has been done on using "Hoare Logic for Realistically
>>>>>>> Modelled Machine Code" (paper attached, Myre07a.pdf) and
>>>>>>> "Decompilation into Logic -- Improved" (Myre12a.pdf).
>>>>>>>
>>>>>>> So the game is to construct a GCD over some type (Nats, Polys,
>>>>>>> etc.; Axiom has 22), then compile the dependent type GCD to
>>>>>>> machine code. In parallel, the proof of the code is compiled to
>>>>>>> machine code. The pair is sent to the CPU/FPGA and, while the
>>>>>>> algorithm runs, the FPGA ensures the proof is not violated,
>>>>>>> instruction by instruction.
>>>>>>>
>>>>>>> (I'm ignoring machine architecture issues such as pipelining,
>>>>>>> out-of-order execution, branch prediction, and other machine-level
>>>>>>> things to ponder. I'm looking at the RISC-V Verilog details by
>>>>>>> various people to understand better but it is still a "misty fog"
>>>>>>> for me.)
>>>>>>>
>>>>>>> The result is proven code "down to the metal".
>>>>>>>
>>>>>>> Tim
>>>>>>>
>>>>>>> [0]
>>>>>>> https://www.cs.utexas.edu/users/moore/acl2/manuals/current/manual/index-seo.php/ACL2____MILAWA
>>>>>>>
>>>>>>> On Thu, Nov 25, 2021 at 6:05 AM Tim Daly wrote:
>>>>>>>
>>>>>>>> As you know I've been re-architecting Axiom to use first class
>>>>>>>> dependent types and proving the algorithms correct. For example,
>>>>>>>> the GCD of natural numbers or the GCD of polynomials.
>>>>>>>>
>>>>>>>> The idea involves "boxing up" the proof with the algorithm (aka
>>>>>>>> proof carrying code) in the ELF file (under a crypto hash so it
>>>>>>>> can't be changed).
>>>>>>>>
>>>>>>>> Once the code is running on the CPU, the proof is run in parallel
>>>>>>>> on the field programmable gate array (FPGA). Intel data center
>>>>>>>> servers have CPUs with built-in FPGAs these days.
>>>>>>>>
>>>>>>>> There is a bit of a disconnect, though. The GCD code is compiled
>>>>>>>> machine code but the proof is LEAN-level.
>>>>>>>>
>>>>>>>> What would be ideal is if the compiler not only compiled the GCD
>>>>>>>> code to machine code, it also compiled the proof to "machine
>>>>>>>> code". That is, for each machine instruction, the FPGA proof
>>>>>>>> checker would ensure that the proof was not violated at the
>>>>>>>> individual instruction level.
>>>>>>>>
>>>>>>>> What does it mean to "compile a proof to the machine code level"?
>>>>>>>>
>>>>>>>> The Milawa effort (Myre14.pdf) does incremental proofs in layers.
>>>>>>>> To quote from the article [0]:
>>>>>>>>
>>>>>>>>   We begin with a simple proof checker, call it A, which is short
>>>>>>>>   enough to verify by the ``social process'' of mathematics -- and
>>>>>>>>   more recently with a theorem prover for a more expressive logic.
>>>>>>>>
>>>>>>>>   We then develop a series of increasingly powerful proof
>>>>>>>>   checkers, call them B, C, D, and so on. We show each of these
>>>>>>>>   programs only accepts the same formulas as A, using A to verify
>>>>>>>>   B, and B to verify C, and so on. Then, since we trust A, and A
>>>>>>>>   says B is trustworthy, we can trust B. Then, since we trust B,
>>>>>>>>   and B says C is trustworthy, we can trust C.
>>>>>>>>
>>>>>>>> This gives a technique for "compiling the proof" down to the
>>>>>>>> machine code level.
>>>>>>>> Ideally, the compiler would have judgments for each step of
>>>>>>>> the compilation so that each compile step has a justification.
>>>>>>>> I don't know of any compiler that does this yet. (References
>>>>>>>> welcome).
>>>>>>>>
>>>>>>>> At the machine code level, there are techniques that would allow
>>>>>>>> the FPGA proof to "step in sequence" with the executing code.
>>>>>>>> Some work has been done on using "Hoare Logic for Realistically
>>>>>>>> Modelled Machine Code" (paper attached, Myre07a.pdf) and
>>>>>>>> "Decompilation into Logic -- Improved" (Myre12a.pdf).
>>>>>>>>
>>>>>>>> So the game is to construct a GCD over some type (Nats, Polys,
>>>>>>>> etc.; Axiom has 22), then compile the dependent type GCD to
>>>>>>>> machine code. In parallel, the proof of the code is compiled to
>>>>>>>> machine code. The pair is sent to the CPU/FPGA and, while the
>>>>>>>> algorithm runs, the FPGA ensures the proof is not violated,
>>>>>>>> instruction by instruction.
>>>>>>>>
>>>>>>>> (I'm ignoring machine architecture issues such as pipelining,
>>>>>>>> out-of-order execution, branch prediction, and other
>>>>>>>> machine-level things to ponder. I'm looking at the RISC-V Verilog
>>>>>>>> details by various people to understand better but it is still a
>>>>>>>> "misty fog" for me.)
>>>>>>>>
>>>>>>>> The result is proven code "down to the metal".
>>>>>>>>
>>>>>>>> Tim
>>>>>>>>
>>>>>>>> [0]
>>>>>>>> https://www.cs.utexas.edu/users/moore/acl2/manuals/current/manual/index-seo.php/ACL2____MILAWA
>>>>>>>>
>>>>>>>> On Sat, Nov 13, 2021 at 5:28 PM Tim Daly wrote:
>>>>>>>>
>>>>>>>>> Full support for general, first-class dependent types requires
>>>>>>>>> some changes to the Axiom design. That implies some language
>>>>>>>>> design questions.
>>>>>>>>>
>>>>>>>>> Given that mathematics is such a general subject with a lot of
>>>>>>>>> "local" notation and ideas (witness logical judgment notation)
>>>>>>>>> careful thought is needed to design a language that is able to
>>>>>>>>> handle a wide range.
>>>>>>>>>
>>>>>>>>> Normally language design is a two-level process. The language
>>>>>>>>> designer creates a language and then an implementation. Various
>>>>>>>>> design choices affect the final language.
>>>>>>>>>
>>>>>>>>> There is "The Metaobject Protocol" (MOP)
>>>>>>>>> https://www.amazon.com/Art-Metaobject-Protocol-Gregor-Kiczales/dp/0262610744
>>>>>>>>> which encourages a three-level process. The language designer
>>>>>>>>> works at a Metalevel to design a family of languages, then the
>>>>>>>>> language specializations, then the implementation. A MOP design
>>>>>>>>> allows the language user to optimize the language to their
>>>>>>>>> problem.
>>>>>>>>>
>>>>>>>>> A simple paper on the subject is "Metaobject Protocols"
>>>>>>>>> https://users.cs.duke.edu/~vahdat/ps/mop.pdf
>>>>>>>>>
>>>>>>>>> Tim
>>>>>>>>>
>>>>>>>>> On Mon, Oct 25, 2021 at 7:42 PM Tim Daly wrote:
>>>>>>>>>
>>>>>>>>>> I have a separate thread of research on Self-Replicating Systems
>>>>>>>>>> (ref: Kinematics of Self Reproducing Machines
>>>>>>>>>> http://www.molecularassembler.com/KSRM.htm)
>>>>>>>>>>
>>>>>>>>>> which led to watching "Strange Dreams of Stranger Loops" by
>>>>>>>>>> Will Byrd
>>>>>>>>>> https://www.youtube.com/watch?v=AffW-7ika0E
>>>>>>>>>>
>>>>>>>>>> Will referenced a PhD Thesis by Jon Doyle,
>>>>>>>>>> "A Model for Deliberation, Action, and Introspection".
>>>>>>>>>>
>>>>>>>>>> I also read the thesis by J.C.G. Sturdy,
>>>>>>>>>> "A Lisp through the Looking Glass".
>>>>>>>>>>
>>>>>>>>>> Self-replication requires the ability to manipulate your own
>>>>>>>>>> representation in such a way that changes to that representation
>>>>>>>>>> will change behavior.
This leads to two thoughts in the SANE research.

First, "Declarative Representation". That is, most of the things about the representation should be declarative rather than procedural. Applying this idea as much as possible makes the representation easier to understand and manipulate.

Second, "Explicit Call Stack". Function calls form an implicit call stack. This can usually be displayed in a running Lisp system. However, having the call stack explicitly available would mean that a system could "introspect" at the first-class level.

These two ideas would make it easy, for example, to let the system "show the work". One of the normal complaints is that a system presents an answer but there is no way to know how that answer was derived. These two ideas make it possible to understand, display, and even manipulate the intermediate steps after the answer is produced.

Having the intermediate steps also allows proofs to be inserted in a step-by-step fashion. This aids the effort to have proofs run in parallel with computation at the hardware level.

Tim

On Thu, Oct 21, 2021 at 9:50 AM Tim Daly wrote:

So the current struggle involves the categories in Axiom.

The categories and domains constructed using categories are dependent types. When are dependent types "equal"? Well, hummmm, that depends on the arguments to the constructor.

But in order to decide(?) equality we have to evaluate the arguments (which themselves can be dependent types).
Indeed, we may, and in general we must, evaluate the arguments at compile time (well, "construction time", as there isn't really a compiler / interpreter separation anymore).

That raises the question of what "equality" means. This is not simply a "set equality" relation. It falls into the infinite-groupoid structure of homotopy type theory. In general it appears that deciding category / domain equivalence might force us to climb the type hierarchy.

Beyond that, there is the question of "which proof" applies to the resulting object. Proofs depend on their assumptions, which might be different for different constructions. As yet I have no clue how to "index" proofs based on their assumptions, nor how to connect these assumptions to the groupoid structure.

My brain hurts.

Tim

On Mon, Oct 18, 2021 at 2:00 AM Tim Daly wrote:

"Birthing Computational Mathematics"

The Axiom SANE project is difficult at a very fundamental level. The title "SANE" was chosen due to the various words found in a thesaurus... "rational", "coherent", "judicious" and "sound".

These are very high-level, amorphous ideas. But so is the design of SANE. Breaking away from tradition in computer algebra, type theory, and proof assistants is very difficult. Ideas tend to fall into standard jargon which limits both the frame of thinking (e.g. dependent types) and the content (e.g. notation).

Questioning both frame and content is very difficult.
It is hard to even recognize when they are accepted "by default" rather than "by choice". What does the idea of "power tools" mean in a primitive, hand-labor culture?

Christopher Alexander [0] addresses this problem in a lot of his writing. Specifically, in his book "Notes on the Synthesis of Form", in chapter 5, "The Selfconscious Process", he addresses this problem directly. This is a "must read" book.

Unlike building design and construction, however, there are almost no constraints to use as guides. Alexander quotes Plato's Phaedrus:

"First, the taking in of scattered particulars under one Idea, so that everyone understands what is being talked about ... Second, the separation of the Idea into parts, by dividing it at the joints, as nature directs, not breaking any limb in half as a bad carver might."

Lisp, which has been called "clay for the mind", can build virtually anything that can be thought. The "joints" are also "of one's choosing", so one is both carver and "nature".

Clearly the problem is no longer "the tools". *I* am the problem constraining the solution. Birthing this "new thing" is slow, difficult, and uncertain at best.

Tim

[0] Alexander, Christopher "Notes on the Synthesis of Form" Harvard University Press 1964 ISBN 0-674-62751-2

On Sun, Oct 10, 2021 at 4:40 PM Tim Daly wrote:

Re: writing a paper... I'm not connected to Academia, so anything I'd write would never make it into print.
"Language-level parsing" is still a long way off. The talk by Guy Steele [2] highlights some of the problems we currently face using mathematical metanotation.

For example, a professor I know at CCNY (City College of New York) didn't understand Platzer's "funny fraction notation" (proof judgments) despite being an expert in Platzer's differential equations area.

Notation matters and is not widely shared.

I spoke to Professor Black (in LTI) about using natural language in the limited task of human-robot cooperation in changing a car tire. I looked at the current machine learning efforts. They are nowhere near anything but toy systems, taking too long to train and being too fragile.

Instead I ended up using a combination of AIML [3] (Artificial Intelligence Markup Language), the ALICE Chatbot [4], Forgy's OPS5 rule-based program [5], and Fahlman's SCONE [6] knowledge base. It was much less fragile in my limited domain problem.

I have no idea how to extend any system to deal with even undergraduate mathematics parsing.

Nor do I have any idea how I would embed LEAN knowledge into a SCONE database, although I think the combination would be useful and interesting.

I do believe that, in the limited area of computational mathematics, we are capable of building robust, proven systems that are quite general and extensible.
As you might have guessed, I've given it a lot of thought over the years :-)

A mathematical language seems to need >6 components:

1) We need some sort of a specification language, possibly somewhat 'propositional', that introduces the assumptions you mentioned (ref. your discussion of numbers being abstract and ref. your discussion of the relevant choice of assumptions related to a problem).

This is starting to show up in the hardware area (e.g. Lamport's TLC [0]).

Of course, specifications relate to proving programs and, as you recall, I got a cold reception from the LEAN community about using LEAN for program proofs.

2) We need "scaffolding". That is, we need a theory that can be reduced to some implementable form that provides concept-level structure.

Axiom uses group theory for this. Axiom's "category" structure has "Category" things like Ring. Claiming to be a Ring brings in a lot of "Signatures" of functions you have to implement to properly be a Ring.

Scaffolding provides a firm mathematical basis for design. It provides a link between the concept of a Ring and the expectations you can assume when you claim your "Domain" "is a Ring". Category theory might provide similar structural scaffolding (eventually... I'm still working on that thought garden).

LEAN ought to have a textbook(s?) that structures the world around some form of mathematics. It isn't sufficient to say "undergraduate math" is the goal.
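The "claiming to be a Ring brings in Signatures you must implement" idea can be sketched like this (illustrative Python, not Axiom's actual Spad code):

```python
# Scaffolding sketch: a "Category" as an abstract interface whose
# signatures a "Domain" is obliged to implement before it may claim
# to be a Ring.
from abc import ABC, abstractmethod

class Ring(ABC):
    @abstractmethod
    def zero(self): ...
    @abstractmethod
    def one(self): ...
    @abstractmethod
    def add(self, a, b): ...
    @abstractmethod
    def mul(self, a, b): ...

class IntegerDomain(Ring):          # must supply every Ring signature
    def zero(self): return 0
    def one(self): return 1
    def add(self, a, b): return a + b
    def mul(self, a, b): return a * b

Z = IntegerDomain()
print(Z.mul(Z.add(Z.one(), Z.one()), 3))   # (1+1)*3 = 6
```

A class that omits a signature cannot even be instantiated, which is the enforcement the scaffolding provides.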
There needs to be some coherent organization so people can bring ideas like Group Theory to the organization. Which brings me to ...

3) We need "spreading". That is, we need to take the various definitions and theorems in LEAN and place them in their proper place in the scaffold.

For example, the Ring category needs the definitions and theorems for a Ring included in the code for the Ring category. Similarly, the Commutative category needs the definitions and theorems that underlie "commutative" included in the code.

That way, when you claim to be a "Commutative Ring" you get both sets of definitions and theorems. That is, the inheritance mechanism will collect up all of the definitions and theorems and make them available for proofs.

I am looking at LEAN's definitions and theorems with an eye to "spreading" them into the group scaffold of Axiom.

4) We need "carriers" (Axiom calls them representations, aka "REP"). REPs allow data structures to be defined independent of the implementation.

For example, Axiom can construct Polynomials that have their coefficients in various forms of representation. You can define "dense" (all coefficients in a list), "sparse" (only non-zero coefficients), "recursive", etc.

A "dense polynomial" and a "sparse polynomial" work exactly the same way as far as the user is concerned. They both implement the same set of functions.
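The dense/sparse carrier distinction might look like this sketch (hypothetical Python, not Axiom's actual REP mechanism):

```python
# Two carriers for one Polynomial interface: the user calls eval()
# identically; only the internal representation differs.
class DensePoly:
    def __init__(self, coeffs):         # [c0, c1, c2, ...]
        self.coeffs = coeffs
    def eval(self, x):
        return sum(c * x**i for i, c in enumerate(self.coeffs))

class SparsePoly:
    def __init__(self, terms):          # {exponent: coefficient}
        self.terms = terms
    def eval(self, x):
        return sum(c * x**e for e, c in self.terms.items())

p = DensePoly([1, 0, 3])                # 1 + 3*x^2, dense carrier
q = SparsePoly({0: 1, 2: 3})            # same polynomial, sparse carrier
print(p.eval(2), q.eval(2))             # both print 13
```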
There is only a difference of representation for efficiency, and this only affects the implementation of the functions, not their use.

Axiom "got this wrong" because it didn't sufficiently separate the REP from the "Domain". I plan to fix this.

LEAN ought to have a "data structures" subtree that has all of the definitions and axioms for all of the existing data structures (e.g. Red-Black trees). This would be a good undergraduate project.

5) We need "Domains" (in Axiom speak). That is, we need a box that holds all of the functions that implement a "Domain". For example, a "Polynomial Domain" would hold all of the functions for manipulating polynomials (e.g. polynomial multiplication). The "Domain" box is a dependent type that:

A) has an argument list of "Categories" that this "Domain" box inherits. Thus, the "Integer Domain" inherits the definitions and axioms from "Commutative".

Functions in the "Domain" box can now assume and use the properties of being commutative. Proofs of functions in this domain can use the definitions and proofs about being commutative.

B) contains an argument that specifies the "REP" (aka, the carrier). That way you get all of the functions associated with the data structure available for use in the implementation.

Functions in the Domain box can use all of the definitions and axioms about the representation (e.g.
NonNegativeIntegers are always positive).

C) contains local "spread" definitions and axioms that can be used in function proofs.

For example, a "Square Matrix" domain would have local axioms that state that the matrix is always square. Thus, functions in that box could use these additional definitions and axioms in function proofs.

D) contains local state. A "Square Matrix" domain would be constructed as a dependent type that specifies the size of the square (e.g. a 2x2 matrix would have '2' as a dependent parameter).

E) contains implementations of inherited functions.

A "Category" could have a signature for a GCD function and the "Category" could have a default implementation. However, the "Domain" could have a locally more efficient implementation which overrides the inherited implementation.

Axiom has about 20 GCD implementations that differ locally from the default in the category. They use properties known locally to be more efficient.

F) contains local function signatures.

A "Domain" gives the user more and more unique functions. The signatures have associated pre- and post-conditions that can be used as assumptions in the function proofs.

Some of the user-available functions are only visible if the dependent type would allow them to exist. For example, a general Matrix domain would have fewer user functions than a Square Matrix domain.
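Points C) and D) — a local squareness guarantee plus a construction-time size parameter — can be sketched like this (illustrative Python, not Spad):

```python
# A "Square Matrix" domain as a dependent type: the size is a
# construction-time parameter, and every function inside the box may
# assume squareness.
def SquareMatrix(n):
    class SqMat:
        size = n                        # the dependent parameter
        def __init__(self, rows):
            # local "axiom": the matrix is always n x n
            assert len(rows) == n and all(len(r) == n for r in rows)
            self.rows = rows
        def trace(self):                # only meaningful because square
            return sum(self.rows[i][i] for i in range(n))
    return SqMat

M2 = SquareMatrix(2)                    # a distinct domain per size
m = M2([[1, 2], [3, 4]])
print(m.trace())                        # prints 5
```

Note that SquareMatrix(2) and SquareMatrix(3) are different constructed domains, which is exactly the equality puzzle raised earlier in the thread.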
In addition, local "helper" functions need their own signatures that are not user-visible.

G) the function implementation for each signature.

This is obviously where all the magic happens.

H) the proof of each function.

This is where I'm using LEAN.

Every function has a proof. That proof can use all of the definitions and axioms inherited from the "Category", the "Representation", the "Domain Local", and the signature pre- and post-conditions.

I) literature links. Algorithms must contain a link to at least one literature reference. Of course, since everything I do is a Literate Program, this is obviously required. Knuth said so :-)

LEAN ought to have "books" or "pamphlets" that bring together all of this information for a domain such as Square Matrices. That way a user can find all of the related ideas, available functions, and their corresponding proofs in one place.

6) User-level presentation.

This is where the systems can differ significantly. Axiom and LEAN both have GCD but they use it for different purposes.

I'm trying to connect LEAN's GCD and Axiom's GCD so there is a "computational mathematics" idea that allows the user to connect proofs and implementations.

7) Trust

Unlike everything else, computational mathematics can have proven code that gives various guarantees.

I have been working on this aspect for a while.
I refer to it as trust "down to the metal". The idea is that a proof of the GCD function and the implementation of the GCD function get packaged into the ELF format (proof-carrying code). When the GCD algorithm executes on the CPU, the GCD proof is run through the LEAN proof checker on an FPGA in parallel.

(I just recently got a PYNQ Xilinx board [1] with a CPU and FPGA together. I'm trying to implement the LEAN proof checker on the FPGA.)

We are on the cusp of a revolution in computational mathematics. But the two pillars (proof and computer algebra) need to get to know each other.

Tim

[0] Lamport, Leslie "Chapter on TLA+" in "Software Specification Methods" https://www.springer.com/gp/book/9781852333539 (I no longer have CMU library access or I'd send you the book PDF)

[1] https://www.tul.com.tw/productspynq-z2.html

[2] https://www.youtube.com/watch?v=dCuZkaaou0Q

[3] "ARTIFICIAL INTELLIGENCE MARKUP LANGUAGE" https://arxiv.org/pdf/1307.3091.pdf

[4] ALICE Chatbot http://www.scielo.org.mx/pdf/cys/v19n4/1405-5546-cys-19-04-00625.pdf

[5] OPS5 User Manual https://kilthub.cmu.edu/articles/journal_contribution/OPS5_user_s_manual/6608090/1

[6] Scott Fahlman "SCONE" http://www.cs.cmu.edu/~sef/scone/

On 9/27/21, Tim Daly wrote:

I have tried to maintain a list of names of people who have helped Axiom, going all the way back to the
pre-Scratchpad days. The names are listed at the beginning of each book. I also maintain a bibliography of publications I've read or that have had an indirect influence on Axiom.

Credit is "the coin of the realm". It is easy to share and wrong to ignore. It is especially damaging to those in Academia who are affected by credit and citations in publications.

Apparently I'm not the only person who feels that way. The ACM Turing award seems to have ignored a lot of work:

Scientific Integrity, the 2021 Turing Lecture, and the 2018 Turing Award for Deep Learning
https://people.idsia.ch/~juergen/scientific-integrity-turing-award-deep-learning.html

I worked on an AI problem at IBM Research called Ketazolam (https://en.wikipedia.org/wiki/Ketazolam). The idea was to recognize and associate 3D chemical drawings with their drug counterparts. I used Rumelhart and McClelland's books. These books contained quite a few ideas that seem to be "new and innovative" among the machine learning crowd... but the books are from 1987. I don't believe I've seen these books mentioned in any recent bibliography.
https://mitpress.mit.edu/books/parallel-distributed-processing-volume-1

On 9/27/21, Tim Daly wrote:

Greg Wilson asked "How Reliable is Scientific Software?"
https://neverworkintheory.org/2021/09/25/how-reliable-is-scientific-software.html

which is a really interesting read. For example:

"[Hatton1994] is now a quarter of a century old, but its conclusions are still fresh. The authors fed the same data into nine commercial geophysical software packages and compared the results; they found that, "numerical disagreement grows at around the rate of 1% in average absolute difference per 4000 lines of implemented code, and, even worse, the nature of the disagreement is nonrandom" (i.e., the authors of different packages make similar mistakes)."

On 9/26/21, Tim Daly wrote:

I should note that the latest board I've just unboxed (a PYNQ-Z2) is a Zynq Z-7020 chip from Xilinx (AMD).

What makes it interesting is that it contains 2 hard core processors and an FPGA, connected by 9 paths for communication. The processors can be run independently so there is the possibility of a parallel version of some Axiom algorithms (assuming I had the time, which I don't).

Previously either the hard (physical) processor was separate from the FPGA with minimal communication, or the soft core processor had to be created in the FPGA and was much slower.

Now the two have been combined in a single chip.
That means that my effort to run a proof checker on the FPGA and the algorithm on the CPU just got to the point where coordination is much easier.

Now all I have to do is figure out how to program this beast.

There is no such thing as a simple job.

Tim

On 9/26/21, Tim Daly wrote:

I'm familiar with most of the traditional approaches like Theorema. The bibliography contains most of the more interesting sources. [0]

There is a difference between traditional approaches to connecting computer algebra and proofs and my approach.

Proving an algorithm, like the GCD, in Axiom is hard. There are many GCDs (e.g. NNI vs POLY) and there are theorems and proofs passed at runtime in the arguments of the newly constructed domains. This involves a lot of dependent type theory and issues of compile-time / runtime argument evaluation. The issues that arise are difficult and still being debated in the type theory community.

I am putting the definitions, theorems, and proofs (DTP) directly into the category/domain hierarchy. Each category will have the DTP specific to it. That way a commutative domain will inherit a commutative theorem and a non-commutative domain will not.

Each domain will have additional DTPs associated with the domain (e.g.
NNI vs Integer) as well as any DTPs it inherits from the category hierarchy. Functions in the domain will have associated DTPs.

A function to be proven will then inherit all of the relevant DTPs. The proof will be attached to the function and both will be sent to the hardware (proof-carrying code).

The proof checker, running on a field programmable gate array (FPGA), will check the proof at runtime in parallel with the algorithm running on the CPU (aka "trust down to the metal"). (Note that Intel and AMD have built CPU/FPGA combined chips, currently only available in the cloud.)

I am (slowly) making progress on the research.

I have the hardware and nearly have the proof checker from LEAN running on my FPGA.

I'm in the process of spreading the DTPs from LEAN across the category/domain hierarchy.

The current Axiom build extracts all of the functions but does not yet have the DTPs.

I have to restructure the system, including the compiler and interpreter, to parse and inherit the DTPs. I have some of that code but only some of it has been pushed to the repository (volume 15), and that is rather trivial, out of date, and incomplete.
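The way DTPs accumulate down the category hierarchy can be sketched as follows (my own toy illustration, not the SANE implementation):

```python
# "Spreading" sketch: each category carries its own theorems; a domain
# claiming a category inherits every theorem up the hierarchy, so a
# proof can collect all of them.
class Category:
    theorems = []
    @classmethod
    def all_theorems(cls):
        out = []
        for base in cls.__mro__:                 # walk the hierarchy
            out += base.__dict__.get("theorems", [])
        return out

class SemiGroup(Category):
    theorems = ["associativity: (a*b)*c = a*(b*c)"]

class Monoid(SemiGroup):
    theorems = ["identity: 1*a = a"]

class CommutativeMonoid(Monoid):
    theorems = ["commutativity: a*b = b*a"]

# A domain claiming CommutativeMonoid sees all three theorems:
print(CommutativeMonoid.all_theorems())
```

A non-commutative domain would simply claim Monoid instead and never see the commutativity theorem.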
I'm clearly not smart enough to prove the Risch algorithm and its associated machinery, but the needed definitions and theorems will be available to someone who wants to try.

[0] https://github.com/daly/PDFS/blob/master/bookvolbib.pdf

On 8/19/21, Tim Daly wrote:

=========================================

REVIEW (Axiom on WSL2 Windows)

So the steps to run Axiom from a Windows desktop:

1 Windows) install XMing on Windows for X11 server

  http://www.straightrunning.com/XmingNotes/

2 WSL2) Install Axiom in WSL2

  sudo apt install axiom

3 WSL2) modify /usr/bin/axiom to fix the bug: (someone changed the axiom startup script. It won't work on WSL2. I don't know who or how to get it fixed).
  sudo emacs /usr/bin/axiom

(split the line into 3 and add quote marks)

  export SPADDEFAULT=/usr/local/axiom/mnt/linux
  export AXIOM=/usr/lib/axiom-20170501
  export "PATH=/usr/lib/axiom-20170501/bin:$PATH"

4 WSL2) create a .axiom.input file to include startup cmds:

  emacs .axiom.input

  )cd "/mnt/c/yourpath"
  )sys pwd

5 WSL2) create a "myaxiom" command that sets the DISPLAY variable and starts axiom

  emacs myaxiom

  #! /bin/bash
  export DISPLAY=:0.0
  axiom

6 WSL2) put it in the /usr/bin directory

  chmod +x myaxiom
  sudo cp myaxiom /usr/bin/myaxiom

7 WINDOWS) start the X11 server (XMing XLaunch Icon on your desktop)

8 WINDOWS) run myaxiom from PowerShell (this should start axiom with graphics available)

  wsl myaxiom

9 WINDOWS) make a PowerShell desktop

  https://superuser.com/questions/886951/run-powershell-script-when-you-open-powershell

Tim

On 8/13/21, Tim Daly wrote:

A great deal of thought is directed toward making the SANE version of Axiom as flexible as possible, decoupling mechanism
from theory.

An interesting publication by Brian Cantwell Smith [0], "Reflection and Semantics in LISP", seems to contain interesting ideas related to our goal. Of particular interest is the ability to reason about and perform self-referential manipulations. In a dependently-typed system it seems interesting to be able to "adapt" code to handle run-time computed arguments to dependent functions. The abstract:

"We show how a computational system can be constructed to "reason", effectively and consequentially, about its own inferential processes. The analysis proceeds in two parts. First, we consider the general question of computational semantics, rejecting traditional approaches, and arguing that the declarative and procedural aspects of computational symbols (what they stand for, and what behaviour they engender) should be analysed independently, in order that they may be coherently related.
Second, we investigate self-referential behavior in computational processes, and show how to embed an effective procedural model of a computational calculus within that calculus (a model not unlike a meta-circular interpreter, but connected to the fundamental operations of the machine in such a way as to provide, at any point in a computation, fully articulated descriptions of the state of that computation, for inspection and possible modification). In terms of the theories that result from these investigations, we present a general architecture for procedurally reflective processes, able to shift smoothly between dealing with a given subject domain, and dealing with their own reasoning processes over that domain.

An instance of the general solution is worked out in the context of an applicative language.
Specifically, we present three successive >>>>>>>>>>>>> dialects of >>>>>>>>>>>>> >>>>>> LISP: 1-LISP, a distillation of >>>>>>>>>>>>> >>>>>> current practice, for comparison purposes; 2-LISP, = a >>>>>>>>>>>>> dialect >>>>>>>>>>>>> >>>>>> constructed in terms of our >>>>>>>>>>>>> >>>>>> rationalised semantics, in which the concept of >>>>>>>>>>>>> evaluation is >>>>>>>>>>>>> >>>>>> rejected in favour of >>>>>>>>>>>>> >>>>>> independent notions of simplification and reference= , >>>>>>>>>>>>> and in which >>>>>>>>>>>>> >>>>>> the respective categories >>>>>>>>>>>>> >>>>>> of notation, structure, semantics, and behaviour ar= e >>>>>>>>>>>>> strictly >>>>>>>>>>>>> >>>>>> aligned; and 3-LISP, an >>>>>>>>>>>>> >>>>>> extension of 2-LISP endowed with reflective powers.= " >>>>>>>>>>>>> >>>>>> >>>>>>>>>>>>> >>>>>> Axiom SANE builds dependent types on the fly. The >>>>>>>>>>>>> ability to access >>>>>>>>>>>>> >>>>>> both the refection >>>>>>>>>>>>> >>>>>> of the tower of algebra and the reflection of the towe= r >>>>>>>>>>>>> of proofs at >>>>>>>>>>>>> >>>>>> the time of construction >>>>>>>>>>>>> >>>>>> makes the construction of a new domain or specific >>>>>>>>>>>>> algorithm easier >>>>>>>>>>>>> >>>>>> and more general. >>>>>>>>>>>>> >>>>>> >>>>>>>>>>>>> >>>>>> This is of particular interest because one of the >>>>>>>>>>>>> efforts is to build >>>>>>>>>>>>> >>>>>> "all the way down to the >>>>>>>>>>>>> >>>>>> metal". If each layer is constructed on top of previou= s >>>>>>>>>>>>> proven layers >>>>>>>>>>>>> >>>>>> and the new layer >>>>>>>>>>>>> >>>>>> can "reach below" to lower layers then the tower of >>>>>>>>>>>>> layers can be >>>>>>>>>>>>> >>>>>> built without duplication. 
Tim

[0] Smith, Brian Cantwell, "Reflection and Semantics in LISP",
POPL '84: Proceedings of the 11th ACM SIGACT-SIGPLAN Symposium on
Principles of Programming Languages, January 1984, pages 23-35.
https://doi.org/10.1145/800017.800513

On 6/29/21, Tim Daly wrote:

Having spent time playing with hardware it is perfectly clear that
future computational mathematics efforts need to adapt to using
parallel processing.

I've spent a fair bit of time thinking about structuring Axiom to be
parallel. Most past efforts have tried to focus on making a particular
algorithm parallel, such as a matrix multiply.

But I think that it might be more effective to make each domain run in
parallel. A computation crosses multiple domains so a particular
computation could involve multiple parallel copies.

For example, computing the Cylindrical Algebraic Decomposition could
recursively decompose the plane. Indeed, any tree-recursive algorithm
could be run in parallel "in the large" by creating new running copies
of the domain for each sub-problem.

So the question becomes, how does one manage this?

A similar problem occurs in robotics where one could have multiple
wheels, arms, propellers, etc. that need to act independently but in
coordination.

The robot solution uses ROS2. The three ideas are ROSCORE, TOPICS with
publish/subscribe, and SERVICES with request/response. These are
communication paths defined between processes.

ROS2 has a "roscore" which is basically a phonebook of "topics". Any
process can create or look up the current active topics. eg:

  rosnode list

TOPICS:

Any process can PUBLISH a topic (which is basically a typed data
structure), e.g. the topic /hw with the String data "Hello World". eg:

  rostopic pub /hw std_msgs/String "Hello, World"

Any process can SUBSCRIBE to a topic, such as /hw, and get a copy of
the data. eg:

  rostopic echo /hw  ==> "Hello, World"

Publishers talk, subscribers listen.

SERVICES:

Any process can make a REQUEST of a SERVICE and get a RESPONSE. This is
basically a remote function call.

Axiom in parallel?

So domains could run, each in its own process. Each could provide
services, one for each function. Any other process could request a
computation and get the result as a response. Domains could request
services from other domains, either waiting for responses or continuing
while the response is being computed.

The output could be sent anywhere: to a terminal, to a browser, to a
network, or to another process using the publish/subscribe protocol,
potentially all at the same time since there can be many subscribers to
a topic.

Available domains could be dynamically added by announcing themselves
as new "topics" and could be dynamically looked up at runtime.

This structure allows function-level / domain-level parallelism. It is
very effective in the robot world and I think it might be a good
structuring mechanism to allow computational mathematics to take
advantage of multiple processors in a disciplined fashion.

Axiom has a thousand domains and each could run on its own core.

In addition, notice that each domain is independent of the others. So
if we want to use BLAS Fortran code, it could just be another service
node.
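The topic/service pattern described above can be caricatured in a few
lines. Here is a toy, in-process sketch (a Python stand-in with
hypothetical names, not the actual distributed design): a "roscore"
phonebook maps topics to subscriber callbacks and services to handlers.

```python
from collections import defaultdict
import math

class Core:
    """A 'roscore'-style phonebook: topics map to subscriber callbacks,
    services map to request handlers."""
    def __init__(self):
        self.topics = defaultdict(list)   # topic name -> subscribers
        self.services = {}                # service name -> handler

    def subscribe(self, topic, callback):
        self.topics[topic].append(callback)

    def publish(self, topic, message):
        # Every subscriber gets its own copy of the data.
        for callback in self.topics[topic]:
            callback(message)

    def provide(self, service, handler):
        self.services[service] = handler

    def request(self, service, *args):
        # Basically a remote function call: request in, response out.
        return self.services[service](*args)

core = Core()
core.subscribe("/hw", lambda msg: print("got:", msg))
core.publish("/hw", "Hello, World")                   # prints: got: Hello, World
core.provide("NNI.gcd", lambda a, b: math.gcd(a, b))
core.request("NNI.gcd", 12, 18)                       # -> 6
```

In a real distributed version the callbacks and handlers would cross
process boundaries, but the bookkeeping is the same shape.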
In fact, any "foreign function" could transparently cooperate in a
distributed Axiom.

Another key feature is that proofs can be "by node".

Tim

On 6/5/21, Tim Daly wrote:

Axiom is based on first-class dependent types. Deciding when two types
are equivalent may involve computation. See Christiansen, David Thrane,
"Checking Dependent Types with Normalization by Evaluation" (2019).

This puts an interesting constraint on building types. The constructed
type has to export a function to decide if a given type is
"equivalent" to itself.

The notion of "equivalence" might involve category ideas of natural
transformation and univalence. Sigh.

That's an interesting design point.

Tim

On 5/5/21, Tim Daly wrote:

It is interesting that programmers' eyes and expectations adapt to the
tools they use. For instance, I use emacs and expect to work directly
in files and multiple buffers. When I try to use one of the many IDE
tools I find they tend to "get in the way". I already know or can
quickly find whatever they try to tell me. If you use an IDE you
probably find emacs "too sparse" for programming.

Recently I've been working in a sparse programming environment. I'm
exploring the question of running a proof checker in an FPGA. The FPGA
development tools are painful at best and not intuitive since you SEEM
to be programming but you're actually describing hardware gates,
connections, and timing. This is an environment where everything
happens all-at-once and all-the-time (like the circuits in your
computer). It is the "assembly language of circuits". Naturally, my
eyes have adapted to this rather raw level.

That said, I'm normally doing literate programming all the time. My
typical file is a document which is a mixture of latex and lisp. It is
something of a shock to return to that world. It is clear why people
who program in Python find lisp to be a "sea of parens". Yet as a lisp
programmer, I don't even see the parens, just code.

It takes a few minutes in a literate document to adapt vision to see
the latex / lisp combination as natural. The latex markup, like the
lisp parens, eventually just disappears. What remains is just lisp and
natural language text.

This seems painful at first but eyes quickly adapt. The upside is that
there is always a "finished" document that describes the state of the
code. The overhead of writing a paragraph to describe a new function,
or changing a paragraph to describe the changed function, is very
small.

Using a Makefile I latex the document to generate a current PDF and
then I extract, load, and execute the code. This loop catches errors in
both the latex and the source code. Keeping an open file in my pdf
viewer shows all of the changes in the document after every run of
make. That way I can edit the book as easily as the code.

Ultimately I find that writing the book while writing the code is more
productive. I don't have to remember why I wrote something since the
explanation is already there.

We all have our own way of programming and our own tools. But I find
literate programming to be a real advance over IDE-style programming
and "raw code" programming.

Tim

On 2/27/21, Tim Daly wrote:

The systems I use have the interesting property of "living within the
compiler".
Lisp, Forth, Emacs, and other systems that present themselves through
the Read-Eval-Print-Loop (REPL) allow the ability to deeply interact
with the system, shaping it to your need.

My current thread of study is software architecture. See
https://www.youtube.com/watch?v=W2hagw1VhhI&feature=youtu.be
and https://www.georgefairbanks.com/videos/

My current thinking on SANE involves the ability to dynamically define
categories, representations, and functions along with "composition
functions" that permit choosing a combination at the time of use.

You might want a domain for handling polynomials. There are a lot of
choices, depending on your use case. You might want different
representations. For example, you might want dense, sparse, recursive,
or "machine compatible fixnums" (e.g. to interface with C code). If
these don't exist it ought to be possible to create them. Such
"lego-like" building blocks require careful thought about creating
"fully factored" objects.

Given that goal, the traditional barrier of "compiler" vs
"interpreter" does not seem useful. It is better to "live within the
compiler", which gives the ability to define new things "on the fly".

Of course, the SANE compiler is going to want an associated proof of
the functions you create along with the other parts such as its
category hierarchy and representation properties.

There is no such thing as a simple job. :-)

Tim

On 2/18/21, Tim Daly wrote:

The Axiom SANE compiler / interpreter has a few design points.

1) It needs to mix interpreted and compiled code in the same function.
SANE allows dynamic construction of code as well as dynamic type
construction at runtime. Both of these can occur in a runtime object.
So there is potentially a mixture of interpreted and compiled code.

2) It needs to perform type resolution at compile time without
overhead where possible. Since this is not always possible there needs
to be a "prefix thunk" that will perform the resolution. Trivially,
for example, if we have a + function we need to type-resolve the
arguments.

However, if we can prove at compile time that the types are both
bounded-NNI and the result is bounded-NNI (i.e. fixnum in lisp) then we
can inline a call to + at runtime. If not, we might have + applied to
NNI and POLY(FLOAT), which requires a thunk to resolve types. The thunk
could even "specialize and compile" the code before executing it.

It turns out that the Forth model of "threaded-interpreted" language
implementation provides an efficient and effective way to do this. [0]
Type resolution can be "inserted" in intermediate thunks. The model
also supports dynamic overloading and tail recursion.

Combining high-level CLOS code with low-level threading gives an
easy-to-understand and robust design.

Tim

[0] Loeliger, R.G. "Threaded Interpretive Languages" (1981)
ISBN 0-07-038360-X

On 2/5/21, Tim Daly wrote:

I've worked hard to make Axiom depend on almost no other tools so that
it would not get caught by "code rot" of libraries.
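The "prefix thunk" design point above — resolve argument types before a
call, inlining the fast path when both arguments are machine-sized
integers — can be sketched as follows. This is a Python stand-in with
hypothetical names, not the actual SANE dispatch machinery:

```python
import sys

def add_thunk(x, y):
    """Prefix thunk for '+': resolve argument types, then dispatch.

    The fast path stands in for an inlined fixnum '+'; the slow path
    stands in for full type resolution (e.g. NNI + POLY(FLOAT)).
    """
    if isinstance(x, int) and isinstance(y, int) \
       and abs(x) < sys.maxsize and abs(y) < sys.maxsize:
        return x + y                 # "inlined" machine addition
    return resolve_and_add(x, y)     # general type resolution

def resolve_and_add(x, y):
    # A real system would consult the type tower here; this sketch
    # just coerces numeric arguments to a common type before adding.
    if isinstance(x, (int, float)) and isinstance(y, (int, float)):
        return float(x) + float(y)
    raise TypeError(f"no common type for {type(x)} and {type(y)}")

add_thunk(2, 3)      # fast path -> 5
add_thunk(2, 0.5)    # resolved path -> 2.5
```

A thunk like this could also cache ("specialize and compile") the
resolved path the first time it runs, which is the threaded-code trick
the Loeliger reference describes.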
However, I'm also trying to make the new SANE version much easier to
understand and debug. To that end I've been experimenting with some
ideas.

It should be possible to view source code, of course. But the source
code is not the only, nor possibly the best, representation of the
ideas. In particular, source code gets compiled into data structures.
In Axiom these data structures really are a graph of related
structures.

For example, looking at the gcd function from NNI, there is the
representation of the gcd function itself. But there is also a
structure that is the REP (and, in the new system, is separate from
the domain).

Further, there are associated specification and proof structures. Even
further, the domain inherits the category structures, and from those
it inherits logical axioms and definitions through the proof
structure.

Clearly the gcd function is a node in a much larger graph structure.

When trying to decide why code won't compile it would be useful to be
able to see and walk these structures. I've thought about using the
browser but browsers are too weak. Either everything has to be "in a
single tab to show the graph" or "the nodes of the graph are in
different tabs". Plus, constructing dynamic graphs that change as the
software changes (e.g. by loading a new spad file or creating a new
function) represents the huge problem of keeping the browser "in sync
with the Axiom workspace". So something more dynamic and embedded is
needed.

Axiom source gets compiled into CLOS data structures. Each of these
new SANE structures has an associated surface representation, so they
can be presented in user-friendly form.

Also, since Axiom is literate software, it should be possible to look
at the code in its literate form with the surrounding explanation.

Essentially we'd like to have the ability to "deep dive" into the
Axiom workspace, not only for debugging, but also for understanding
what functions are used, where they come from, what they inherit, and
how they are used in a computation.

To that end I'm looking at using McClim, a lisp windowing system.
Since the McClim windows would be part of the lisp image, they have
access to display (and modify) the Axiom workspace at all times.

The only hesitation is that McClim uses quicklisp and drags in a lot
of other subsystems. It's all lisp, of course.

These ideas aren't new. They were available on Symbolics machines, a
truly productive platform and one I sorely miss.

Tim

On 1/19/21, Tim Daly wrote:

Also of interest is the talk "The Unreasonable Effectiveness of
Dynamic Typing for Practical Programs" (https://vimeo.com/74354480),
which questions whether static typing really has any benefit.

Tim

On 1/19/21, Tim Daly wrote:

Peter Naur wrote an article of interest:
http://pages.cs.wisc.edu/~remzi/Naur.pdf

In particular, it mirrors my notion that Axiom needs to embrace
literate programming so that the "theory of the problem" is presented
as well as the "theory of the solution". I quote the introduction:

  This article is, to my mind, the most accurate account of what goes
  on in designing and coding a program. I refer to it regularly when
  discussing how much documentation to create, how to pass along tacit
  knowledge, and the value of XP's metaphor-setting exercise. It also
  provides a way to examine a methodology's economic structure.

  In the article, which follows, note that the quality of the
  designing programmer's work is related to the quality of the match
  between his theory of the problem and his theory of the solution.
  Note that the quality of a later programmer's work is related to the
  match between his theories and the previous programmer's theories.

Using Naur's ideas, the designer's job is not to pass along "the
design" but to pass along "the theories" driving the design. The
latter goal is more useful and more appropriate. It also highlights
that knowledge of the theory is tacit in the owning, and so passing
along the theory requires passing along both explicit and tacit
knowledge.

Tim
Axiom has an awkward 'attributes' category structure.

In the SANE version it is clear that these attributes are much
closer to logic 'definitions'. As a result one of the changes
is to create a new 'category'-type structure for definitions.
There will be a new keyword, like the category keyword,
'definition'.

Tim


On Fri, Mar 11, 2022 at 9:46 AM Tim Daly <axiomcas@gmail.com> wrote:
The github lockout continues...

I'm spending some time adding examples to source code.

Any function can have ++X comments added. These will
appear as examples when the function is )display'ed. For example,
in PermutationGroup there is a function 'strongGenerators'
defined as:

  strongGenerators : % -> L PERM S
      ++ strongGenerators(gp) returns strong generators for
      ++ the group gp.
      ++
      ++X S:List(Integer) := [1,2,3,4]
      ++X G := symmetricGroup(S)
      ++X strongGenerators(G)



Later, in the interpreter we see:




)d op strongGenerators

  There is one exposed function called strongGenerators :
      [1] PermutationGroup(D2) -> List(Permutation(D2)) from
              PermutationGroup(D2)
                if D2 has SETCAT

  Examples of strongGenerators from PermutationGroup

  S:List(Integer) := [1,2,3,4]
  G := symmetricGroup(S)
  strongGenerators(G)




This will show a working example for functions that the
user can copy and use. It is especially useful to show how
to construct working arguments.

These "example" functions are run at build time when
the make command looks like
    make TESTSET=alltests

I hope to add this documentation to all Axiom functions.

In addition, the plan is to add these function calls to the
usual test documentation. That means that all of these examples
will be run and show their output in the final distribution
(mnt/ubuntu/doc/src/input/*.dvi files) so the user can view
the expected output.
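Pulling the ++X lines out of a source file so they can be run as
examples is mechanically simple. A sketch of the idea (a hypothetical
helper, not the actual Axiom build machinery):

```python
def extract_examples(spad_source: str) -> list:
    """Collect the ++X example lines from SPAD-style source text.

    Lines beginning with '++X' carry runnable example input; the text
    after the marker is what the user could type at the interpreter.
    Plain '++' documentation lines are skipped.
    """
    examples = []
    for line in spad_source.splitlines():
        stripped = line.strip()
        if stripped.startswith("++X"):
            examples.append(stripped[3:].strip())
    return examples

source = """
  strongGenerators : % -> L PERM S
    ++ strongGenerators(gp) returns strong generators for
    ++ the group gp.
    ++
    ++X S:List(Integer) := [1,2,3,4]
    ++X G := symmetricGroup(S)
    ++X strongGenerators(G)
"""

extract_examples(source)
# -> ['S:List(Integer) := [1,2,3,4]',
#     'G := symmetricGroup(S)',
#     'strongGenerators(G)']
```

The extracted lines could then be written into a test input file that
the TESTSET build target runs.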

Tim




On Fri, Feb 25, 2022 at 6:05 PM Tim Daly <axiomcas@gmail.com> wrote:
It turns out that creating SPAD-looking output is trivial
in Common Lisp. Each class can have a custom print
routine so signatures and ++ comments can each be
printed with their own format.

To ensure that I maintain compatibility I'll be printing
the categories and domains so they look like SPAD code,
at least until I get the proof technology integrated. I will
probably specialize the proof printers to look like the
original LEAN proof syntax.

Internally, however, it will all be Common Lisp.

Common Lisp makes so many desirable features so easy.
It is possible to trace dynamically at any level. One could
even write a trace that showed how Axiom arrived at the
solution. Any domain could have special case output syntax
without affecting any other domain so one could write a
tree-like output for proofs. Using greek characters is trivial
so the input and output notation is more mathematical.

Tim


On Thu, Feb 24, 2022 at 10:24 AM Tim Daly <axiomcas@gmail.com> wrote:
Axiom's SPAD code compiles to Common Lisp.
The AKCL version of Common Lisp compiles to C.
Three languages and two compilers are a lot to maintain.
Further, there are very few people able to write SPAD
and even fewer people able to maintain it.

I've decided that the SANE version of Axiom wi= ll be
implemented in pure Common Lisp. I've outlined Axi= om's
category / type hierarchy in the Common Lisp Object<= /div>
System (CLOS). I am now experimenting with re-writing
t= he functions into Common Lisp.

This will have several long-term effects. It simplifies
the implementation issues. SPAD code blocks a lot of
actions and optimizations that Common Lisp provides.
The Common Lisp language has many more people
who can read, modify, and maintain code. It provides for
interoperability with other Common Lisp projects with
no effort. Common Lisp is an international standard
which ensures that the code will continue to run.

The input / output mathematics will remain the same.
Indeed, with the new generalizations for first-class
dependent types it will be more general.

This is a big change, similar to eliminating BOOT code
and moving to Literate Programming. This will provide a
better platform for future research work. Current research
is focused on merging Axiom's computer algebra mathematics
with Lean's proof language. The goal is to create a system for
"computational mathematics".

Research is the whole point of Axiom.

Tim



On Sat, Jan 22, 2022 at 9:16 PM Tim Daly <axiomcas@gmail.com> wrote:
I can't stress enough how important it is to listen to Hamming's talk.

Axiom will begin to die the day I stop working on it.

However, proving Axiom correct "down to the metal" is fundamental.
It will merge computer algebra and logic, spawning years of new
research.

Work on fundamental problems.

Tim
<= div>

On Thu, Dec 30, 2021 at 6:46 PM Tim Daly <axiomcas@gmail.com> wrote:
One of the interesting questions when obtaining a result
is "what functions were called and what was their return value?"
Otherwise known as the "show your work" idea.

There is an idea called the "writer monad" [0], usually
implemented to facilitate logging. We can exploit this
idea to provide "show your work" capability. Each function
can provide this information inside the monad, enabling the
question to be answered at any time.

For = those unfamiliar with the monad idea, the best explanation
I'= ve found is this video [1].

Tim

=
[0] Deriving the writer monad from first principles
https://williamyaoh.com/posts/2020-07-26-deriving-= writer-monad.html

[1] The Absolute Best Intro = to Monads for Software Engineers

On Mon, Dec 13, 2021 at 12:30 AM Tim Daly <axiomcas@gmail.com> wrote:
...(snip)...

Common Lisp has an "open compiler". That allows the ability
to deeply modify compiler behavior using compiler macros
and macros in general. CLOS takes advantage of this to add
typed behavior into the compiler in a way that ALLOWS strict
typing such as found in constructive type theory and ML.
Judgments, a la Crary, are front-and-center.

Whether you USE the discipline afforded is the real question.

Indeed, the Axiom research struggle is essentially one of how
to have a disciplined use of first-class dependent types. The
struggle raises issues of, for example, compiling a dependent
type whose argument is recursive in the compiled type. Since
the new type is first-class it can be constructed at what you
improperly call "run-time". However, it appears that the recursive
type may have to call the compiler at each recursion to generate
the next step, since in some cases it cannot generate "closed code".

I am embedding proofs (in LEAN language) into the type
hierarchy so that theorems, which depend on the type hierarchy,
are correctly inherited. The compiler has to check the proofs of functions
at compile time using these. Hacking up nonsense just won't cut it. Think
of the problem of embedding LEAN proofs in ML or ML in LEAN.
(Actually, Jeremy Avigad might find that research interesting.)

So Matrix(3,3,Float) has inverses (assuming Float is a
field (cough)). The type inherits this theorem, and proofs of
functions can use it. But Matrix(3,4,Integer) does not have
inverses, so the proofs cannot use it. The type hierarchy has
to ensure that the proper theorems get inherited.
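A toy sketch of the intent (all names invented; this is only the bookkeeping, not the proof technology): the constructor attaches the invertibility theorem only when its arguments make the matrix square, so proofs over a 3x4 type never see it.

```lisp
;; A dependent-ish matrix type: the theorems it carries depend
;; on the constructing arguments.
(defclass matrix-domain ()
  ((rows     :initarg :rows :reader rows)
   (cols     :initarg :cols :reader cols)
   (theorems :initform '() :accessor theorems)))

(defun make-matrix-domain (r c)
  (let ((d (make-instance 'matrix-domain :rows r :cols c)))
    ;; Attach the theorem only when construction permits it.
    (when (= r c)
      (push 'invertible-when-determinant-nonzero (theorems d)))
    d))

(theorems (make-matrix-domain 3 3)) ; => (INVERTIBLE-WHEN-DETERMINANT-NONZERO)
(theorems (make-matrix-domain 3 4)) ; => NIL
```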

Making proof technology work at compile time is hard.
(Worse yet, LEAN is a moving target. Sigh.)



On Thu, Nov 25, 2021 at 9:43 AM Tim Daly <axiomcas@gmail.com> wrote:

As you know I've been re-architecting Axiom to use first class
dependent types and proving the algorithms correct. For example,
the GCD of natural numbers or the GCD of polynomials.

The idea involves "boxing up" the proof with the algorithm (aka
proof-carrying code) in the ELF file (under a crypto hash so it
can't be changed).

Once the code is running on the CPU, the proof is run in parallel
on the field programmable gate array (FPGA). Intel data center
servers have CPUs with built-in FPGAs these days.

There is a bit of a disconnect, though. The GCD code is compiled
machine code but the proof is LEAN-level.

What would be ideal is if the compiler not only compiled the GCD
code to machine code, it also compiled the proof to "machine code".
That is, for each machine instruction, the FPGA proof checker
would ensure that the proof was not violated at the individual
instruction level.

What does it mean to "compile a proof to the machine code level"?

The Milawa effort (Myre14.pdf) does incremental proofs in layers.
To quote from the article [0]:

    We begin with a simple proof checker, call it A, which is short
    enough to verify by the ``social process'' of mathematics -- and
    more recently with a theorem prover for a more expressive logic.

    We then develop a series of increasingly powerful proof checkers,
    call them B, C, D, and so on. We show each of these programs only
    accepts the same formulas as A, using A to verify B, and B to verify
    C, and so on. Then, since we trust A, and A says B is trustworthy, we
    can trust B. Then, since we trust B, and B says C is trustworthy, we
    can trust C.

This gives a technique for "compiling the proof" down to the machine
code level. Ideally, the compiler would have judgments for each step of
the compilation so that each compile step has a justification. I don't
know of any compiler that does this yet. (References welcome.)

At the machine code level, there are techniques that would allow
the FPGA proof to "step in sequence" with the executing code.
Some work has been done on using "Hoare Logic for Realistically
Modelled Machine Code" (paper attached, Myre07a.pdf) and
"Decompilation into Logic -- Improved" (Myre12a.pdf).

So the game is to construct a GCD over some type (Nats, Polys, etc.;
Axiom has 22), compile the dependent type GCD to machine code.
In parallel, the proof of the code is compiled to machine code. The
pair is sent to the CPU/FPGA and, while the algorithm runs, the FPGA
ensures the proof is not violated, instruction by instruction.

(I'm ignoring machine architecture issues such as pipelining,
out-of-order execution, branch prediction, and other machine-level
things to ponder. I'm looking at the RISC-V Verilog details by various
people to understand better but it is still a "misty fog" for me.)
=

On Thu, Nov 25, 2021 at 6:05 AM Tim Daly <axiomcas@gmail.com> wrote:

The result is proven code "down to the metal".

Tim




On Sat, Nov 13, 2021 at 5:28 PM Tim Daly <axiomcas@gmail.com> wrote:
Full support for general, first-class dependent types requires
some changes to the Axiom design. That implies some language
design questions.

Given that mathematics is such a general subject with a lot of
"local" notation and ideas (witness logical judgment notation),
careful thought is needed to design a language that is able to
handle a wide range.

Normally language design is a two-level process. The language
designer creates a language and then an implementation. Various
design choices affect the final language.

There is "The Metaobject Protocol" (MOP),
which encourages a three-level process. The language designer
works at a metalevel to design a family of languages, then the
language specializations, then the implementation. A MOP design
allows the language user to optimize the language to their problem.
A simple paper on the subject is "Metaobject Protocols".

Tim


On Mon, Oct 25, 2021 at 7:42 PM Tim Daly <axiomcas@gmail.com> wrote:
I have a separate thread of research on Self-Replicating Systems
(ref: Kinematics of Self Reproducing Machines)

which led to watching "Strange Dreams of Stranger Loops" by Will Byrd
https://www.youtube.com/watch?v=AffW-7ika0E

Will referenced a PhD Thesis by Jon Doyle,
"A Model for Deliberation, Action, and Introspection"

I also read the thesis by J.C.G. Sturdy,
"A Lisp through the Looking Glass"

Self-replication requires the ability to manipulate your own
representation in such a way that changes to that representation
will change behavior.

This leads to two thoughts in the SANE research.

First, "Declarative Representation". That is, most of the things
about the representation should be declarative rather than
procedural. Applying this idea as much as possible makes it
easier to understand and manipulate.

Second, "Explicit Call Stack". Function calls form an implicit
call stack. This can usually be displayed in a running lisp system.
However, having the call stack explicitly available would mean
that a system could "introspect" at the first-class level.

These two ideas would make it easy, for example, to let the
system "show the work". One of the normal complaints is that
a system presents an answer but there is no way to know how
that answer was derived. These two ideas make it possible to
understand, display, and even post-answer manipulate
the intermediate steps.
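A sketch of the explicit-call-stack idea (the macro and all names are hypothetical): a special variable holds the stack, a defining macro records every call, and a log of the work is left behind for later inspection.

```lisp
;; The explicit, first-class call stack and the work log.
(defvar *call-stack* '())
(defvar *work-log* '())

;; DEFUN-TRACED works like DEFUN for simple required arguments,
;; but pushes each call onto *CALL-STACK* and records the call
;; and its result in *WORK-LOG*.
(defmacro defun-traced (name args &body body)
  `(defun ,name ,args
     (let ((*call-stack* (cons (list ',name ,@args) *call-stack*)))
       (let ((result (progn ,@body)))
         (push (list (first *call-stack*) '=> result) *work-log*)
         result))))

(defun-traced my-gcd (a b)
  (if (zerop b) a (my-gcd b (mod a b))))

;; After (my-gcd 12 8), *WORK-LOG* holds each call with its
;; result, outermost call first:
;;   ((MY-GCD 12 8) => 4), ((MY-GCD 8 4) => 4), ((MY-GCD 4 0) => 4)
```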

Having the intermediate steps also allows proofs to be
inserted in a step-by-step fashion. This aids the effort to
have proofs run in parallel with computation at the hardware
level.

Tim



=


On Thu, Oct 21, 2021 at 9:50 AM Tim Daly <axiomcas@gmail.com> wrote:
So the current struggle involves the categories in Axiom.

The categories, and the domains constructed using categories,
are dependent types. When are dependent types "equal"?
Well, hummmm, that depends on the arguments to the
constructor.

But in order to decide(?) equality we have to evaluate
the arguments (which themselves can be dependent types).
Indeed, we may, and in general we must, evaluate the
arguments at compile time (well, "construction time", as
there isn't really a compiler / interpreter separation anymore).

That raises the question of what "equality" means. This
is not simply a "set equality" relation. It falls into the
infinite-groupoid of homotopy type theory. In general
it appears that deciding category / domain equivalence
might force us to climb the type hierarchy.

Beyond that, there is the question of "which proof"
applies to the resulting object. Proofs depend on their
assumptions, which might be different for different
constructions. As yet I have no clue how to "index"
proofs based on their assumptions, nor how to
connect these assumptions to the groupoid structure.

My brain hurts.

Tim


On Mon, Oct 18, 2021 at 2:00 AM Tim Daly <axiomcas@gmail.com> wrote:
"Birthing Computational Mathematics"

The Axiom SANE project is difficult at a very fundamental
level. The title "SANE" was chosen due to the various
words found in a thesaurus... "rational", "coherent",
"judicious" and "sound".

These are very high level, amorphous ideas. But so is
the design of SANE. Breaking away from tradition in
computer algebra, type theory, and proof assistants
is very difficult. Ideas tend to fall into standard jargon
which limits both the frame of thinking (e.g. dependent
types) and the content (e.g. notation).

Questioning both frame and content is very difficult.
It is hard to even recognize when they are accepted
"by default" rather than "by choice". What does the idea
"power tools" mean in a primitive, hand-labor culture?

Christopher Alexander [0] addresses this problem in
a lot of his writing. Specifically, in his book "Notes on
the Synthesis of Form", in his chapter 5, "The Selfconscious
Process", he addresses this problem directly. This is a
"must read" book.

Unlike building design and construction, however, there
are almost no constraints to use as guides. Alexander
quotes Plato's Phaedrus:

    "First, the taking in of scattered particulars under
     one Idea, so that everyone understands what is being
     talked about ... Second, the separation of the Idea
     into parts, by dividing it at the joints, as nature
     directs, not breaking any limb in half as a bad
     carver might."

Lisp, which has been called "clay for the mind", can
build virtually anything that can be thought. The
"joints" are also "of one's choosing" so one is
both carver and "nature".

Clearly the problem is no longer "the tools".
*I* am the problem constraining the solution.
Birthing this "new thing" is slow, difficult, and
uncertain at best.

Tim

[0] Alexander, Christopher "Notes on the Synthesis
of Form" Harvard University Press 1964
ISBN 0-674-62751-2


On Sun, Oct 10, 2021 at 4:40 PM Tim Daly <axiomcas@gmail.com> wrote:
Re: writing a paper... I'm not connected to Academia
so anything I'd write would never make it into print.

"Language level parsing" is still a long way off. The talk
by Guy Steele [2] highlights some of the problems we
currently face using mathematical metanotation.

For example, a professor I know at CCNY (City College
of New York) didn't understand Platzer's "funny
fraction notation" (proof judgements) despite being
an expert in Platzer's differential equations area.

Notation matters and is not widely common.

I spoke to Professor Black (in LTI) about using natural
language in the limited task of a human-robot cooperation
in changing a car tire. I looked at the current machine
learning efforts. They are nowhere near anything but
toy systems, taking too long to train and too fragile.

Instead I ended up using a combination of AIML [3]
(Artificial Intelligence Markup Language), the ALICE
Chatbot [4], Forgy's OPS5 rule based program [5],
and Fahlman's SCONE [6] knowledge base. It was
much less fragile in my limited domain problem.

I have no idea how to extend any system to deal with
even undergraduate mathematics parsing.

Nor do I have any idea how I would embed LEAN
knowledge into a SCONE database, although I
think the combination would be useful and interesting.

I do believe that, in the limited area of computational
mathematics, we are capable of building robust, proven
systems that are quite general and extensible. As you
might have guessed I've given it a lot of thought over
the years :-)

A mathematical language seems to need >6 components

1) We need some sort of a specification language, possibly
somewhat 'propositional' that introduces the assumptions
you mentioned (ref. your discussion of numbers being
abstract and ref. your discussion of relevant choice of
assumptions related to a problem).

This is starting to show up in the hardware area (e.g.
Lamport's TLC[0])

Of course, specifications relate to proving programs
and, as you recall, I got a cold reception from the
LEAN community about using LEAN for program proofs.

2) We need "scaffolding". That is, we need a theory
that can be reduced to some implementable form
that provides concept-level structure.

Axiom uses group theory for this. Axiom's "category"
structure has "Category" things like Ring. Claiming
to be a Ring brings in a lot of "Signatures" of functions
you have to implement to properly be a Ring.
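A sketch of the scaffolding idea in CLOS (the names are illustrative; Axiom's real categories carry far more): claiming to be a Ring obliges a domain to implement the Ring signatures, here modelled as generic functions.

```lisp
;; The "Category": Ring, with the signatures a Ring must provide.
(defclass ring-category () ())

(defgeneric ring-add  (ring a b))
(defgeneric ring-mul  (ring a b))
(defgeneric ring-zero (ring))
(defgeneric ring-one  (ring))

;; A "Domain" claiming to be a Ring, implementing each signature.
(defclass integer-ring (ring-category) ())

(defmethod ring-add  ((r integer-ring) a b) (+ a b))
(defmethod ring-mul  ((r integer-ring) a b) (* a b))
(defmethod ring-zero ((r integer-ring)) 0)
(defmethod ring-one  ((r integer-ring)) 1)

(ring-mul (make-instance 'integer-ring) 6 7) ; => 42
```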

Scaffolding provides a firm mathematical basis for
design. It provides a link between the concept of a
Ring and the expectations you can assume when
you claim your "Domain" "is a Ring". Category
theory might provide similar structural scaffolding
(eventually... I'm still working on that thought garden)

LEAN ought to have a textbook(s?) that structures
the world around some form of mathematics. It isn't
sufficient to say "undergraduate math" is the goal.
There needs to be some coherent organization so
people can bring ideas like Group Theory to the
organization. Which brings me to ...

3) We need "spreading". That is, we need to take
the various definitions and theorems in LEAN and
place them in their proper place in the scaffold.

For example, the Ring category needs the definitions
and theorems for a Ring included in the code for the
Ring category. Similarly, the Commutative category
needs the definitions and theorems that underlie
"commutative" included in the code.

That way, when you claim to be a "Commutative Ring"
you get both sets of definitions and theorems. That is,
the inheritance mechanism will collect up all of the
definitions and theorems and make them available
for proofs.

I am looking at LEAN's definitions and theorems with
an eye to "spreading" them into the group scaffold of
Axiom.

4) We need "carriers" (Axiom calls them representations,
aka "REP"). REPs allow data structures to be defined
independent of the implementation.

For example, Axiom can construct Polynomials that
have their coefficients in various forms of representation.
You can define "dense" (all coefficients in a list),
"sparse" (only non-zero coefficients), "recursive", etc.

A "dense polynomial" and a "sparse polynomial" work
exactly the same way as far as the user is concerned.
They both implement the same set of functions. There
is only a difference of representation for efficiency and
this only affects the implementation of the functions,
not their use.
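A sketch of the REP idea (the names are invented for illustration): two carrier representations of a polynomial, one dense and one sparse, sit behind the same COEFFICIENT interface, so users cannot tell them apart.

```lisp
;; The shared interface.
(defgeneric coefficient (poly k)
  (:documentation "Coefficient of x^k in POLY."))

;; Dense REP: all coefficients in a list, lowest degree first.
(defclass dense-poly ()
  ((coeffs :initarg :coeffs :reader coeffs)))

;; Sparse REP: only (degree . coefficient) pairs.
(defclass sparse-poly ()
  ((terms :initarg :terms :reader terms)))

(defmethod coefficient ((p dense-poly) k)
  (or (nth k (coeffs p)) 0))

(defmethod coefficient ((p sparse-poly) k)
  (or (cdr (assoc k (terms p))) 0))

;; x^2 + 3 in both representations behaves identically:
;; (coefficient (make-instance 'dense-poly :coeffs '(3 0 1)) 2)            => 1
;; (coefficient (make-instance 'sparse-poly :terms '((0 . 3) (2 . 1))) 2)  => 1
```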

Axiom "got this wrong" because it didn't sufficiently
separate the REP from the "Domain". I plan to fix this.

LEAN ought to have a "data structures" subtree that
has all of the definitions and axioms for all of the
existing data structures (e.g. Red-Black trees). This
would be a good undergraduate project.

5) We need "Domains" (in Axiom speak). That is, we
need a box that holds all of the functions that implement
a "Domain". For example, a "Polynomial Domain" would
hold all of the functions for manipulating polynomials
(e.g. polynomial multiplication). The "Domain" box
is a dependent type that:

  A) has an argument list of "Categories" that this "Domain"
     box inherits. Thus, the "Integer Domain" inherits
     the definitions and axioms from "Commutative".

     Functions in the "Domain" box can now assume
     and use the properties of being commutative. Proofs
     of functions in this domain can use the definitions
     and proofs about being commutative.

  B) contains an argument that specifies the "REP"
     (aka the carrier). That way you get all of the
     functions associated with the data structure
     available for use in the implementation.

     Functions in the Domain box can use all of
     the definitions and axioms about the representation
     (e.g. NonNegativeIntegers are always positive).

  C) contains local "spread" definitions and axioms
     that can be used in function proofs.

     For example, a "Square Matrix" domain would
     have local axioms that state that the matrix is
     always square. Thus, functions in that box could
     use these additional definitions and axioms in
     function proofs.

  D) contains local state. A "Square Matrix" domain
     would be constructed as a dependent type that
     specified the size of the square (e.g. a 2x2
     matrix would have '2' as a dependent parameter).

  E) contains implementations of inherited functions.

     A "Category" could have a signature for a GCD
     function and the "Category" could have a default
     implementation. However, the "Domain" could
     have a locally more efficient implementation which
     overrides the inherited implementation.

     Axiom has about 20 GCD implementations that
     differ locally from the default in the category. They
     use properties known locally to be more efficient.

  F) contains local function signatures.

     A "Domain" gives the user more and more unique
     functions. The signatures have associated
     pre- and post-conditions that can be used
     as assumptions in the function proofs.

     Some of the user-available functions are only
     visible if the dependent type would allow them
     to exist. For example, a general Matrix domain
     would have fewer user functions than a Square
     Matrix domain.

     In addition, local "helper" functions need their
     own signatures that are not user visible.

  G) the function implementation for each signature.

     This is obviously where all the magic happens.

  H) the proof of each function.

     This is where I'm using LEAN.

     Every function has a proof. That proof can use
     all of the definitions and axioms inherited from
     the "Category", "Representation", the "Domain
     Local", and the signature pre- and post-
     conditions.

  I) literature links. Algorithms must contain a link
     to at least one literature reference. Of course,
     since everything I do is a Literate Program
     this is obviously required. Knuth said so :-)
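Items A) through D) can be sketched as a single structure (all names here are illustrative, not the SANE implementation): the Domain box is built by a constructor whose arguments determine which categories, REP, local axioms, and local state it carries.

```lisp
;; Illustrative "Domain" box: categories (A), REP (B),
;; local axioms (C), and local state (D).
(defclass domain ()
  ((categories :initarg :categories :reader categories)
   (rep        :initarg :rep        :reader rep)
   (axioms     :initarg :axioms     :reader axioms)
   (state      :initarg :state      :reader state)))

(defun make-square-matrix-domain (n)
  ;; The size N is the dependent parameter; the SQUARE axiom is
  ;; attached only because rows = cols holds by construction.
  (make-instance 'domain
                 :categories '(matrix-category)
                 :rep 'row-major-array
                 :axioms '(square)
                 :state (list :rows n :cols n)))

(axioms (make-square-matrix-domain 2)) ; => (SQUARE)
```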


LEAN ought to have "books" or "pamphlets" that
bring together all of this information for a domain
such as Square Matrices. That way a user can
find all of the related ideas, available functions,
and their corresponding proofs in one place.

6) User level presentation.

    This is where the systems can differ significantly.
    Axiom and LEAN both have GCD but they use
    that for different purposes.

    I'm trying to connect LEAN's GCD and Axiom's GCD
    so there is a "computational mathematics" idea that
    allows the user to connect proofs and implementations.

7) Trust

Unlike everything else, computational mathematics
can have proven code that gives various guarantees.

I have been working on this aspect for a while.
I refer to it as trust "down to the metal". The idea is
that a proof of the GCD function and the implementation
of the GCD function get packaged into the ELF format.
(proof carrying code). When the GCD algorithm executes
on the CPU, the GCD proof is run through the LEAN
proof checker on an FPGA in parallel.

(I just recently got a PYNQ Xilinx board [1] with a CPU
and FPGA together. I'm trying to implement the LEAN
proof checker on the FPGA).

We are on the cusp of a revolution in computational
mathematics. But the two pillars (proof and computer
algebra) need to get to know each other.

Tim



[0] Lamport, Leslie "Chapter on TLA+"
in "Software Specification Methods"
https://www.springer.com/gp/book/9781852333539
(I no longer have CMU library access or I'd send you
the book PDF)

[1] https://www.tul.com.tw/productspynq-z2.html

[2] https://www.youtube.com/watch?v=dCuZkaaou0Q
[3] "ARTIFICIAL INTELLIGENCE MARKUP LANGUAGE"
https://arxiv.org/pdf/1307.3091.pdf

[4] ALICE Chatbot
http://www.scielo.org.mx/pdf/cys/v19n4/1405-5546-cys-19-04-00625.pdf

[5] OPS5 User Manual
https://kilthub.cmu.edu/articles/journal_contribution/OPS5_user_s_manual/6608090/1

[6] Scott Fahlman "SCONE"
http://www.cs.cmu.edu/~sef/scone/

On 9/27/21, Tim Daly <axiomcas@gmail.com> wrote:
> I have tried to maintain a list of names of people who have
> helped Axiom, going all the way back to the pre-Scratchpad
> days. The names are listed at the beginning of each book.
> I also maintain a bibliography of publications I've read or
> that have had an indirect influence on Axiom.
>
> Credit is "the coin of the realm". It is easy to share and wrong
> to ignore. It is especially damaging to those in Academia who
> are affected by credit and citations in publications.
>
> Apparently I'm not the only person who feels that way. The ACM
> Turing award seems to have ignored a lot of work:
>
> Scientific Integrity, the 2021 Turing Lecture, and the 2018 Turing
> Award for Deep Learning
> https://people.idsia.ch/~juergen/scientific-integrity-turing-award-deep-learning.html
>
> I worked on an AI problem at IBM Research called Ketazolam.
> (https://en.wikipedia.org/wiki/Ketazolam). The idea was to recognize
> and associate 3D chemical drawings with their drug counterparts.
> I used Rumelhart and McClelland's books. These books contained
> quite a few ideas that seem to be "new and innovative" among the
> machine learning crowd... but the books are from 1987. I don't believe
> I've seen these books mentioned in any recent bibliography.
> https://mitpress.mit.edu/books/parallel-distributed-processing-volume-1
>
>
>
>
> On 9/27/21, Tim Daly <axiomcas@gmail.com> wrote:
>> Greg Wilson asked "How Reliable is Scientific Software?"
>> https://neverworkintheory.org/2021/09/25/how-reliable-is-scientific-software.html
>>
>> which is a really interesting read. For example:
>>
>> [Hatton1994] is now a quarter of a century old, but its conclusions
>> are still fresh. The authors fed the same data into nine commercial
>> geophysical software packages and compared the results; they found
>> that, "numerical disagreement grows at around the rate of 1% in
>> average absolute difference per 4000 lines of implemented code, and,
>> even worse, the nature of the disagreement is nonrandom" (i.e., the
>> authors of different packages make similar mistakes).
>>
>>
>> On 9/26/21, Tim Daly <axiomcas@gmail.com> wrote:
>>> I should note that the latest board I've just unboxed
>>> (a PYNQ-Z2) is a Zynq Z-7020 chip from Xilinx (AMD).
>>>
>>> What makes it interesting is that it contains 2 hard
>>> core processors and an FPGA, connected by 9 paths
>>> for communication. The processors can be run
>>> independently so there is the possibility of a parallel
>>> version of some Axiom algorithms (assuming I had
>>> the time, which I don't).
>>>
>>> Previously either the hard (physical) processor was
>>> separate from the FPGA with minimal communication
>>> or the soft core processor had to be created in the FPGA
>>> and was much slower.
>>>
>>> Now the two have been combined in a single chip.
>>> That means that my effort to run a proof checker on
>>> the FPGA and the algorithm on the CPU just got to
>>> the point where coordination is much easier.
>>>
>>> Now all I have to do is figure out how to program this
>>> beast.
>>>
>>> There is no such thing as a simple job.
>>>
>>> Tim
>>>
>>>
>>> On 9/26/21, Tim Daly <axiomcas@gmail.com> wrote:
>>>> I'm familiar with most of the traditional approaches
>>>> like Theorema. The bibliography contains most of the
>>>> more interesting sources. [0]
>>>>
>>>> There is a difference between traditional approaches to
>>>> connecting computer algebra and proofs and my approach.
>>>>
>>>> Proving an algorithm, like the GCD, in Axiom is hard.
>>>> There are many GCDs (e.g. NNI vs POLY) and there
>>>> are theorems and proofs passed at runtime in the
>>>> arguments of the newly constructed domains. This
>>>> involves a lot of dependent type theory and issues of
>>>> compile time / runtime argument evaluation. The issues
>>>> that arise are difficult and still being debated in the type
>>>> theory community.
>>>>
>>>> I am putting the definitions, theorems, and proofs (DTP)
>>>> directly into the category/domain hierarchy. Each category
>>>> will have the DTP specific to it. That way a commutative
>>>> domain will inherit a commutative theorem and a
>>>> non-commutative domain will not.
>>>>
>>>> Each domain will have additional DTPs associated with
>>>> the domain (e.g. NNI vs Integer) as well as any DTPs
>>>> it inherits from the category hierarchy. Functions in the
>>>> domain will have associated DTPs.
>>>>
>>>> A function to be proven will then inherit all of the relevant
>>>> DTPs. The proof will be attached to the function and
>>>> both will be sent to the hardware (proof-carrying code).
>>>>
>>>> The proof checker, running on a field programmable
>>>> gate array (FPGA), will be checked at runtime in
>>>> parallel with the algorithm running on the CPU
>>>> (aka "trust down to the metal"). (Note that Intel
>>>> and AMD have built CPU/FPGA combined chips,
>>>> currently only available in the cloud.)
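The inheritance idea above, theorems attached to categories so that a commutative domain inherits a commutativity theorem while a non-commutative domain does not, can be sketched in a few lines. This is a hedged illustration in plain Python; all class and theorem names here are invented and are not Axiom's actual hierarchy.

```python
# Hypothetical sketch: definitions/theorems/proofs (DTPs) attached to a
# category hierarchy, so a commutative domain inherits a commutativity
# theorem and a non-commutative domain does not. All names are invented.

class Category:
    theorems: list = []

    @classmethod
    def inherited_theorems(cls):
        # Walk the class hierarchy, collecting DTPs from every ancestor.
        seen = []
        for klass in cls.__mro__:
            for thm in vars(klass).get("theorems", []):
                if thm not in seen:
                    seen.append(thm)
        return seen

class Monoid(Category):
    theorems = ["associativity of *", "identity element"]

class CommutativeMonoid(Monoid):
    theorems = ["commutativity of *"]

class Integer(CommutativeMonoid):      # commutative domain
    pass

class SquareMatrix(Monoid):            # non-commutative domain
    pass

assert "commutativity of *" in Integer.inherited_theorems()
assert "commutativity of *" not in SquareMatrix.inherited_theorems()
```

A function to be proven would then collect `inherited_theorems()` from its domain, which is the "function inherits all relevant DTPs" step described above.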
>>>>
>>>>
>>>>
>>>> I am (slowly) making progress on the research.
>>>>
>>>> I have the hardware and nearly have the proof
>>>> checker from LEAN running on my FPGA.
>>>>
>>>> I'm in the process of spreading the DTPs from
>>>> LEAN across the category/domain hierarchy.
>>>>
>>>> The current Axiom build extracts all of the functions
>>>> but does not yet have the DTPs.
>>>>
>>>> I have to restructure the system, including the compiler
>>>> and interpreter, to parse and inherit the DTPs. I
>>>> have some of that code, but only part of it has been
>>>> pushed to the repository (volume 15), and that part is
>>>> rather trivial, out of date, and incomplete.
>>>>
>>>> I'm clearly not smart enough to prove the Risch
>>>> algorithm and its associated machinery but the needed
>>>> definitions and theorems will be available to someone
>>>> who wants to try.
>>>>
>>>> [0] https://github.com/daly/PDFS/blob/master/bookvolbib.pdf
>>>>
>>>>
>>>> On 8/19/21, Tim Daly <axiomcas@gmail.com> wrote:
>>>>> =========================================
>>>>>
>>>>> REVIEW (Axiom on WSL2 Windows)
>>>>>
>>>>>
>>>>> So the steps to run Axiom from a Windows desktop
>>>>>
>>>>> 1 Windows) install XMing on Windows for X11 server
>>>>>
>>>>> http://www.straightrunning.com/XmingNotes/
>>>>>
>>>>> 2 WSL2) Install Axiom in WSL2
>>>>>
>>>>> sudo apt install axiom
>>>>>
>>>>> 3 WSL2) modify /usr/bin/axiom to fix the bug:
>>>>> (someone changed the axiom startup script.
>>>>> It won't work on WSL2. I don't know who or
>>>>> how to get it fixed).
>>>>>
>>>>> sudo emacs /usr/bin/axiom
>>>>>
>>>>> (split the line into 3 and add quote marks)
>>>>>
>>>>> export SPADDEFAULT=/usr/local/axiom/mnt/linux
>>>>> export AXIOM=/usr/lib/axiom-20170501
>>>>> export "PATH=/usr/lib/axiom-20170501/bin:$PATH"
>>>>>
>>>>> 4 WSL2) create a .axiom.input file to include startup cmds:
>>>>>
>>>>> emacs .axiom.input
>>>>>
>>>>> )cd "/mnt/c/yourpath"
>>>>> )sys pwd
>>>>>
>>>>> 5 WSL2) create a "myaxiom" command that sets the
>>>>>      DISPLAY variable and starts axiom
>>>>>
>>>>> emacs myaxiom
>>>>>
>>>>> #! /bin/bash
>>>>> export DISPLAY=:0.0
>>>>> axiom
>>>>>
>>>>> 6 WSL2) put it in the /usr/bin directory
>>>>>
>>>>> chmod +x myaxiom
>>>>> sudo cp myaxiom /usr/bin/myaxiom
>>>>>
>>>>> 7 WINDOWS) start the X11 server
>>>>>
>>>>> (XMing XLaunch Icon on your desktop)
>>>>>
>>>>> 8 WINDOWS) run myaxiom from PowerShell
>>>>> (this should start axiom with graphics available)
>>>>>
>>>>> wsl myaxiom
>>>>>
>>>>> 9 WINDOWS) make a PowerShell desktop
>>>>>
>>>>> https://superuser.com/questions/886951/run-powershell-script-when-you-open-powershell
>>>>>
>>>>> Tim
>>>>>
>>>>> On 8/13/21, Tim Daly <axiomcas@gmail.com> wrote:
>>>>>> A great deal of thought is directed toward making the SANE version
>>>>>> of Axiom as flexible as possible, decoupling mechanism from theory.
>>>>>>
>>>>>> An interesting publication by Brian Cantwell Smith [0], "Reflection
>>>>>> and Semantics in LISP", seems to contain interesting ideas related
>>>>>> to our goal. Of particular interest is the ability to reason about
>>>>>> and perform self-referential manipulations. In a dependently-typed
>>>>>> system it seems interesting to be able to "adapt" code to handle
>>>>>> run-time computed arguments to dependent functions. The abstract:
>>>>>>
>>>>>>    "We show how a computational system can be constructed to
>>>>>>    "reason", effectively and consequentially, about its own
>>>>>>    inferential processes. The analysis proceeds in two parts.
>>>>>>    First, we consider the general question of computational
>>>>>>    semantics, rejecting traditional approaches, and arguing that
>>>>>>    the declarative and procedural aspects of computational symbols
>>>>>>    (what they stand for, and what behaviour they engender) should
>>>>>>    be analysed independently, in order that they may be coherently
>>>>>>    related. Second, we investigate self-referential behavior in
>>>>>>    computational processes, and show how to embed an effective
>>>>>>    procedural model of a computational calculus within that
>>>>>>    calculus (a model not unlike a meta-circular interpreter, but
>>>>>>    connected to the fundamental operations of the machine in such
>>>>>>    a way as to provide, at any point in a computation, fully
>>>>>>    articulated descriptions of the state of that computation, for
>>>>>>    inspection and possible modification). In terms of the theories
>>>>>>    that result from these investigations, we present a general
>>>>>>    architecture for procedurally reflective processes, able to
>>>>>>    shift smoothly between dealing with a given subject domain, and
>>>>>>    dealing with their own reasoning processes over that domain.
>>>>>>
>>>>>>    An instance of the general solution is worked out in the
>>>>>>    context of an applicative language. Specifically, we present
>>>>>>    three successive dialects of LISP: 1-LISP, a distillation of
>>>>>>    current practice, for comparison purposes; 2-LISP, a dialect
>>>>>>    constructed in terms of our rationalised semantics, in which
>>>>>>    the concept of evaluation is rejected in favour of independent
>>>>>>    notions of simplification and reference, and in which the
>>>>>>    respective categories of notation, structure, semantics, and
>>>>>>    behaviour are strictly aligned; and 3-LISP, an extension of
>>>>>>    2-LISP endowed with reflective powers."
>>>>>>
>>>>>> Axiom SANE builds dependent types on the fly. The ability to access
>>>>>> both the reflection of the tower of algebra and the reflection of
>>>>>> the tower of proofs at the time of construction makes the
>>>>>> construction of a new domain or specific algorithm easier and
>>>>>> more general.
>>>>>>
>>>>>> This is of particular interest because one of the efforts is to
>>>>>> build "all the way down to the metal". If each layer is constructed
>>>>>> on top of previously proven layers and the new layer can "reach
>>>>>> below" to lower layers, then the tower of layers can be built
>>>>>> without duplication.
>>>>>>
>>>>>> Tim
>>>>>>
>>>>>> [0] Smith, Brian Cantwell, "Reflection and Semantics in LISP",
>>>>>> POPL '84: Proceedings of the 11th ACM SIGACT-SIGPLAN
>>>>>> Symposium on Principles of Programming Languages, January
>>>>>> 1984, pages 23-35. https://doi.org/10.1145/800017.800513
>>>>>>
>>>>>> On 6/29/21, Tim Daly <axiomcas@gmail.com> wrote:
>>>>>>> Having spent time playing with hardware, it is perfectly clear that
>>>>>>> future computational mathematics efforts need to adapt to using
>>>>>>> parallel processing.
>>>>>>>
>>>>>>> I've spent a fair bit of time thinking about structuring Axiom to
>>>>>>> be parallel. Most past efforts have tried to focus on making a
>>>>>>> particular algorithm parallel, such as a matrix multiply.
>>>>>>>
>>>>>>> But I think that it might be more effective to make each domain
>>>>>>> run in parallel. A computation crosses multiple domains so a
>>>>>>> particular computation could involve multiple parallel copies.
>>>>>>>
>>>>>>> For example, computing the Cylindrical Algebraic Decomposition
>>>>>>> could recursively decompose the plane. Indeed, any tree-recursive
>>>>>>> algorithm could be run in parallel "in the large" by creating new
>>>>>>> running copies of the domain for each sub-problem.
>>>>>>>
>>>>>>> So the question becomes, how does one manage t= his?
>>>>>>>
>>>>>>> A similar problem occurs in robotics where one could have multiple
>>>>>>> wheels, arms, propellers, etc. that need to act independently but
>>>>>>> in coordination.
>>>>>>>
>>>>>>> The robot solution uses ROS2. The three ideas are ROSCORE,
>>>>>>> TOPICS with publish/subscribe, and SERVICES with request/response.
>>>>>>> These are communication paths defined between processes.
>>>>>>>
>>>>>>> ROS2 has a "roscore" which is basically a phonebook of "topics".
>>>>>>> Any process can create or look up the current active topics, e.g.:
>>>>>>>
>>>>>>>    rosnode list
>>>>>>>
>>>>>>> TOPICS:
>>>>>>>
>>>>>>> Any process can PUBLISH a topic (which is basically a typed data
>>>>>>> structure), e.g. the topic /hw with the String data "Hello, World":
>>>>>>>
>>>>>>>    rostopic pub /hw std_msgs/String "Hello, World"
>>>>>>>
>>>>>>> Any process can SUBSCRIBE to a topic, such as /hw, and get a
>>>>>>> copy of the data, e.g.:
>>>>>>>
>>>>>>>    rostopic echo /hw   ==> "Hello, World"
>>>>>>>
>>>>>>> Publishers talk, subscribers listen.
>>>>>>>
>>>>>>>
>>>>>>> SERVICES:
>>>>>>>
>>>>>>> Any process can make a REQUEST of a SERVICE and get a RESPONSE.
>>>>>>> This is basically a remote function call.
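The TOPICS and SERVICES patterns above can be sketched as a tiny in-process broker. This is plain Python for illustration only, not the actual ROS2 API; the class and service names are invented.

```python
# Minimal in-process sketch of the two ROS-style communication patterns
# described above: TOPICS (publish/subscribe) and SERVICES
# (request/response). This is not real ROS2; all names are invented.

import math

class Core:
    """A 'roscore'-like phonebook of topics and services."""
    def __init__(self):
        self.topics = {}     # topic name -> list of subscriber callbacks
        self.services = {}   # service name -> handler function

    def subscribe(self, topic, callback):
        self.topics.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        for callback in self.topics.get(topic, []):
            callback(message)          # every subscriber gets a copy

    def advertise(self, name, handler):
        self.services[name] = handler

    def request(self, name, *args):
        # A request/response call: basically a remote function call,
        # sketched here as a local one.
        return self.services[name](*args)

core = Core()
received = []
core.subscribe("/hw", received.append)
core.publish("/hw", "Hello, World")

# A hypothetical 'gcd' domain exposing one service per function,
# as the "Axiom in parallel?" idea below suggests.
core.advertise("NNI/gcd", math.gcd)

assert received == ["Hello, World"]
assert core.request("NNI/gcd", 12, 18) == 6
```

Each Axiom domain would then be one such node: its functions advertised as services, its results published to any number of subscribers.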
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> Axiom in parallel?
>>>>>>>
>>>>>>> So domains could run, each in its own process. Each could provide
>>>>>>> services, one for each function. Any other process could request
>>>>>>> a computation and get the result as a response. Domains could
>>>>>>> request services from other domains, either waiting for responses
>>>>>>> or continuing while the response is being computed.
>>>>>>>
>>>>>>> The output could be sent anywhere, to a terminal, to a browser,
>>>>>>> to a network, or to another process using the publish/subscribe
>>>>>>> protocol, potentially all at the same time since there can be many
>>>>>>> subscribers to a topic.
>>>>>>>
>>>>>>> Available domains could be dynamically added by announcing
>>>>>>> themselves as new "topics" and could be dynamically looked-up
>>>>>>> at runtime.
>>>>>>>
>>>>>>> This structure allows function-level / domain-level parallelism.
>>>>>>> It is very effective in the robot world and I think it might be a
>>>>>>> good structuring mechanism to allow computational mathematics
>>>>>>> to take advantage of multiple processors in a disciplined fashion.
>>>>>>>
>>>>>>> Axiom has a thousand domains and each could run on its own core.
>>>>>>>
>>>>>>> In addition, notice that each domain is independent of the others.
>>>>>>> So if we want to use BLAS Fortran code, it could just be another
>>>>>>> service node. In fact, any "foreign function" could transparently
>>>>>>> cooperate in a distributed Axiom.
>>>>>>>
>>>>>>> Another key feature is that proofs can be "by node".
>>>>>>>
>>>>>>> Tim
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On 6/5/21, Tim Daly <axiomcas@gmail.com> wrote:
>>>>>>>> Axiom is based on first-class dependent types. Deciding when
>>>>>>>> two types are equivalent may involve computation. See
>>>>>>>> Christiansen, David Thrane, "Checking Dependent Types with
>>>>>>>> Normalization by Evaluation" (2019).
>>>>>>>>
>>>>>>>> This puts an interesting constraint on building types. The
>>>>>>>> constructed type has to export a function to decide if a
>>>>>>>> given type is "equivalent" to itself.
>>>>>>>>
>>>>>>>> The notion of "equivalence" might involve category ideas
>>>>>>>> of natural transformation and univalence. Sigh.
>>>>>>>>
>>>>>>>> That's an interesting design point.
>>>>>>>>
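A toy sketch of why deciding type equivalence may involve computation: two dependent types are judged equal only after their index expressions are evaluated to a normal form. Everything here (the tuple encoding, the `Vector` name) is invented purely for illustration.

```python
# Toy illustration of type equivalence requiring computation: the types
# Vector(2 + 3) and Vector(5) should be judged equivalent, which means
# normalizing (evaluating) the index expressions first. Names invented.

def normalize(ty):
    """Reduce a type expression to a canonical ('normal') form by
    evaluating any deferred index computations."""
    head, *args = ty
    return (head, *[a() if callable(a) else a for a in args])

def equivalent(t1, t2):
    # The 'exported' equality function: compare normal forms.
    return normalize(t1) == normalize(t2)

vec_a = ("Vector", lambda: 2 + 3)   # length computed at 'runtime'
vec_b = ("Vector", 5)

assert equivalent(vec_a, vec_b)
assert not equivalent(vec_a, ("Vector", 6))
```

This is the normalization-by-evaluation shape in miniature: equality is decided on values the checker computes, not on the syntax of the type expressions.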
>>>>>>>> Tim
>>>>>>>>
>>>>>>>>
>>>>>>>> On 5/5/21, Tim Daly <axiomcas@gmail.com> wrote:
>>>>>>>>> It is interesting that programmers' eyes and expectations adapt
>>>>>>>>> to the tools they use. For instance, I use emacs and expect to
>>>>>>>>> work directly in files and multiple buffers. When I try to use
>>>>>>>>> one of the many IDE tools I find they tend to "get in the way".
>>>>>>>>> I already know or can quickly find whatever they try to tell me.
>>>>>>>>> If you use an IDE you probably find emacs "too sparse" for
>>>>>>>>> programming.
>>>>>>>>>
>>>>>>>>> Recently I've been working in a sparse programming environment.
>>>>>>>>> I'm exploring the question of running a proof checker in an FPGA.
>>>>>>>>> The FPGA development tools are painful at best and not intuitive
>>>>>>>>> since you SEEM to be programming but you're actually describing
>>>>>>>>> hardware gates, connections, and timing. This is an environment
>>>>>>>>> where everything happens all-at-once and all-the-time (like the
>>>>>>>>> circuits in your computer). It is the "assembly language of
>>>>>>>>> circuits". Naturally, my eyes have adapted to this rather raw
>>>>>>>>> level.
>>>>>>>>>
>>>>>>>>> That said, I'm normally doing literate programming all the time.
>>>>>>>>> My typical file is a document which is a mixture of latex and
>>>>>>>>> lisp. It is something of a shock to return to that world. It is
>>>>>>>>> clear why people who program in Python find lisp to be a "sea of
>>>>>>>>> parens". Yet as a lisp programmer, I don't even see the parens,
>>>>>>>>> just code.
>>>>>>>>>
>>>>>>>>> It takes a few minutes in a literate document to adapt vision to
>>>>>>>>> see the latex / lisp combination as natural. The latex markup,
>>>>>>>>> like the lisp parens, eventually just disappears. What remains
>>>>>>>>> is just lisp and natural language text.
>>>>>>>>>
>>>>>>>>> This seems painful at first but eyes quickly adapt. The upside
>>>>>>>>> is that there is always a "finished" document that describes the
>>>>>>>>> state of the code. The overhead of writing a paragraph to
>>>>>>>>> describe a new function or change a paragraph to describe the
>>>>>>>>> changed function is very small.
>>>>>>>>>
>>>>>>>>> Using a Makefile I latex the document to generate a current PDF
>>>>>>>>> and then I extract, load, and execute the code. This loop catches
>>>>>>>>> errors in both the latex and the source code. Keeping an open
>>>>>>>>> file in my pdf viewer shows all of the changes in the document
>>>>>>>>> after every run of make. That way I can edit the book as easily
>>>>>>>>> as the code.
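The "extract" step of that make loop can be sketched as a few lines that pull code chunks out of a literate latex/lisp document. The `\begin{chunk}` markers below are an assumed convention for illustration, not necessarily the exact markup in use.

```python
# Hedged sketch of the 'extract' step in the make loop described above:
# pull the code out of a literate latex/lisp document. The chunk markers
# (\begin{chunk}{name} ... \end{chunk}) are only an assumed convention.

import re

def tangle(document):
    """Return the concatenated code bodies of every chunk."""
    pattern = re.compile(r"\\begin{chunk}{[^}]*}\n(.*?)\\end{chunk}",
                         re.DOTALL)
    return "".join(pattern.findall(document))

doc = r"""Some latex prose explaining the function.
\begin{chunk}{gcd}
(defun my-gcd (a b)
  (if (zerop b) a (my-gcd b (mod a b))))
\end{chunk}
More prose describing a later change."""

code = tangle(doc)
assert "my-gcd" in code
```

A Makefile target would run this extractor, load the result into lisp, and execute it, so both the prose and the code are checked on every `make`.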
>>>>>>>>>
>>>>>>>>> Ultimately I find that writing the book while writing the code is
>>>>>>>>> more productive. I don't have to remember why I wrote something
>>>>>>>>> since the explanation is already there.
>>>>>>>>>
>>>>>>>>> We all have our own way of programming and our own tools.
>>>>>>>>> But I find literate programming to be a real advance over IDE
>>>>>>>>> style programming and "raw code" programming.
>>>>>>>>>
>>>>>>>>> Tim
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On 2/27/21, Tim Daly <axiomcas@gmail.com> wrote:
>>>>>>>>>> The systems I use have the interesting property of
>>>>>>>>>> "Living within the compiler".
>>>>>>>>>>
>>>>>>>>>> Lisp, Forth, Emacs, and other systems that present themselves
>>>>>>>>>> through the Read-Eval-Print-Loop (REPL) allow the ability to
>>>>>>>>>> deeply interact with the system, shaping it to your need.
>>>>>>>>>>
>>>>>>>>>> My current thread of study is software architecture. See
>>>>>>>>>> https://www.youtube.com/watch?v=W2hagw1VhhI&feature=youtu.be
>>>>>>>>>> and https://www.georgefairbanks.com/videos/
>>>>>>>>>>
>>>>>>>>>> My current thinking on SANE involves the ability to
>>>>>>>>>> dynamically define categories, representations, and functions
>>>>>>>>>> along with "composition functions" that permit choosing a
>>>>>>>>>> combination at the time of use.
>>>>>>>>>>
>>>>>>>>>> You might want a domain for handling polynomials. There are
>>>>>>>>>> a lot of choices, depending on your use case. You might want
>>>>>>>>>> different representations. For example, you might want dense,
>>>>>>>>>> sparse, recursive, or "machine compatible fixnums" (e.g. to
>>>>>>>>>> interface with C code). If these don't exist it ought to be
>>>>>>>>>> possible to create them. Such "lego-like" building blocks
>>>>>>>>>> require careful thought about creating "fully factored"
>>>>>>>>>> objects.
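A minimal sketch of such a "composition function" choosing a polynomial representation at the time of use. The registry and representation names are invented; this only illustrates the lego-like composition, not any actual SANE machinery.

```python
# Sketch of a 'composition function' choosing a polynomial
# representation at the time of use. All names here are invented.

def dense(coeffs):
    # Coefficient list, lowest degree first, zeros kept.
    return {"rep": "dense", "data": list(coeffs)}

def sparse(coeffs):
    # Only the nonzero (degree, coefficient) pairs are stored.
    pairs = [(d, c) for d, c in enumerate(coeffs) if c != 0]
    return {"rep": "sparse", "data": pairs}

REPRESENTATIONS = {"dense": dense, "sparse": sparse}

def make_polynomial(coeffs, rep="dense"):
    """Compose a polynomial value from a chosen representation."""
    return REPRESENTATIONS[rep](coeffs)

p = make_polynomial([0, 0, 0, 5], rep="sparse")
assert p["data"] == [(3, 5)]
```

New representations can be registered at runtime, which is the "if these don't exist it ought to be possible to create them" requirement in miniature.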
>>>>>>>>>>
>>>>>>>>>> Given that goal, the traditional barrier of "compiler" vs
>>>>>>>>>> "interpreter" does not seem useful. It is better to "live
>>>>>>>>>> within the compiler", which gives the ability to define new
>>>>>>>>>> things "on the fly".
>>>>>>>>>>
>>>>>>>>>> Of course, the SANE compiler is going to want an associated
>>>>>>>>>> proof of the functions you create along with the other parts
>>>>>>>>>> such as its category hierarchy and representation properties.
>>>>>>>>>>
>>>>>>>>>> There is no such thing as a simple= job. :-)
>>>>>>>>>>
>>>>>>>>>> Tim
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On 2/18/21, Tim Daly <axiomcas@gmail.com>= wrote:
>>>>>>>>>>> The Axiom SANE compiler / interpreter has a few design points.
>>>>>>>>>>>
>>>>>>>>>>> 1) It needs to mix interpreted and compiled code in the
>>>>>>>>>>> same function. SANE allows dynamic construction of code as
>>>>>>>>>>> well as dynamic type construction at runtime. Both of these
>>>>>>>>>>> can occur in a runtime object. So there is potentially a
>>>>>>>>>>> mixture of interpreted and compiled code.
>>>>>>>>>>>
>>>>>>>>>>> 2) It needs to perform type resolution at compile time
>>>>>>>>>>> without overhead where possible. Since this is not always
>>>>>>>>>>> possible there needs to be a "prefix thunk" that will
>>>>>>>>>>> perform the resolution. Trivially, for example, if we have
>>>>>>>>>>> a + function we need to type-resolve the arguments.
>>>>>>>>>>>
>>>>>>>>>>> However, if we can prove at compile time that the types are
>>>>>>>>>>> both bounded-NNI and the result is bounded-NNI (i.e. fixnum
>>>>>>>>>>> in lisp) then we can inline a call to + at runtime. If not,
>>>>>>>>>>> we might have + applied to NNI and POLY(FLOAT), which
>>>>>>>>>>> requires a thunk to resolve types. The thunk could even
>>>>>>>>>>> "specialize and compile" the code before executing it.
>>>>>>>>>>>
>>>>>>>>>>> It turns out that the Forth "threaded-interpreted"
>>>>>>>>>>> language model provides an efficient and effective way to
>>>>>>>>>>> do this. [0]
>>>>>>>>>>> Type resolution can be "inserted" in intermediate thunks.
>>>>>>>>>>> The model also supports dynamic overloading and tail
>>>>>>>>>>> recursion.
>>>>>>>>>>>
>>>>>>>>>>> Combining high-level CLOS code with low-level threading
>>>>>>>>>>> gives an easy to understand and robust design.
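A rough rendering of the "prefix thunk" idea in Python: the first call resolves the argument types, then the resolved implementation is cached so later calls skip resolution. The dispatch table and function names are invented for illustration; the real system would be Lisp with compiled specializations.

```python
# Sketch of a 'prefix thunk' that resolves argument types at runtime
# and then specializes: once the types are known, the resolved
# implementation is cached so later calls skip resolution entirely.
# All names are invented.

def fixnum_add(a, b):
    return a + b                     # the fast, 'inlined' case

def generic_add(a, b):
    return a + b                     # stand-in for coercion machinery

DISPATCH = {}                        # (type, type) -> implementation

def plus(a, b):
    key = (type(a), type(b))
    impl = DISPATCH.get(key)
    if impl is None:                 # the thunk: resolve, then cache
        impl = fixnum_add if key == (int, int) else generic_add
        DISPATCH[key] = impl         # 'specialize and compile' step
    return impl(a, b)

assert plus(2, 3) == 5
assert DISPATCH[(int, int)] is fixnum_add
assert plus(2.5, 1.0) == 3.5
```

In a threaded-code implementation the cached entry would be a compiled word patched into the thread, so resolution cost is paid once per type combination.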
>>>>>>>>>>>
>>>>>>>>>>> Tim
>>>>>>>>>>>
>>>>>>>>>>> [0] Loeliger, R.G. "Threaded Interpretive Languages" (1981)
>>>>>>>>>>> ISBN 0-07-038360-X
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On 2/5/21, Tim Daly <axiomcas@gmail.com> wrote:
>>>>>>>>>>>> I've worked hard to make Axiom depend on almost no other
>>>>>>>>>>>> tools so that it would not get caught by "code rot" of
>>>>>>>>>>>> libraries.
>>>>>>>>>>>>
>>>>>>>>>>>> However, I'm also trying to make the new SANE version much
>>>>>>>>>>>> easier to understand and debug. To that end I've been
>>>>>>>>>>>> experimenting with some ideas.
>>>>>>>>>>>>
>>>>>>>>>>>> It should be possible to view source code, of course. But
>>>>>>>>>>>> the source code is not the only, nor possibly the best,
>>>>>>>>>>>> representation of the ideas. In particular, source code
>>>>>>>>>>>> gets compiled into data structures. In Axiom these data
>>>>>>>>>>>> structures really are a graph of related structures.
>>>>>>>>>>>>
>>>>>>>>>>>> For example, looking at the gcd function from NNI, there
>>>>>>>>>>>> is the representation of the gcd function itself. But
>>>>>>>>>>>> there is also a structure that is the REP (and, in the new
>>>>>>>>>>>> system, is separate from the domain).
>>>>>>>>>>>>
>>>>>>>>>>>> Further, there are associated specification and proof
>>>>>>>>>>>> structures. Even further, the domain inherits the category
>>>>>>>>>>>> structures, and from those it inherits logical axioms and
>>>>>>>>>>>> definitions through the proof structure.
>>>>>>>>>>>>
>>>>>>>>>>>> Clearly the gcd function is a node in a much larger graph
>>>>>>>>>>>> structure.
>>>>>>>>>>>>
>>>>>>>>>>>> When trying to decide why code won't compile it would be
>>>>>>>>>>>> useful to be able to see and walk these structures. I've
>>>>>>>>>>>> thought about using the browser but browsers are too weak.
>>>>>>>>>>>> Either everything has to be "in a single tab to show the
>>>>>>>>>>>> graph" or "the nodes of the graph are in different tabs".
>>>>>>>>>>>> Plus, constructing dynamic graphs that change as the
>>>>>>>>>>>> software changes (e.g. by loading a new spad file or
>>>>>>>>>>>> creating a new function) represents the huge problem of
>>>>>>>>>>>> keeping the browser "in sync with the Axiom workspace".
>>>>>>>>>>>> So something more dynamic and embedded is needed.
>>>>>>>>>>>>
>>>>>>>>>>>> Axiom source gets compiled into CLOS data structures.
>>>>>>>>>>>> Each of these new SANE structures has an associated
>>>>>>>>>>>> surface representation, so they can be presented in
>>>>>>>>>>>> user-friendly form.
>>>>>>>>>>>>
>>>>>>>>>>>> Also, since Axiom is literate software, it should be
>>>>>>>>>>>> possible to look at the code in its literate form with
>>>>>>>>>>>> the surrounding explanation.
>>>>>>>>>>>>
>>>>>>>>>>>> Essentially we'd like to have the ability to "deep dive"
>>>>>>>>>>>> into the Axiom workspace, not only for debugging, but
>>>>>>>>>>>> also for understanding what functions are used, where
>>>>>>>>>>>> they come from, what they inherit, and how they are used
>>>>>>>>>>>> in a computation.
>>>>>>>>>>>>
>>>>>>>>>>>> To that end I'm looking at using McClim, a lisp windowing
>>>>>>>>>>>> system. Since the McClim windows would be part of the
>>>>>>>>>>>> lisp image, they have access to display (and modify) the
>>>>>>>>>>>> Axiom workspace at all times.
>>>>>>>>>>>>
>>>>>>>>>>>> The only hesitation is that McClim uses quicklisp and
>>>>>>>>>>>> drags in a lot of other subsystems. It's all lisp, of
>>>>>>>>>>>> course.
>>>>>>>>>>>>
>>>>>>>>>>>> These ideas aren't new. They were available on Symbolics
>>>>>>>>>>>> machines, a truly productive platform and one I sorely
>>>>>>>>>>>> miss.
>>>>>>>>>>>>
>>>>>>>>>>>> Tim
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> On 1/19/21, Tim Daly <axiomcas@gmail.com> wrote:
>>>>>>>>>>>>> Also of interest is the talk
>>>>>>>>>>>>> "The Unreasonable Effectiveness of Dynamic Typing for
>>>>>>>>>>>>> Practical Programs"
>>>>>>>>>>>>> https://vimeo.com/74354480
>>>>>>>>>>>>> which questions whether static typing really has any
>>>>>>>>>>>>> benefit.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Tim
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> On 1/19/21, Tim Daly <axiomcas@gmail.com> wrote:
>>>>>>>>>>>>>> Peter Naur wrote an article of interest:
>>>>>>>>>>>>>> http://pages.cs.wisc.edu/~remzi/Naur.pdf
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> In particular, it mirrors my notion that Axiom needs
>>>>>>>>>>>>>> to embrace literate programming so that the "theory
>>>>>>>>>>>>>> of the problem" is presented as well as the "theory
>>>>>>>>>>>>>> of the solution". I quote the introduction:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> This article is, to my mind, the most accurate account
>>>>>>>>>>>>>> of what goes on in designing and coding a program.
>>>>>>>>>>>>>> I refer to it regularly when discussing how much
>>>>>>>>>>>>>> documentation to create, how to pass along tacit
>>>>>>>>>>>>>> knowledge, and the value of the XP's metaphor-setting
>>>>>>>>>>>>>> exercise. It also provides a way to examine a
>>>>>>>>>>>>>> methodology's economic structure.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> In the article, which follows, note that the quality
>>>>>>>>>>>>>> of the designing programmer's work is related to the
>>>>>>>>>>>>>> quality of the match between his theory of the problem
>>>>>>>>>>>>>> and his theory of the solution. Note that the quality
>>>>>>>>>>>>>> of a later programmer's work is related to the match
>>>>>>>>>>>>>> between his theories and the previous programmer's
>>>>>>>>>>>>>> theories.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Using Naur's ideas, the designer's job is not to pass
>>>>>>>>>>>>>> along "the design" but to pass along "the theories"
>>>>>>>>>>>>>> driving the design. The latter goal is more useful and
>>>>>>>>>>>>>> more appropriate. It also highlights that knowledge of
>>>>>>>>>>>>>> the theory is tacit in the owning, and so passing
>>>>>>>>>>>>>> along the theory requires passing along both explicit
>>>>>>>>>>>>>> and tacit knowledge.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Tim
>>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>
From: Tim Daly
Date: Wed, 23 Mar 2022 01:53:03 -0400
Subject: Re: Axiom musings...
To: axiom-dev, Tim Daly

I have a deep interest in self-modifying programs. These are
trivial to create in lisp. The question is how to structure a
computer algebra program so it could be dynamically modified
but still well-structured.
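One minimal sketch of such a dynamically modifiable structure, assuming nothing about Axiom's internals: each piece of the program is a "cell" that references other cells and is re-evaluated through the dependency chain, so any cell can be redefined at runtime while the rest of the structure stays intact. All names here are invented.

```python
# Hedged sketch of a dynamically modifiable program: each piece is a
# cell object whose formula may reference other cells, and evaluation
# follows the reference chain. Cells can be inserted or redefined at
# any time. All names are invented for illustration.

class Cell:
    def __init__(self, formula):
        self.formula = formula       # a function of other cells

    def value(self):
        return self.formula()        # re-evaluate through the chain

a = Cell(lambda: 2)
b = Cell(lambda: 3)
c = Cell(lambda: a.value() + b.value())   # c references a and b

assert c.value() == 5
a.formula = lambda: 10               # modify a cell at runtime...
assert c.value() == 13               # ...and dependents see the change
```

A production version would cache values and track dependents so only affected cells recompute, but the point is that the program's structure is data that can be edited while it runs.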
This is especially important since the user can create new
logical types at runtime.

One innovation, which I have never seen anywhere, is to
structure the program in a spreadsheet fashion. Spreadsheet
cells can reference other spreadsheet cells. Spreadsheets have
a well-defined evaluation method. This is equivalent to a form
of object-oriented programming where each cell is an object and
has a well-defined inheritance hierarchy through other cells.
Simple manipulation allows insertion, deletion, or modification
of these chains at any time.

So by structuring a computer algebra program like a spreadsheet
I am able to dynamically adjust a program to fit the problem to
be solved, track which cells are needed, and optimize the
program.

Spreadsheets do this naturally but I'm unaware of any
non-spreadsheet program that relies on that organization.

Tim

On Sun, Mar 13, 2022 at 4:01 AM Tim Daly wrote:

> Axiom has an awkward 'attributes' category structure.
>
> In the SANE version it is clear that these attributes are much
> closer to logic 'definitions'. As a result one of the changes
> is to create a new 'category'-type structure for definitions.
> There will be a new keyword, like the category keyword,
> 'definition'.
>
> Tim
>
> On Fri, Mar 11, 2022 at 9:46 AM Tim Daly wrote:
>
>> The github lockout continues...
>>
>> I'm spending some time adding examples to source code.
>>
>> Any function can have ++X comments added. These will
>> appear as examples when the function is )display'ed. For example,
>> in PermutationGroup there is a function 'strongGenerators'
>> defined as:
>>
>> strongGenerators : % -> L PERM S
>> ++ strongGenerators(gp) returns strong generators for
>> ++ the group gp.
>> ++
>> ++X S:List(Integer) := [1,2,3,4]
>> ++X G := symmetricGroup(S)
>> ++X strongGenerators(G)
>>
>> Later, in the interpreter we see:
>>
>> )d op strongGenerators
>>
>> There is one exposed function called strongGenerators :
>>    [1] PermutationGroup(D2) -> List(Permutation(D2)) from
>>            PermutationGroup(D2)
>>    if D2 has SETCAT
>>
>> Examples of strongGenerators from PermutationGroup
>>
>> S:List(Integer) := [1,2,3,4]
>> G := symmetricGroup(S)
>> strongGenerators(G)
>>
>> This will show a working example for functions that the
>> user can copy and use. It is especially useful to show how
>> to construct working arguments.
>>
>> These "example" functions are run at build time when
>> the make command looks like
>>
>>    make TESTSET=alltests
>>
>> I hope to add this documentation to all Axiom functions.
>>
>> In addition, the plan is to add these function calls to the
>> usual test documentation. That means that all of these examples
>> will be run and show their output in the final distribution
>> (mnt/ubuntu/doc/src/input/*.dvi files) so the user can view
>> the expected output.
>>
>> Tim
>>
>> On Fri, Feb 25, 2022 at 6:05 PM Tim Daly wrote:
>>
>>> It turns out that creating SPAD-looking output is trivial
>>> in Common Lisp. Each class can have a custom print
>>> routine so signatures and ++ comments can each be
>>> printed with their own format.
>>>
>>> To ensure that I maintain compatibility I'll be printing
>>> the categories and domains so they look like SPAD code,
>>> at least until I get the proof technology integrated. I will
>>> probably specialize the proof printers to look like the
>>> original LEAN proof syntax.
>>>
>>> Internally, however, it will all be Common Lisp.
>>>
>>> Common Lisp makes so many desirable features so easy.
>>> It is possible to trace dynamically at any level. One could
>>> even write a trace that showed how Axiom arrived at the
>>> solution.
>>> Any domain could have special case output syntax >>> without affecting any other domain so one could write a >>> tree-like output for proofs. Using Greek characters is trivial >>> so the input and output notation is more mathematical. >>> >>> Tim >>> >>> >>> On Thu, Feb 24, 2022 at 10:24 AM Tim Daly wrote: >>> >>>> Axiom's SPAD code compiles to Common Lisp. >>>> The AKCL version of Common Lisp compiles to C. >>>> Three languages and two compilers are a lot to maintain. >>>> Further, there are very few people able to write SPAD >>>> and even fewer people able to maintain it. >>>> >>>> I've decided that the SANE version of Axiom will be >>>> implemented in pure Common Lisp. I've outlined Axiom's >>>> category / type hierarchy in the Common Lisp Object >>>> System (CLOS). I am now experimenting with re-writing >>>> the functions into Common Lisp. >>>> >>>> This will have several long-term effects. It simplifies >>>> the implementation issues. SPAD code blocks a lot of >>>> actions and optimizations that Common Lisp provides. >>>> The Common Lisp language has many more people >>>> who can read, modify, and maintain code. It provides for >>>> interoperability with other Common Lisp projects with >>>> no effort. Common Lisp is an international standard >>>> which ensures that the code will continue to run. >>>> >>>> The input / output mathematics will remain the same. >>>> Indeed, with the new generalizations for first-class >>>> dependent types it will be more general. >>>> >>>> This is a big change, similar to eliminating BOOT code >>>> and moving to Literate Programming. This will provide a >>>> better platform for future research work. Current research >>>> is focused on merging Axiom's computer algebra mathematics >>>> with Lean's proof language. The goal is to create a system for >>>> "computational mathematics". >>>> >>>> Research is the whole point of Axiom.
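[Editorial sketch] The per-class print routines described above (CLOS `print-object` specialized so categories print as SPAD-looking code) can be illustrated in Python, which is used here only because it is easy to test; the real mechanism is Common Lisp, and all class and signature names below are invented for illustration:

```python
# Hypothetical sketch, not actual Axiom/SANE code: each class carries its
# own print routine (here __repr__, the Python analogue of specializing
# print-object in CLOS) so that a category prints in SPAD-like syntax
# instead of a raw object dump.

class Category:
    name = "Category"
    signatures = []          # (operation, type) pairs exported by the category

    def __repr__(self):
        # Custom printer: render the category the way SPAD source looks.
        sigs = "\n".join(f"    {n} : {t}" for n, t in self.signatures)
        return f"{self.name}() : Category == with\n{sigs}"

class Ring(Category):
    name = "Ring"
    signatures = [("+", "(%, %) -> %"),
                  ("*", "(%, %) -> %"),
                  ("1", "() -> %")]

print(Ring())
```

Printing a `Ring` instance then shows SPAD-style signatures rather than a default object representation, which is the compatibility point made above.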
>>>> Tim >>>> >>>> >>>> On Sat, Jan 22, 2022 at 9:16 PM Tim Daly wrote: >>>> >>>>> I can't stress enough how important it is to listen to Hamming's talk >>>>> https://www.youtube.com/watch?v=a1zDuOPkMSw >>>>> >>>>> Axiom will begin to die the day I stop working on it. >>>>> >>>>> However, proving Axiom correct "down to the metal", is fundamental. >>>>> It will merge computer algebra and logic, spawning years of new >>>>> research. >>>>> >>>>> Work on fundamental problems. >>>>> >>>>> Tim >>>>> >>>>> >>>>> On Thu, Dec 30, 2021 at 6:46 PM Tim Daly wrote: >>>>> >>>>>> One of the interesting questions when obtaining a result >>>>>> is "what functions were called and what was their return value?" >>>>>> Otherwise known as the "show your work" idea. >>>>>> >>>>>> There is an idea called the "writer monad" [0], usually >>>>>> implemented to facilitate logging. We can exploit this >>>>>> idea to provide "show your work" capability. Each function >>>>>> can provide this information inside the monad enabling the >>>>>> question to be answered at any time. >>>>>> >>>>>> For those unfamiliar with the monad idea, the best explanation >>>>>> I've found is this video [1]. >>>>>> >>>>>> Tim >>>>>> >>>>>> [0] Deriving the writer monad from first principles >>>>>> https://williamyaoh.com/posts/2020-07-26-deriving-writer-monad.html >>>>>> >>>>>> [1] The Absolute Best Intro to Monads for Software Engineers >>>>>> https://www.youtube.com/watch?v=C2w45qRc3aU >>>>>> >>>>>> On Mon, Dec 13, 2021 at 12:30 AM Tim Daly wrote: >>>>>> >>>>>>> ...(snip)... >>>>>>> >>>>>>> Common Lisp has an "open compiler". That allows the ability >>>>>>> to deeply modify compiler behavior using compiler macros >>>>>>> and macros in general. CLOS takes advantage of this to add >>>>>>> typed behavior into the compiler in a way that ALLOWS strict >>>>>>> typing such as found in constructive type theory and ML. >>>>>>> Judgments, a la Crary, are front-and-center.
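[Editorial sketch] The "writer monad" idea above can be made concrete in a few lines. This is an illustrative Python sketch (the real system would be Common Lisp, and the step functions are invented): every step returns a value paired with a log, and `bind` threads the accumulated log through the computation so any result can "show its work":

```python
# Minimal writer monad: a monadic value is (value, log).

def unit(x):
    # Wrap a plain value with an empty log.
    return (x, [])

def bind(m, f):
    # Run f on the value, appending f's log to the accumulated log.
    value, log = m
    new_value, new_log = f(value)
    return (new_value, log + new_log)

# Two example steps that each record what they did.
def double(x):
    return (2 * x, [f"double({x}) = {2 * x}"])

def inc(x):
    return (x + 1, [f"inc({x}) = {x + 1}"])

result, work = bind(bind(unit(5), double), inc)
print(result)   # 11
print(work)     # ['double(5) = 10', 'inc(10) = 11']
```

The `work` list is exactly the "what functions were called and what was their return value" record, available at any point in the computation.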
>>>>>>> >>>>>>> Whether you USE the discipline afforded is the real question. >>>>>>> >>>>>>> Indeed, the Axiom research struggle is essentially one of how >>>>>>> to have a disciplined use of first-class dependent types. The >>>>>>> struggle raises issues of, for example, compiling a dependent >>>>>>> type whose argument is recursive in the compiled type. Since >>>>>>> the new type is first-class it can be constructed at what you >>>>>>> improperly call "run-time". However, it appears that the recursive >>>>>>> type may have to call the compiler at each recursion to generate >>>>>>> the next step since in some cases it cannot generate "closed code". >>>>>>> >>>>>>> I am embedding proofs (in LEAN language) into the type >>>>>>> hierarchy so that theorems, which depend on the type hierarchy, >>>>>>> are correctly inherited. The compiler has to check the proofs of >>>>>>> functions >>>>>>> at compile time using these. Hacking up nonsense just won't cut it. >>>>>>> Think >>>>>>> of the problem of embedding LEAN proofs in ML or ML in LEAN. >>>>>>> (Actually, Jeremy Avigad might find that research interesting.) >>>>>>> >>>>>>> So Matrix(3,3,Float) has inverses (assuming Float is a >>>>>>> field (cough)). The type inherits this theorem and proofs of >>>>>>> functions can use this. But Matrix(3,4,Integer) does not have >>>>>>> inverses so the proofs cannot use this. The type hierarchy has >>>>>>> to ensure that the proper theorems get inherited. >>>>>>> >>>>>>> Making proof technology work at compile time is hard. >>>>>>> (Worse yet, LEAN is a moving target. Sigh.) >>>>>>> >>>>>>> >>>>>>> >>>>>>> On Thu, Nov 25, 2021 at 9:43 AM Tim Daly wrote: >>>>>>> >>>>>>>> >>>>>>>> As you know I've been re-architecting Axiom to use first class >>>>>>>> dependent types and proving the algorithms correct. For example, >>>>>>>> the GCD of natural numbers or the GCD of polynomials.
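[Editorial sketch] The Matrix(3,3,Float) versus Matrix(3,4,Integer) point above can be mimicked, very crudely, in Python: a type constructor that builds a class at runtime and only attaches `inverse` when the shape is square. This is an invented illustration, a pale shadow of first-class dependent types with inherited theorems, not the actual mechanism:

```python
# Hypothetical sketch: the "invertibility theorem" is only inherited by
# types whose dependent arguments make it true (rows == cols).

def Matrix(rows, cols):
    def inverse(self):
        ...  # actual inversion omitted; only the operation's availability matters

    attrs = {"rows": rows, "cols": cols}
    if rows == cols:
        # Square matrices inherit the inverse operation (and, in SANE,
        # the theorems and proofs that justify it).
        attrs["inverse"] = inverse
    return type(f"Matrix{rows}x{cols}", (object,), attrs)

M33 = Matrix(3, 3)
M34 = Matrix(3, 4)
print(hasattr(M33(), "inverse"))  # True
print(hasattr(M34(), "inverse"))  # False
```

The type constructor has to evaluate its arguments to decide which theorems apply, which is exactly why the construction must happen at "construction time" rather than being fixed once in source text.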
>>>>>>>> >>>>>>>> The idea involves "boxing up" the proof with the algorithm (aka >>>>>>>> proof carrying code) in the ELF file (under a crypto hash so it >>>>>>>> can't be changed). >>>>>>>> >>>>>>>> Once the code is running on the CPU, the proof is run in parallel >>>>>>>> on the field programmable gate array (FPGA). Intel data center >>>>>>>> servers have CPUs with built-in FPGAs these days. >>>>>>>> >>>>>>>> There is a bit of a disconnect, though. The GCD code is compiled >>>>>>>> machine code but the proof is LEAN-level. >>>>>>>> >>>>>>>> What would be ideal is if the compiler not only compiled the GCD >>>>>>>> code to machine code, it also compiled the proof to "machine code". >>>>>>>> That is, for each machine instruction, the FPGA proof checker >>>>>>>> would ensure that the proof was not violated at the individual >>>>>>>> instruction level. >>>>>>>> >>>>>>>> What does it mean to "compile a proof to the machine code level"? >>>>>>>> >>>>>>>> The Milawa effort (Myre14.pdf) does incremental proofs in layers. >>>>>>>> To quote from the article [0]: >>>>>>>> >>>>>>>> We begin with a simple proof checker, call it A, which is short >>>>>>>> enough to verify by the ``social process'' of mathematics -- and >>>>>>>> more recently with a theorem prover for a more expressive logic. >>>>>>>> >>>>>>>> We then develop a series of increasingly powerful proof checkers, >>>>>>>> call them B, C, D, and so on. We show each of these programs only >>>>>>>> accepts the same formulas as A, using A to verify B, and B to >>>>>>>> verify >>>>>>>> C, and so on. Then, since we trust A, and A says B is >>>>>>>> trustworthy, we >>>>>>>> can trust B. Then, since we trust B, and B says C is >>>>>>>> trustworthy, we >>>>>>>> can trust C. >>>>>>>> >>>>>>>> This gives a technique for "compiling the proof" down to the >>>>>>>> machine >>>>>>>> code level.
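[Editorial sketch] The Milawa bootstrapping chain quoted above can be caricatured in a few lines of Python. "Proofs" here are just tagged strings and both checkers are trivial stand-ins; the point is only the shape of the trust argument, not real proof checking:

```python
# Illustrative only: checker A is the small, hand-auditable base checker;
# checker B is a more powerful reimplementation we do NOT trust a priori.

def checker_a(claim, proof):
    # Trusted by inspection ("social process" of mathematics).
    return proof == "proof-of:" + claim

def checker_b(claim, proof):
    # Candidate checker; must be certified before we rely on it.
    return proof == "proof-of:" + claim

# Trust chain: A checks the claim that B is faithful. Since we trust A,
# and A says B is trustworthy, we can now trust B (and B can certify C...).
claim = "checker_b accepts exactly the formulas checker_a accepts"
trust_b = checker_a(claim, "proof-of:" + claim)
print(trust_b)  # True
```

Iterating this step is what carries trust from the small auditable checker down through increasingly powerful ones, and, in the aspiration above, eventually down to the machine code level.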
Ideally, the compiler would have judgments for each >>>>>>>> step of >>>>>>>> the compilation so that each compile step has a justification. I >>>>>>>> don't >>>>>>>> know of any compiler that does this yet. (References welcome). >>>>>>>> >>>>>>>> At the machine code level, there are techniques that would allow >>>>>>>> the FPGA proof to "step in sequence" with the executing code. >>>>>>>> Some work has been done on using "Hoare Logic for Realistically >>>>>>>> Modelled Machine Code" (paper attached, Myre07a.pdf), >>>>>>>> "Decompilation into Logic -- Improved" (Myre12a.pdf). >>>>>>>> >>>>>>>> So the game is to construct a GCD over some type (Nats, Polys, etc. >>>>>>>> Axiom has 22), compile the dependent type GCD to machine code. >>>>>>>> In parallel, the proof of the code is compiled to machine code. The >>>>>>>> pair is sent to the CPU/FPGA and, while the algorithm runs, the FPGA >>>>>>>> ensures the proof is not violated, instruction by instruction. >>>>>>>> >>>>>>>> (I'm ignoring machine architecture issues such as pipelining, >>>>>>>> out-of-order, >>>>>>>> branch prediction, and other machine-level things to ponder. I'm >>>>>>>> looking >>>>>>>> at the RISC-V Verilog details by various people to understand >>>>>>>> better but >>>>>>>> it is still a "misty fog" for me.) >>>>>>>> >>>>>>>> The result is proven code "down to the metal". >>>>>>>> >>>>>>>> Tim >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> [0] >>>>>>>> https://www.cs.utexas.edu/users/moore/acl2/manuals/current/manual/index-seo.php/ACL2____MILAWA >>>>>>>> >>>>>>>> On Thu, Nov 25, 2021 at 6:05 AM Tim Daly >>>>>>>> wrote: >>>>>>>> >>>>>>>>> >>>>>>>>> As you know I've been re-architecting Axiom to use first class >>>>>>>>> dependent types and proving the algorithms correct. For example, >>>>>>>>> the GCD of natural numbers or the GCD of polynomials.
>>>>>>>>> >>>>>>>>> The idea involves "boxing up" the proof with the algorithm (aka >>>>>>>>> proof carrying code) in the ELF file (under a crypto hash so it >>>>>>>>> can't be changed). >>>>>>>>> >>>>>>>>> Once the code is running on the CPU, the proof is run in parallel >>>>>>>>> on the field programmable gate array (FPGA). Intel data center >>>>>>>>> servers have CPUs with built-in FPGAs these days. >>>>>>>>> >>>>>>>>> There is a bit of a disconnect, though. The GCD code is compiled >>>>>>>>> machine code but the proof is LEAN-level. >>>>>>>>> >>>>>>>>> What would be ideal is if the compiler not only compiled the GCD >>>>>>>>> code to machine code, it also compiled the proof to "machine code". >>>>>>>>> That is, for each machine instruction, the FPGA proof checker >>>>>>>>> would ensure that the proof was not violated at the individual >>>>>>>>> instruction level. >>>>>>>>> >>>>>>>>> What does it mean to "compile a proof to the machine code level"? >>>>>>>>> >>>>>>>>> The Milawa effort (Myre14.pdf) does incremental proofs in layers. >>>>>>>>> To quote from the article [0]: >>>>>>>>> >>>>>>>>> We begin with a simple proof checker, call it A, which is short >>>>>>>>> enough to verify by the ``social process'' of mathematics -- and >>>>>>>>> more recently with a theorem prover for a more expressive logic. >>>>>>>>> >>>>>>>>> We then develop a series of increasingly powerful proof >>>>>>>>> checkers, >>>>>>>>> call them B, C, D, and so on. We show each of these programs only >>>>>>>>> accepts the same formulas as A, using A to verify B, and B to >>>>>>>>> verify >>>>>>>>> C, and so on. Then, since we trust A, and A says B is >>>>>>>>> trustworthy, we >>>>>>>>> can trust B. Then, since we trust B, and B says C is >>>>>>>>> trustworthy, we >>>>>>>>> can trust C. >>>>>>>>> >>>>>>>>> This gives a technique for "compiling the proof" down to the >>>>>>>>> machine >>>>>>>>> code level.
Ideally, the compiler would have judgments for each >>>>>>>>> step of >>>>>>>>> the compilation so that each compile step has a justification. I >>>>>>>>> don't >>>>>>>>> know of any compiler that does this yet. (References welcome). >>>>>>>>> >>>>>>>>> At the machine code level, there are techniques that would allow >>>>>>>>> the FPGA proof to "step in sequence" with the executing code. >>>>>>>>> Some work has been done on using "Hoare Logic for Realistically >>>>>>>>> Modelled Machine Code" (paper attached, Myre07a.pdf), >>>>>>>>> "Decompilation into Logic -- Improved" (Myre12a.pdf). >>>>>>>>> >>>>>>>>> So the game is to construct a GCD over some type (Nats, Polys, etc. >>>>>>>>> Axiom has 22), compile the dependent type GCD to machine code. >>>>>>>>> In parallel, the proof of the code is compiled to machine code. The >>>>>>>>> pair is sent to the CPU/FPGA and, while the algorithm runs, the >>>>>>>>> FPGA >>>>>>>>> ensures the proof is not violated, instruction by instruction. >>>>>>>>> >>>>>>>>> (I'm ignoring machine architecture issues such as pipelining, >>>>>>>>> out-of-order, >>>>>>>>> branch prediction, and other machine-level things to ponder. I'm >>>>>>>>> looking >>>>>>>>> at the RISC-V Verilog details by various people to understand >>>>>>>>> better but >>>>>>>>> it is still a "misty fog" for me.) >>>>>>>>> >>>>>>>>> The result is proven code "down to the metal". >>>>>>>>> >>>>>>>>> Tim >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> [0] >>>>>>>>> https://www.cs.utexas.edu/users/moore/acl2/manuals/current/manual/index-seo.php/ACL2____MILAWA >>>>>>>>> >>>>>>>>> On Sat, Nov 13, 2021 at 5:28 PM Tim Daly >>>>>>>>> wrote: >>>>>>>>> >>>>>>>>>> Full support for general, first-class dependent types requires >>>>>>>>>> some changes to the Axiom design. That implies some language >>>>>>>>>> design questions.
>>>>>>>>>> >>>>>>>>>> Given that mathematics is such a general subject with a lot of >>>>>>>>>> "local" notation and ideas (witness logical judgment notation) >>>>>>>>>> careful thought is needed to design a language that is able to >>>>>>>>>> handle a wide range. >>>>>>>>>> >>>>>>>>>> Normally language design is a two-level process. The language >>>>>>>>>> designer creates a language and then an implementation. Various >>>>>>>>>> design choices affect the final language. >>>>>>>>>> >>>>>>>>>> There is "The Metaobject Protocol" (MOP) >>>>>>>>>> >>>>>>>>>> https://www.amazon.com/Art-Metaobject-Protocol-Gregor-Kiczales/dp/0262610744 >>>>>>>>>> which encourages a three-level process. The language designer >>>>>>>>>> works at a Metalevel to design a family of languages, then the >>>>>>>>>> language specializations, then the implementation. A MOP design >>>>>>>>>> allows the language user to optimize the language to their >>>>>>>>>> problem. >>>>>>>>>> >>>>>>>>>> A simple paper on the subject is "Metaobject Protocols" >>>>>>>>>> https://users.cs.duke.edu/~vahdat/ps/mop.pdf >>>>>>>>>> >>>>>>>>>> Tim >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> On Mon, Oct 25, 2021 at 7:42 PM Tim Daly >>>>>>>>>> wrote: >>>>>>>>>> >>>>>>>>>>> I have a separate thread of research on Self-Replicating Systems >>>>>>>>>>> (ref: Kinematics of Self Reproducing Machines >>>>>>>>>>> http://www.molecularassembler.com/KSRM.htm) >>>>>>>>>>> >>>>>>>>>>> which led to watching "Strange Dreams of Stranger Loops" by Will >>>>>>>>>>> Byrd >>>>>>>>>>> https://www.youtube.com/watch?v=AffW-7ika0E >>>>>>>>>>> >>>>>>>>>>> Will referenced a PhD Thesis by Jon Doyle >>>>>>>>>>> "A Model for Deliberation, Action, and Introspection" >>>>>>>>>>> >>>>>>>>>>> I also read the thesis by J.C.G.
Sturdy >>>>>>>>>>> "A Lisp through the Looking Glass" >>>>>>>>>>> >>>>>>>>>>> Self-replication requires the ability to manipulate your own >>>>>>>>>>> representation in such a way that changes to that representation >>>>>>>>>>> will change behavior. >>>>>>>>>>> >>>>>>>>>>> This leads to two thoughts in the SANE research. >>>>>>>>>>> >>>>>>>>>>> First, "Declarative Representation". That is, most of the things >>>>>>>>>>> about the representation should be declarative rather than >>>>>>>>>>> procedural. Applying this idea as much as possible makes it >>>>>>>>>>> easier to understand and manipulate. >>>>>>>>>>> >>>>>>>>>>> Second, "Explicit Call Stack". Function calls form an implicit >>>>>>>>>>> call stack. This can usually be displayed in a running lisp >>>>>>>>>>> system. >>>>>>>>>>> However, having the call stack explicitly available would mean >>>>>>>>>>> that a system could "introspect" at the first-class level. >>>>>>>>>>> >>>>>>>>>>> These two ideas would make it easy, for example, to let the >>>>>>>>>>> system "show the work". One of the normal complaints is that >>>>>>>>>>> a system presents an answer but there is no way to know how >>>>>>>>>>> that answer was derived. These two ideas make it possible to >>>>>>>>>>> understand, display, and even post-answer manipulate >>>>>>>>>>> the intermediate steps. >>>>>>>>>>> >>>>>>>>>>> Having the intermediate steps also allows proofs to be >>>>>>>>>>> inserted in a step-by-step fashion. This aids the effort to >>>>>>>>>>> have proofs run in parallel with computation at the hardware >>>>>>>>>>> level. >>>>>>>>>>> >>>>>>>>>>> Tim >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> On Thu, Oct 21, 2021 at 9:50 AM Tim Daly >>>>>>>>>>> wrote: >>>>>>>>>>> >>>>>>>>>>>> So the current struggle involves the categories in Axiom. >>>>>>>>>>>> >>>>>>>>>>>> The categories and domains constructed using categories >>>>>>>>>>>> are dependent types. When are dependent types "equal"?
>>>>>>>>>>>> Well, hummmm, that depends on the arguments to the >>>>>>>>>>>> constructor. >>>>>>>>>>>> >>>>>>>>>>>> But in order to decide(?) equality we have to evaluate >>>>>>>>>>>> the arguments (which themselves can be dependent types). >>>>>>>>>>>> Indeed, we may, and in general, we must evaluate the >>>>>>>>>>>> arguments at compile time (well, "construction time" as >>>>>>>>>>>> there isn't really a compiler / interpreter separation anymore.) >>>>>>>>>>>> >>>>>>>>>>>> That raises the question of what "equality" means. This >>>>>>>>>>>> is not simply a "set equality" relation. It falls into the >>>>>>>>>>>> infinite-groupoid of homotopy type theory. In general >>>>>>>>>>>> it appears that deciding category / domain equivalence >>>>>>>>>>>> might force us to climb the type hierarchy. >>>>>>>>>>>> >>>>>>>>>>>> Beyond that, there is the question of "which proof" >>>>>>>>>>>> applies to the resulting object. Proofs depend on their >>>>>>>>>>>> assumptions which might be different for different >>>>>>>>>>>> constructions. As yet I have no clue how to "index" >>>>>>>>>>>> proofs based on their assumptions, nor how to >>>>>>>>>>>> connect these assumptions to the groupoid structure. >>>>>>>>>>>> >>>>>>>>>>>> My brain hurts. >>>>>>>>>>>> >>>>>>>>>>>> Tim >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> On Mon, Oct 18, 2021 at 2:00 AM Tim Daly >>>>>>>>>>>> wrote: >>>>>>>>>>>> >>>>>>>>>>>>> "Birthing Computational Mathematics" >>>>>>>>>>>>> >>>>>>>>>>>>> The Axiom SANE project is difficult at a very fundamental >>>>>>>>>>>>> level. The title "SANE" was chosen due to the various >>>>>>>>>>>>> words found in a thesaurus... "rational", "coherent", >>>>>>>>>>>>> "judicious" and "sound". >>>>>>>>>>>>> >>>>>>>>>>>>> These are very high level, amorphous ideas. But so is >>>>>>>>>>>>> the design of SANE. Breaking away from tradition in >>>>>>>>>>>>> computer algebra, type theory, and proof assistants >>>>>>>>>>>>> is very difficult.
Ideas tend to fall into standard jargon >>>>>>>>>>>>> which limits both the frame of thinking (e.g. dependent >>>>>>>>>>>>> types) and the content (e.g. notation). >>>>>>>>>>>>> >>>>>>>>>>>>> Questioning both frame and content is very difficult. >>>>>>>>>>>>> It is hard to even recognize when they are accepted >>>>>>>>>>>>> "by default" rather than "by choice". What does the idea >>>>>>>>>>>>> "power tools" mean in a primitive, hand labor culture? >>>>>>>>>>>>> >>>>>>>>>>>>> Christopher Alexander [0] addresses this problem in >>>>>>>>>>>>> a lot of his writing. Specifically, in his book "Notes on >>>>>>>>>>>>> the Synthesis of Form", in his chapter 5 "The Selfconscious >>>>>>>>>>>>> Process", he addresses this problem directly. This is a >>>>>>>>>>>>> "must read" book. >>>>>>>>>>>>> >>>>>>>>>>>>> Unlike building design and construction, however, there >>>>>>>>>>>>> are almost no constraints to use as guides. Alexander >>>>>>>>>>>>> quotes Plato's Phaedrus: >>>>>>>>>>>>> >>>>>>>>>>>>> "First, the taking in of scattered particulars under >>>>>>>>>>>>> one Idea, so that everyone understands what is being >>>>>>>>>>>>> talked about ... Second, the separation of the Idea >>>>>>>>>>>>> into parts, by dividing it at the joints, as nature >>>>>>>>>>>>> directs, not breaking any limb in half as a bad >>>>>>>>>>>>> carver might." >>>>>>>>>>>>> >>>>>>>>>>>>> Lisp, which has been called "clay for the mind" can >>>>>>>>>>>>> build virtually anything that can be thought. The >>>>>>>>>>>>> "joints" are also "of one's choosing" so one is >>>>>>>>>>>>> both carver and "nature". >>>>>>>>>>>>> >>>>>>>>>>>>> Clearly the problem is no longer "the tools". >>>>>>>>>>>>> *I* am the problem constraining the solution. >>>>>>>>>>>>> Birthing this "new thing" is slow, difficult, and >>>>>>>>>>>>> uncertain at best.
>>>>>>>>>>>>> >>>>>>>>>>>>> Tim >>>>>>>>>>>>> >>>>>>>>>>>>> [0] Alexander, Christopher "Notes on the Synthesis >>>>>>>>>>>>> of Form" Harvard University Press 1964 >>>>>>>>>>>>> ISBN 0-674-62751-2 >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> On Sun, Oct 10, 2021 at 4:40 PM Tim Daly >>>>>>>>>>>>> wrote: >>>>>>>>>>>>> >>>>>>>>>>>>>> Re: writing a paper... I'm not connected to Academia >>>>>>>>>>>>>> so anything I'd write would never make it into print. >>>>>>>>>>>>>> >>>>>>>>>>>>>> "Language level parsing" is still a long way off. The talk >>>>>>>>>>>>>> by Guy Steele [2] highlights some of the problems we >>>>>>>>>>>>>> currently face using mathematical metanotation. >>>>>>>>>>>>>> >>>>>>>>>>>>>> For example, a professor I know at CCNY (City College >>>>>>>>>>>>>> of New York) didn't understand Platzer's "funny >>>>>>>>>>>>>> fraction notation" (proof judgments) despite being >>>>>>>>>>>>>> an expert in Platzer's differential equations area. >>>>>>>>>>>>>> >>>>>>>>>>>>>> Notation matters and is not widely common. >>>>>>>>>>>>>> >>>>>>>>>>>>>> I spoke to Professor Black (in LTI) about using natural >>>>>>>>>>>>>> language in the limited task of a human-robot cooperation >>>>>>>>>>>>>> in changing a car tire. I looked at the current machine >>>>>>>>>>>>>> learning efforts. They are nowhere near anything but >>>>>>>>>>>>>> toy systems, taking too long to train and are too fragile. >>>>>>>>>>>>>> >>>>>>>>>>>>>> Instead I ended up using a combination of AIML [3] >>>>>>>>>>>>>> (Artificial Intelligence Markup Language), the ALICE >>>>>>>>>>>>>> Chatbot [4], Forgy's OPS5 rule based program [5], >>>>>>>>>>>>>> and Fahlman's SCONE [6] knowledge base. It was >>>>>>>>>>>>>> much less fragile in my limited domain problem. >>>>>>>>>>>>>> >>>>>>>>>>>>>> I have no idea how to extend any system to deal with >>>>>>>>>>>>>> even undergraduate mathematics parsing.
>>>>>>>>>>>>>> >>>>>>>>>>>>>> Nor do I have any idea how I would embed LEAN >>>>>>>>>>>>>> knowledge into a SCONE database, although I >>>>>>>>>>>>>> think the combination would be useful and interesting. >>>>>>>>>>>>>> >>>>>>>>>>>>>> I do believe that, in the limited area of computational >>>>>>>>>>>>>> mathematics, we are capable of building robust, proven >>>>>>>>>>>>>> systems that are quite general and extensible. As you >>>>>>>>>>>>>> might have guessed I've given it a lot of thought over >>>>>>>>>>>>>> the years :-) >>>>>>>>>>>>>> >>>>>>>>>>>>>> A mathematical language seems to need >6 components >>>>>>>>>>>>>> >>>>>>>>>>>>>> 1) We need some sort of a specification language, possibly >>>>>>>>>>>>>> somewhat 'propositional' that introduces the assumptions >>>>>>>>>>>>>> you mentioned (ref. your discussion of numbers being >>>>>>>>>>>>>> abstract and ref. your discussion of relevant choice of >>>>>>>>>>>>>> assumptions related to a problem). >>>>>>>>>>>>>> >>>>>>>>>>>>>> This is starting to show up in the hardware area (e.g. >>>>>>>>>>>>>> Lamport's TLC[0]) >>>>>>>>>>>>>> >>>>>>>>>>>>>> Of course, specifications relate to proving programs >>>>>>>>>>>>>> and, as you recall, I got a cold reception from the >>>>>>>>>>>>>> LEAN community about using LEAN for program proofs. >>>>>>>>>>>>>> >>>>>>>>>>>>>> 2) We need "scaffolding". That is, we need a theory >>>>>>>>>>>>>> that can be reduced to some implementable form >>>>>>>>>>>>>> that provides concept-level structure. >>>>>>>>>>>>>> >>>>>>>>>>>>>> Axiom uses group theory for this. Axiom's "category" >>>>>>>>>>>>>> structure has "Category" things like Ring. Claiming >>>>>>>>>>>>>> to be a Ring brings in a lot of "Signatures" of functions >>>>>>>>>>>>>> you have to implement to properly be a Ring. >>>>>>>>>>>>>> >>>>>>>>>>>>>> Scaffolding provides a firm mathematical basis for >>>>>>>>>>>>>> design. 
It provides a link between the concept of a >>>>>>>>>>>>>> Ring and the expectations you can assume when >>>>>>>>>>>>>> you claim your "Domain" "is a Ring". Category >>>>>>>>>>>>>> theory might provide similar structural scaffolding >>>>>>>>>>>>>> (eventually... I'm still working on that thought garden) >>>>>>>>>>>>>> >>>>>>>>>>>>>> LEAN ought to have a textbook(s?) that structures >>>>>>>>>>>>>> the world around some form of mathematics. It isn't >>>>>>>>>>>>>> sufficient to say "undergraduate math" is the goal. >>>>>>>>>>>>>> There needs to be some coherent organization so >>>>>>>>>>>>>> people can bring ideas like Group Theory to the >>>>>>>>>>>>>> organization. Which brings me to ... >>>>>>>>>>>>>> >>>>>>>>>>>>>> 3) We need "spreading". That is, we need to take >>>>>>>>>>>>>> the various definitions and theorems in LEAN and >>>>>>>>>>>>>> place them in their proper place in the scaffold. >>>>>>>>>>>>>> >>>>>>>>>>>>>> For example, the Ring category needs the definitions >>>>>>>>>>>>>> and theorems for a Ring included in the code for the >>>>>>>>>>>>>> Ring category. Similarly, the Commutative category >>>>>>>>>>>>>> needs the definitions and theorems that underlie >>>>>>>>>>>>>> "commutative" included in the code. >>>>>>>>>>>>>> >>>>>>>>>>>>>> That way, when you claim to be a "Commutative Ring" >>>>>>>>>>>>>> you get both sets of definitions and theorems. That is, >>>>>>>>>>>>>> the inheritance mechanism will collect up all of the >>>>>>>>>>>>>> definitions and theorems and make them available >>>>>>>>>>>>>> for proofs. >>>>>>>>>>>>>> >>>>>>>>>>>>>> I am looking at LEAN's definitions and theorems with >>>>>>>>>>>>>> an eye to "spreading" them into the group scaffold of >>>>>>>>>>>>>> Axiom. >>>>>>>>>>>>>> >>>>>>>>>>>>>> 4) We need "carriers" (Axiom calls them representations, >>>>>>>>>>>>>> aka "REP"). REPs allow data structures to be defined >>>>>>>>>>>>>> independent of the implementation. 
>>>>>>>>>>>>>> >>>>>>>>>>>>>> For example, Axiom can construct Polynomials that >>>>>>>>>>>>>> have their coefficients in various forms of representation. >>>>>>>>>>>>>> You can define "dense" (all coefficients in a list), >>>>>>>>>>>>>> "sparse" (only non-zero coefficients), "recursive", etc. >>>>>>>>>>>>>> >>>>>>>>>>>>>> A "dense polynomial" and a "sparse polynomial" work >>>>>>>>>>>>>> exactly the same way as far as the user is concerned. >>>>>>>>>>>>>> They both implement the same set of functions. There >>>>>>>>>>>>>> is only a difference of representation for efficiency and >>>>>>>>>>>>>> this only affects the implementation of the functions, >>>>>>>>>>>>>> not their use. >>>>>>>>>>>>>> >>>>>>>>>>>>>> Axiom "got this wrong" because it didn't sufficiently >>>>>>>>>>>>>> separate the REP from the "Domain". I plan to fix this. >>>>>>>>>>>>>> >>>>>>>>>>>>>> LEAN ought to have a "data structures" subtree that >>>>>>>>>>>>>> has all of the definitions and axioms for all of the >>>>>>>>>>>>>> existing data structures (e.g. Red-Black trees). This >>>>>>>>>>>>>> would be a good undergraduate project. >>>>>>>>>>>>>> >>>>>>>>>>>>>> 5) We need "Domains" (in Axiom speak). That is, we >>>>>>>>>>>>>> need a box that holds all of the functions that implement >>>>>>>>>>>>>> a "Domain". For example, a "Polynomial Domain" would >>>>>>>>>>>>>> hold all of the functions for manipulating polynomials >>>>>>>>>>>>>> (e.g. polynomial multiplication). The "Domain" box >>>>>>>>>>>>>> is a dependent type that: >>>>>>>>>>>>>> >>>>>>>>>>>>>> A) has an argument list of "Categories" that this "Domain" >>>>>>>>>>>>>> box inherits. Thus, the "Integer Domain" inherits >>>>>>>>>>>>>> the definitions and axioms from "Commutative" >>>>>>>>>>>>>> >>>>>>>>>>>>>> Functions in the "Domain" box can now assume >>>>>>>>>>>>>> and use the properties of being commutative. Proofs >>>>>>>>>>>>>> of functions in this domain can use the definitions >>>>>>>>>>>>>> and proofs about being commutative.
>>>>>>>>>>>>>> >>>>>>>>>>>>>> B) contains an argument that specifies the "REP" >>>>>>>>>>>>>> (aka, the carrier). That way you get all of the >>>>>>>>>>>>>> functions associated with the data structure >>>>>>>>>>>>>> available for use in the implementation. >>>>>>>>>>>>>> >>>>>>>>>>>>>> Functions in the Domain box can use all of >>>>>>>>>>>>>> the definitions and axioms about the representation >>>>>>>>>>>>>> (e.g. NonNegativeIntegers are always positive) >>>>>>>>>>>>>> >>>>>>>>>>>>>> C) contains local "spread" definitions and axioms >>>>>>>>>>>>>> that can be used in function proofs. >>>>>>>>>>>>>> >>>>>>>>>>>>>> For example, a "Square Matrix" domain would >>>>>>>>>>>>>> have local axioms that state that the matrix is >>>>>>>>>>>>>> always square. Thus, functions in that box could >>>>>>>>>>>>>> use these additional definitions and axioms in >>>>>>>>>>>>>> function proofs. >>>>>>>>>>>>>> >>>>>>>>>>>>>> D) contains local state. A "Square Matrix" domain >>>>>>>>>>>>>> would be constructed as a dependent type that >>>>>>>>>>>>>> specified the size of the square (e.g. a 2x2 >>>>>>>>>>>>>> matrix would have '2' as a dependent parameter). >>>>>>>>>>>>>> >>>>>>>>>>>>>> E) contains implementations of inherited functions. >>>>>>>>>>>>>> >>>>>>>>>>>>>> A "Category" could have a signature for a GCD >>>>>>>>>>>>>> function and the "Category" could have a default >>>>>>>>>>>>>> implementation. However, the "Domain" could >>>>>>>>>>>>>> have a locally more efficient implementation which >>>>>>>>>>>>>> overrides the inherited implementation. >>>>>>>>>>>>>> >>>>>>>>>>>>>> Axiom has about 20 GCD implementations that >>>>>>>>>>>>>> differ locally from the default in the category. They >>>>>>>>>>>>>> use properties known locally to be more efficient. >>>>>>>>>>>>>> >>>>>>>>>>>>>> F) contains local function signatures. >>>>>>>>>>>>>> >>>>>>>>>>>>>> A "Domain" gives the user more and more unique >>>>>>>>>>>>>> functions.
The signatures have associated >>>>>>>>>>>>>> "pre- and post- conditions" that can be used >>>>>>>>>>>>>> as assumptions in the function proofs. >>>>>>>>>>>>>> >>>>>>>>>>>>>> Some of the user-available functions are only >>>>>>>>>>>>>> visible if the dependent type would allow them >>>>>>>>>>>>>> to exist. For example, a general Matrix domain >>>>>>>>>>>>>> would have fewer user functions than a Square >>>>>>>>>>>>>> Matrix domain. >>>>>>>>>>>>>> >>>>>>>>>>>>>> In addition, local "helper" functions need their >>>>>>>>>>>>>> own signatures that are not user visible. >>>>>>>>>>>>>> >>>>>>>>>>>>>> G) the function implementation for each signature. >>>>>>>>>>>>>> >>>>>>>>>>>>>> This is obviously where all the magic happens. >>>>>>>>>>>>>> >>>>>>>>>>>>>> H) the proof of each function. >>>>>>>>>>>>>> >>>>>>>>>>>>>> This is where I'm using LEAN. >>>>>>>>>>>>>> >>>>>>>>>>>>>> Every function has a proof. That proof can use >>>>>>>>>>>>>> all of the definitions and axioms inherited from >>>>>>>>>>>>>> the "Category", "Representation", the "Domain >>>>>>>>>>>>>> Local", and the signature pre- and post- >>>>>>>>>>>>>> conditions. >>>>>>>>>>>>>> >>>>>>>>>>>>>> I) literature links. Algorithms must contain a link >>>>>>>>>>>>>> to at least one literature reference. Of course, >>>>>>>>>>>>>> since everything I do is a Literate Program >>>>>>>>>>>>>> this is obviously required. Knuth said so :-) >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> LEAN ought to have "books" or "pamphlets" that >>>>>>>>>>>>>> bring together all of this information for a domain >>>>>>>>>>>>>> such as Square Matrices. That way a user can >>>>>>>>>>>>>> find all of the related ideas, available functions, >>>>>>>>>>>>>> and their corresponding proofs in one place. >>>>>>>>>>>>>> >>>>>>>>>>>>>> 6) User level presentation. >>>>>>>>>>>>>> >>>>>>>>>>>>>> This is where the systems can differ significantly. >>>>>>>>>>>>>> Axiom and LEAN both have GCD but they use >>>>>>>>>>>>>> that for different purposes.
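[Editorial sketch] The Category / REP / Domain separation described in items 4 and 5 above can be illustrated with the dense vs. sparse polynomial example. This Python sketch is invented for illustration (the real design is SPAD/Common Lisp): two carriers expose the same operations, so a domain-level function never sees which representation it is given:

```python
# Two REPs (carriers) for the same mathematical object.

class DenseRep:
    def __init__(self, coeffs):            # coeffs[i] is the coefficient of x^i
        self.coeffs = list(coeffs)
    def coefficient(self, n):
        return self.coeffs[n] if n < len(self.coeffs) else 0
    def degree(self):
        return len(self.coeffs) - 1

class SparseRep:
    def __init__(self, terms):             # {exponent: coefficient}, zeros omitted
        self.terms = dict(terms)
    def coefficient(self, n):
        return self.terms.get(n, 0)
    def degree(self):
        return max(self.terms)

def evaluate(poly, x):
    # A "Domain"-level function written only against the REP interface.
    return sum(poly.coefficient(n) * x ** n for n in range(poly.degree() + 1))

dense = DenseRep([1, 0, 3])                # 1 + 3x^2
sparse = SparseRep({0: 1, 2: 3})           # the same polynomial
print(evaluate(dense, 2), evaluate(sparse, 2))   # 13 13
```

The representation difference is purely an efficiency choice inside the implementation, which is the point of separating the REP from the Domain.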
>>>>>>>>>>>>>> >>>>>>>>>>>>>> I'm trying to connect LEAN's GCD and Axiom's GCD >>>>>>>>>>>>>> so there is a "computational mathematics" idea that >>>>>>>>>>>>>> allows the user to connect proofs and implementations. >>>>>>>>>>>>>> >>>>>>>>>>>>>> 7) Trust >>>>>>>>>>>>>> >>>>>>>>>>>>>> Unlike everything else, computational mathematics >>>>>>>>>>>>>> can have proven code that gives various guarantees. >>>>>>>>>>>>>> >>>>>>>>>>>>>> I have been working on this aspect for a while. >>>>>>>>>>>>>> I refer to it as trust "down to the metal". The idea is >>>>>>>>>>>>>> that a proof of the GCD function and the implementation >>>>>>>>>>>>>> of the GCD function get packaged into the ELF format >>>>>>>>>>>>>> (proof-carrying code). When the GCD algorithm executes >>>>>>>>>>>>>> on the CPU, the GCD proof is run through the LEAN >>>>>>>>>>>>>> proof checker on an FPGA in parallel. >>>>>>>>>>>>>> >>>>>>>>>>>>>> (I just recently got a PYNQ Xilinx board [1] with a CPU >>>>>>>>>>>>>> and FPGA together. I'm trying to implement the LEAN >>>>>>>>>>>>>> proof checker on the FPGA). >>>>>>>>>>>>>> >>>>>>>>>>>>>> We are on the cusp of a revolution in computational >>>>>>>>>>>>>> mathematics. But the two pillars (proof and computer >>>>>>>>>>>>>> algebra) need to get to know each other. 
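The proof-carrying-code idea above can be sketched in miniature: pair an algorithm with a proof certificate and refuse to hand back a result unless a checker (standing in here for the LEAN kernel on the FPGA) accepts the certificate, run in parallel with the computation. Everything below is an invented illustration, not the actual packaging:

```python
# Toy proof-carrying code: run an algorithm and a certificate check
# in parallel; only trust the result if the certificate is accepted.
from math import gcd
from concurrent.futures import ThreadPoolExecutor

def check_certificate(cert: str) -> bool:
    # Stand-in for a real proof checker (e.g. the LEAN kernel):
    # here we only accept one fixed, known-good certificate.
    return cert == "gcd_dvd_left and gcd_dvd_right"

def run_with_proof(fn, cert, *args):
    with ThreadPoolExecutor(max_workers=2) as pool:
        result = pool.submit(fn, *args)        # the algorithm, on one worker
        ok = pool.submit(check_certificate, cert)  # the proof check, on another
        if not ok.result():
            raise ValueError("proof certificate rejected")
        return result.result()

print(run_with_proof(gcd, "gcd_dvd_left and gcd_dvd_right", 12, 18))  # 6
```

In the real design the "certificate" would be a serialized proof term carried in the executable and the checker would be a trusted kernel, not a string comparison; the sketch only shows the run-and-check-in-parallel shape.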
>>>>>>>>>>>>>> >>>>>>>>>>>>>> Tim >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> [0] Lamport, Leslie "Chapter on TLA+" >>>>>>>>>>>>>> in "Software Specification Methods" >>>>>>>>>>>>>> https://www.springer.com/gp/book/9781852333539 >>>>>>>>>>>>>> (I no longer have CMU library access or I'd send you >>>>>>>>>>>>>> the book PDF) >>>>>>>>>>>>>> >>>>>>>>>>>>>> [1] https://www.tul.com.tw/productspynq-z2.html >>>>>>>>>>>>>> >>>>>>>>>>>>>> [2] https://www.youtube.com/watch?v=dCuZkaaou0Q >>>>>>>>>>>>>> >>>>>>>>>>>>>> [3] "ARTIFICIAL INTELLIGENCE MARKUP LANGUAGE" >>>>>>>>>>>>>> https://arxiv.org/pdf/1307.3091.pdf >>>>>>>>>>>>>> >>>>>>>>>>>>>> [4] ALICE Chatbot >>>>>>>>>>>>>> >>>>>>>>>>>>>> http://www.scielo.org.mx/pdf/cys/v19n4/1405-5546-cys-19-04-00625.pdf >>>>>>>>>>>>>> >>>>>>>>>>>>>> [5] OPS5 User Manual >>>>>>>>>>>>>> >>>>>>>>>>>>>> https://kilthub.cmu.edu/articles/journal_contribution/OPS5_user_s_manual/6608090/1 >>>>>>>>>>>>>> >>>>>>>>>>>>>> [6] Scott Fahlman "SCONE" >>>>>>>>>>>>>> http://www.cs.cmu.edu/~sef/scone/ >>>>>>>>>>>>>> >>>>>>>>>>>>>> On 9/27/21, Tim Daly wrote: >>>>>>>>>>>>>> > I have tried to maintain a list of names of people who have >>>>>>>>>>>>>> > helped Axiom, going all the way back to the pre-Scratchpad >>>>>>>>>>>>>> > days. The names are listed at the beginning of each book. >>>>>>>>>>>>>> > I also maintain a bibliography of publications I've read or >>>>>>>>>>>>>> > that have had an indirect influence on Axiom. >>>>>>>>>>>>>> > >>>>>>>>>>>>>> > Credit is "the coin of the realm". It is easy to share and >>>>>>>>>>>>>> wrong >>>>>>>>>>>>>> > to ignore. It is especially damaging to those in Academia >>>>>>>>>>>>>> who >>>>>>>>>>>>>> > are affected by credit and citations in publications. >>>>>>>>>>>>>> > >>>>>>>>>>>>>> > Apparently I'm not the only person who feels that way. 
The >>>>>>>>>>>>>> ACM >>>>>>>>>>>>>> > Turing award seems to have ignored a lot of work: >>>>>>>>>>>>>> > >>>>>>>>>>>>>> > Scientific Integrity, the 2021 Turing Lecture, and the 2018 >>>>>>>>>>>>>> Turing >>>>>>>>>>>>>> > Award for Deep Learning >>>>>>>>>>>>>> > >>>>>>>>>>>>>> https://people.idsia.ch/~juergen/scientific-integrity-turing-award-deep-learning.html >>>>>>>>>>>>>> > >>>>>>>>>>>>>> > I worked on an AI problem at IBM Research called Ketazolam. >>>>>>>>>>>>>> > (https://en.wikipedia.org/wiki/Ketazolam). The idea was to >>>>>>>>>>>>>> recognize >>>>>>>>>>>>>> > and associate 3D chemical drawings with their drug >>>>>>>>>>>>>> counterparts. >>>>>>>>>>>>>> > I used Rumelhart and McClelland's books. These books >>>>>>>>>>>>>> contained >>>>>>>>>>>>>> > quite a few ideas that seem to be "new and innovative" >>>>>>>>>>>>>> among the >>>>>>>>>>>>>> > machine learning crowd... but the books are from 1987. I >>>>>>>>>>>>>> don't believe >>>>>>>>>>>>>> > I've seen these books mentioned in any recent bibliography. >>>>>>>>>>>>>> > >>>>>>>>>>>>>> https://mitpress.mit.edu/books/parallel-distributed-processing-volume-1 >>>>>>>>>>>>>> > >>>>>>>>>>>>>> > >>>>>>>>>>>>>> > >>>>>>>>>>>>>> > >>>>>>>>>>>>>> > On 9/27/21, Tim Daly wrote: >>>>>>>>>>>>>> >> Greg Wilson asked "How Reliable is Scientific Software?" >>>>>>>>>>>>>> >> >>>>>>>>>>>>>> https://neverworkintheory.org/2021/09/25/how-reliable-is-scientific-software.html >>>>>>>>>>>>>> >> >>>>>>>>>>>>>> >> which is a really interesting read. For example: >>>>>>>>>>>>>> >> >>>>>>>>>>>>>> >> [Hatton1994] is now a quarter of a century old, but its >>>>>>>>>>>>>> conclusions >>>>>>>>>>>>>> >> are still fresh. 
The authors fed the same data into nine >>>>>>>>>>>>>> commercial >>>>>>>>>>>>>> >> geophysical software packages and compared the results; >>>>>>>>>>>>>> they found >>>>>>>>>>>>>> >> that, "numerical disagreement grows at around the rate of >>>>>>>>>>>>>> 1% in >>>>>>>>>>>>>> >> average absolute difference per 4000 lines of implemented >>>>>>>>>>>>>> code, and, >>>>>>>>>>>>>> >> even worse, the nature of the disagreement is nonrandom" >>>>>>>>>>>>>> (i.e., the >>>>>>>>>>>>>> >> authors of different packages make similar mistakes). >>>>>>>>>>>>>> >> >>>>>>>>>>>>>> >> >>>>>>>>>>>>>> >> On 9/26/21, Tim Daly wrote: >>>>>>>>>>>>>> >>> I should note that the latest board I've just unboxed >>>>>>>>>>>>>> >>> (a PYNQ-Z2) is a Zynq Z-7020 chip from Xilinx (AMD). >>>>>>>>>>>>>> >>> >>>>>>>>>>>>>> >>> What makes it interesting is that it contains 2 hard >>>>>>>>>>>>>> >>> core processors and an FPGA, connected by 9 paths >>>>>>>>>>>>>> >>> for communication. The processors can be run >>>>>>>>>>>>>> >>> independently so there is the possibility of a parallel >>>>>>>>>>>>>> >>> version of some Axiom algorithms (assuming I had >>>>>>>>>>>>>> >>> the time, which I don't). >>>>>>>>>>>>>> >>> >>>>>>>>>>>>>> >>> Previously either the hard (physical) processor was >>>>>>>>>>>>>> >>> separate from the FPGA with minimal communication >>>>>>>>>>>>>> >>> or the soft core processor had to be created in the FPGA >>>>>>>>>>>>>> >>> and was much slower. >>>>>>>>>>>>>> >>> >>>>>>>>>>>>>> >>> Now the two have been combined in a single chip. >>>>>>>>>>>>>> >>> That means that my effort to run a proof checker on >>>>>>>>>>>>>> >>> the FPGA and the algorithm on the CPU just got to >>>>>>>>>>>>>> >>> the point where coordination is much easier. >>>>>>>>>>>>>> >>> >>>>>>>>>>>>>> >>> Now all I have to do is figure out how to program this >>>>>>>>>>>>>> >>> beast. >>>>>>>>>>>>>> >>> >>>>>>>>>>>>>> >>> There is no such thing as a simple job. 
>>>>>>>>>>>>>> >>> >>>>>>>>>>>>>> >>> Tim >>>>>>>>>>>>>> >>> >>>>>>>>>>>>>> >>> >>>>>>>>>>>>>> >>> On 9/26/21, Tim Daly wrote: >>>>>>>>>>>>>> >>>> I'm familiar with most of the traditional approaches >>>>>>>>>>>>>> >>>> like Theorema. The bibliography contains most of the >>>>>>>>>>>>>> >>>> more interesting sources. [0] >>>>>>>>>>>>>> >>>> >>>>>>>>>>>>>> >>>> There is a difference between traditional approaches to >>>>>>>>>>>>>> >>>> connecting computer algebra and proofs and my approach. >>>>>>>>>>>>>> >>>> >>>>>>>>>>>>>> >>>> Proving an algorithm, like the GCD, in Axiom is hard. >>>>>>>>>>>>>> >>>> There are many GCDs (e.g. NNI vs POLY) and there >>>>>>>>>>>>>> >>>> are theorems and proofs passed at runtime in the >>>>>>>>>>>>>> >>>> arguments of the newly constructed domains. This >>>>>>>>>>>>>> >>>> involves a lot of dependent type theory and issues of >>>>>>>>>>>>>> >>>> compile time / runtime argument evaluation. The issues >>>>>>>>>>>>>> >>>> that arise are difficult and still being debated in the >>>>>>>>>>>>>> type >>>>>>>>>>>>>> >>>> theory community. >>>>>>>>>>>>>> >>>> >>>>>>>>>>>>>> >>>> I am putting the definitions, theorems, and proofs (DTP) >>>>>>>>>>>>>> >>>> directly into the category/domain hierarchy. Each >>>>>>>>>>>>>> category >>>>>>>>>>>>>> >>>> will have the DTP specific to it. That way a commutative >>>>>>>>>>>>>> >>>> domain will inherit a commutative theorem and a >>>>>>>>>>>>>> >>>> non-commutative domain will not. >>>>>>>>>>>>>> >>>> >>>>>>>>>>>>>> >>>> Each domain will have additional DTPs associated with >>>>>>>>>>>>>> >>>> the domain (e.g. NNI vs Integer) as well as any DTPs >>>>>>>>>>>>>> >>>> it inherits from the category hierarchy. Functions in the >>>>>>>>>>>>>> >>>> domain will have associated DTPs. >>>>>>>>>>>>>> >>>> >>>>>>>>>>>>>> >>>> A function to be proven will then inherit all of the >>>>>>>>>>>>>> relevant >>>>>>>>>>>>>> >>>> DTPs. 
The proof will be attached to the function and >>>>>>>>>>>>>> >>>> both will be sent to the hardware (proof-carrying code). >>>>>>>>>>>>>> >>>> >>>>>>>>>>>>>> >>>> The proof, run through a proof checker on a field programmable >>>>>>>>>>>>>> >>>> gate array (FPGA), will be checked at runtime in >>>>>>>>>>>>>> >>>> parallel with the algorithm running on the CPU >>>>>>>>>>>>>> >>>> (aka "trust down to the metal"). (Note that Intel >>>>>>>>>>>>>> >>>> and AMD have built CPU/FPGA combined chips, >>>>>>>>>>>>>> >>>> currently only available in the cloud.) >>>>>>>>>>>>>> >>>> >>>>>>>>>>>>>> >>>> >>>>>>>>>>>>>> >>>> >>>>>>>>>>>>>> >>>> I am (slowly) making progress on the research. >>>>>>>>>>>>>> >>>> >>>>>>>>>>>>>> >>>> I have the hardware and nearly have the proof >>>>>>>>>>>>>> >>>> checker from LEAN running on my FPGA. >>>>>>>>>>>>>> >>>> >>>>>>>>>>>>>> >>>> I'm in the process of spreading the DTPs from >>>>>>>>>>>>>> >>>> LEAN across the category/domain hierarchy. >>>>>>>>>>>>>> >>>> >>>>>>>>>>>>>> >>>> The current Axiom build extracts all of the functions >>>>>>>>>>>>>> >>>> but does not yet have the DTPs. >>>>>>>>>>>>>> >>>> >>>>>>>>>>>>>> >>>> I have to restructure the system, including the compiler >>>>>>>>>>>>>> >>>> and interpreter, to parse and inherit the DTPs. I >>>>>>>>>>>>>> >>>> have some of that code but only some of the code >>>>>>>>>>>>>> >>>> has been pushed to the repository (volume 15) but >>>>>>>>>>>>>> >>>> that is rather trivial, out of date, and incomplete. >>>>>>>>>>>>>> >>>> >>>>>>>>>>>>>> >>>> I'm clearly not smart enough to prove the Risch >>>>>>>>>>>>>> >>>> algorithm and its associated machinery but the needed >>>>>>>>>>>>>> >>>> definitions and theorems will be available to someone >>>>>>>>>>>>>> >>>> who wants to try. 
>>>>>>>>>>>>>> >>>> >>>>>>>>>>>>>> >>>> [0] >>>>>>>>>>>>>> https://github.com/daly/PDFS/blob/master/bookvolbib.pdf >>>>>>>>>>>>>> >>>> >>>>>>>>>>>>>> >>>> >>>>>>>>>>>>>> >>>> On 8/19/21, Tim Daly wrote: >>>>>>>>>>>>>> >>>>> ========================================= >>>>>>>>>>>>>> >>>>> >>>>>>>>>>>>>> >>>>> REVIEW (Axiom on WSL2 Windows) >>>>>>>>>>>>>> >>>>> >>>>>>>>>>>>>> >>>>> >>>>>>>>>>>>>> >>>>> So the steps to run Axiom from a Windows desktop >>>>>>>>>>>>>> >>>>> >>>>>>>>>>>>>> >>>>> 1 Windows) install XMing on Windows for X11 server >>>>>>>>>>>>>> >>>>> >>>>>>>>>>>>>> >>>>> http://www.straightrunning.com/XmingNotes/ >>>>>>>>>>>>>> >>>>> >>>>>>>>>>>>>> >>>>> 2 WSL2) Install Axiom in WSL2 >>>>>>>>>>>>>> >>>>> >>>>>>>>>>>>>> >>>>> sudo apt install axiom >>>>>>>>>>>>>> >>>>> >>>>>>>>>>>>>> >>>>> 3 WSL2) modify /usr/bin/axiom to fix the bug: >>>>>>>>>>>>>> >>>>> (someone changed the axiom startup script. >>>>>>>>>>>>>> >>>>> It won't work on WSL2. I don't know who or >>>>>>>>>>>>>> >>>>> how to get it fixed). 
>>>>>>>>>>>>>> >>>>> >>>>>>>>>>>>>> >>>>> sudo emacs /usr/bin/axiom >>>>>>>>>>>>>> >>>>> >>>>>>>>>>>>>> >>>>> (split the line into 3 and add quote marks) >>>>>>>>>>>>>> >>>>> >>>>>>>>>>>>>> >>>>> export SPADDEFAULT=/usr/local/axiom/mnt/linux >>>>>>>>>>>>>> >>>>> export AXIOM=/usr/lib/axiom-20170501 >>>>>>>>>>>>>> >>>>> export "PATH=/usr/lib/axiom-20170501/bin:$PATH" >>>>>>>>>>>>>> >>>>> >>>>>>>>>>>>>> >>>>> 4 WSL2) create a .axiom.input file to include startup >>>>>>>>>>>>>> cmds: >>>>>>>>>>>>>> >>>>> >>>>>>>>>>>>>> >>>>> emacs .axiom.input >>>>>>>>>>>>>> >>>>> >>>>>>>>>>>>>> >>>>> )cd "/mnt/c/yourpath" >>>>>>>>>>>>>> >>>>> )sys pwd >>>>>>>>>>>>>> >>>>> >>>>>>>>>>>>>> >>>>> 5 WSL2) create a "myaxiom" command that sets the >>>>>>>>>>>>>> >>>>> DISPLAY variable and starts axiom >>>>>>>>>>>>>> >>>>> >>>>>>>>>>>>>> >>>>> emacs myaxiom >>>>>>>>>>>>>> >>>>> >>>>>>>>>>>>>> >>>>> #! /bin/bash >>>>>>>>>>>>>> >>>>> export DISPLAY=:0.0 >>>>>>>>>>>>>> >>>>> axiom >>>>>>>>>>>>>> >>>>> >>>>>>>>>>>>>> >>>>> 6 WSL2) put it in the /usr/bin directory >>>>>>>>>>>>>> >>>>> >>>>>>>>>>>>>> >>>>> chmod +x myaxiom >>>>>>>>>>>>>> >>>>> sudo cp myaxiom /usr/bin/myaxiom >>>>>>>>>>>>>> >>>>> >>>>>>>>>>>>>> >>>>> 7 WINDOWS) start the X11 server >>>>>>>>>>>>>> >>>>> >>>>>>>>>>>>>> >>>>> (XMing XLaunch Icon on your desktop) >>>>>>>>>>>>>> >>>>> >>>>>>>>>>>>>> >>>>> 8 WINDOWS) run myaxiom from PowerShell >>>>>>>>>>>>>> >>>>> (this should start axiom with graphics available) >>>>>>>>>>>>>> >>>>> >>>>>>>>>>>>>> >>>>> wsl myaxiom >>>>>>>>>>>>>> >>>>> >>>>>>>>>>>>>> >>>>> 9 WINDOWS) make a PowerShell desktop >>>>>>>>>>>>>> >>>>> >>>>>>>>>>>>>> >>>>> >>>>>>>>>>>>>> https://superuser.com/questions/886951/run-powershell-script-when-you-open-powershell >>>>>>>>>>>>>> >>>>> >>>>>>>>>>>>>> >>>>> Tim >>>>>>>>>>>>>> >>>>> >>>>>>>>>>>>>> >>>>> On 8/13/21, Tim Daly wrote: >>>>>>>>>>>>>> >>>>>> A great deal of thought is directed toward making the >>>>>>>>>>>>>> SANE version >>>>>>>>>>>>>> >>>>>> of 
Axiom as flexible as possible, decoupling mechanism >>>>>>>>>>>>>> from theory. >>>>>>>>>>>>>> >>>>>> >>>>>>>>>>>>>> >>>>>> An interesting publication by Brian Cantwell Smith >>>>>>>>>>>>>> [0], "Reflection >>>>>>>>>>>>>> >>>>>> and Semantics in LISP" seems to contain interesting >>>>>>>>>>>>>> ideas related >>>>>>>>>>>>>> >>>>>> to our goal. Of particular interest is the ability to >>>>>>>>>>>>>> reason about >>>>>>>>>>>>>> >>>>>> and >>>>>>>>>>>>>> >>>>>> perform self-referential manipulations. In a >>>>>>>>>>>>>> dependently-typed >>>>>>>>>>>>>> >>>>>> system it seems interesting to be able to "adapt" code to >>>>>>>>>>>>>> handle >>>>>>>>>>>>>> >>>>>> run-time computed arguments to dependent functions. >>>>>>>>>>>>>> The abstract: >>>>>>>>>>>>>> >>>>>> >>>>>>>>>>>>>> >>>>>> "We show how a computational system can be >>>>>>>>>>>>>> constructed to >>>>>>>>>>>>>> >>>>>> "reason", >>>>>>>>>>>>>> >>>>>> effectively >>>>>>>>>>>>>> >>>>>> and consequentially, about its own inferential >>>>>>>>>>>>>> processes. The >>>>>>>>>>>>>> >>>>>> analysis proceeds in two >>>>>>>>>>>>>> >>>>>> parts. First, we consider the general question of >>>>>>>>>>>>>> computational >>>>>>>>>>>>>> >>>>>> semantics, rejecting >>>>>>>>>>>>>> >>>>>> traditional approaches, and arguing that the >>>>>>>>>>>>>> declarative and >>>>>>>>>>>>>> >>>>>> procedural aspects of >>>>>>>>>>>>>> >>>>>> computational symbols (what they stand for, and >>>>>>>>>>>>>> what behaviour >>>>>>>>>>>>>> >>>>>> they >>>>>>>>>>>>>> >>>>>> engender) should be >>>>>>>>>>>>>> >>>>>> analysed independently, in order that they may be >>>>>>>>>>>>>> coherently >>>>>>>>>>>>>> >>>>>> related. 
Second, we >>>>>>>>>>>>>> >>>>>> investigate self-referential behavior in >>>>>>>>>>>>>> computational processes, >>>>>>>>>>>>>> >>>>>> and show how to embed an >>>>>>>>>>>>>> >>>>>> effective procedural model of a computational >>>>>>>>>>>>>> calculus within that >>>>>>>>>>>>>> >>>>>> calculus (a model not >>>>>>>>>>>>>> >>>>>> unlike a meta-circular interpreter, but connected >>>>>>>>>>>>>> to the >>>>>>>>>>>>>> >>>>>> fundamental operations of the >>>>>>>>>>>>>> >>>>>> machine in such a way as to provide, at any point >>>>>>>>>>>>>> in a >>>>>>>>>>>>>> >>>>>> computation, >>>>>>>>>>>>>> >>>>>> fully articulated >>>>>>>>>>>>>> >>>>>> descriptions of the state of that computation, for >>>>>>>>>>>>>> inspection and >>>>>>>>>>>>>> >>>>>> possible modification). In >>>>>>>>>>>>>> >>>>>> terms of the theories that result from these >>>>>>>>>>>>>> investigations, we >>>>>>>>>>>>>> >>>>>> present a general architecture >>>>>>>>>>>>>> >>>>>> for procedurally reflective processes, able to >>>>>>>>>>>>>> shift smoothly >>>>>>>>>>>>>> >>>>>> between dealing with a given >>>>>>>>>>>>>> >>>>>> subject domain, and dealing with their own >>>>>>>>>>>>>> reasoning processes >>>>>>>>>>>>>> >>>>>> over >>>>>>>>>>>>>> >>>>>> that domain. >>>>>>>>>>>>>> >>>>>> >>>>>>>>>>>>>> >>>>>> An instance of the general solution is worked out >>>>>>>>>>>>>> in the context >>>>>>>>>>>>>> >>>>>> of >>>>>>>>>>>>>> >>>>>> an applicative >>>>>>>>>>>>>> >>>>>> language. 
Specifically, we present three successive >>>>>>>>>>>>>> dialects of >>>>>>>>>>>>>> >>>>>> LISP: 1-LISP, a distillation of >>>>>>>>>>>>>> >>>>>> current practice, for comparison purposes; 2-LISP, >>>>>>>>>>>>>> a dialect >>>>>>>>>>>>>> >>>>>> constructed in terms of our >>>>>>>>>>>>>> >>>>>> rationalised semantics, in which the concept of >>>>>>>>>>>>>> evaluation is >>>>>>>>>>>>>> >>>>>> rejected in favour of >>>>>>>>>>>>>> >>>>>> independent notions of simplification and >>>>>>>>>>>>>> reference, and in which >>>>>>>>>>>>>> >>>>>> the respective categories >>>>>>>>>>>>>> >>>>>> of notation, structure, semantics, and behaviour >>>>>>>>>>>>>> are strictly >>>>>>>>>>>>>> >>>>>> aligned; and 3-LISP, an >>>>>>>>>>>>>> >>>>>> extension of 2-LISP endowed with reflective powers." >>>>>>>>>>>>>> >>>>>> >>>>>>>>>>>>>> >>>>>> Axiom SANE builds dependent types on the fly. The >>>>>>>>>>>>>> ability to access >>>>>>>>>>>>>> >>>>>> both the reflection >>>>>>>>>>>>>> >>>>>> of the tower of algebra and the reflection of the >>>>>>>>>>>>>> tower of proofs at >>>>>>>>>>>>>> >>>>>> the time of construction >>>>>>>>>>>>>> >>>>>> makes the construction of a new domain or specific >>>>>>>>>>>>>> algorithm easier >>>>>>>>>>>>>> >>>>>> and more general. >>>>>>>>>>>>>> >>>>>> >>>>>>>>>>>>>> >>>>>> This is of particular interest because one of the >>>>>>>>>>>>>> efforts is to build >>>>>>>>>>>>>> >>>>>> "all the way down to the >>>>>>>>>>>>>> >>>>>> metal". If each layer is constructed on top of >>>>>>>>>>>>>> previous proven layers >>>>>>>>>>>>>> >>>>>> and the new layer >>>>>>>>>>>>>> >>>>>> can "reach below" to lower layers then the tower of >>>>>>>>>>>>>> layers can be >>>>>>>>>>>>>> >>>>>> built without duplication. 
>>>>>>>>>>>>>> >>>>>> >>>>>>>>>>>>>> >>>>>> Tim >>>>>>>>>>>>>> >>>>>> >>>>>>>>>>>>>> >>>>>> [0] Smith, Brian Cantwell "Reflection and Semantics >>>>>>>>>>>>>> in LISP" >>>>>>>>>>>>>> >>>>>> POPL '84: Proceedings of the 11th ACM SIGACT-SIGPLAN >>>>>>>>>>>>>> >>>>>> Symposium on Principles of Programming Languages, >>>>>>>>>>>>>> January 1984, >>>>>>>>>>>>>> >>>>>> Pages 23-35. https://doi.org/10.1145/800017.800513 >>>>>>>>>>>>>> >>>>>> >>>>>>>>>>>>>> >>>>>> On 6/29/21, Tim Daly wrote: >>>>>>>>>>>>>> >>>>>>> Having spent time playing with hardware it is >>>>>>>>>>>>>> perfectly clear that >>>>>>>>>>>>>> >>>>>>> future computational mathematics efforts need to >>>>>>>>>>>>>> adapt to using >>>>>>>>>>>>>> >>>>>>> parallel processing. >>>>>>>>>>>>>> >>>>>>> >>>>>>>>>>>>>> >>>>>>> I've spent a fair bit of time thinking about >>>>>>>>>>>>>> structuring Axiom to >>>>>>>>>>>>>> >>>>>>> be parallel. Most past efforts have tried to focus on >>>>>>>>>>>>>> making a >>>>>>>>>>>>>> >>>>>>> particular algorithm parallel, such as a matrix >>>>>>>>>>>>>> multiply. >>>>>>>>>>>>>> >>>>>>> >>>>>>>>>>>>>> >>>>>>> But I think that it might be more effective to make >>>>>>>>>>>>>> each domain >>>>>>>>>>>>>> >>>>>>> run in parallel. A computation crosses multiple >>>>>>>>>>>>>> domains so a >>>>>>>>>>>>>> >>>>>>> particular computation could involve multiple >>>>>>>>>>>>>> parallel copies. >>>>>>>>>>>>>> >>>>>>> >>>>>>>>>>>>>> >>>>>>> For example, computing the Cylindrical Algebraic >>>>>>>>>>>>>> Decomposition >>>>>>>>>>>>>> >>>>>>> could recursively decompose the plane. Indeed, any >>>>>>>>>>>>>> tree-recursive >>>>>>>>>>>>>> >>>>>>> algorithm could be run in parallel "in the large" by >>>>>>>>>>>>>> creating new >>>>>>>>>>>>>> >>>>>>> running copies of the domain for each sub-problem. >>>>>>>>>>>>>> >>>>>>> >>>>>>>>>>>>>> >>>>>>> So the question becomes, how does one manage this? 
>>>>>>>>>>>>>> >>>>>>> >>>>>>>>>>>>>> >>>>>>> A similar problem occurs in robotics where one could >>>>>>>>>>>>>> have multiple >>>>>>>>>>>>>> >>>>>>> wheels, arms, propellers, etc. that need to act >>>>>>>>>>>>>> independently but >>>>>>>>>>>>>> >>>>>>> in coordination. >>>>>>>>>>>>>> >>>>>>> >>>>>>>>>>>>>> >>>>>>> The robot solution uses ROS2. The three ideas are >>>>>>>>>>>>>> ROSCORE, >>>>>>>>>>>>>> >>>>>>> TOPICS with publish/subscribe, and SERVICES with >>>>>>>>>>>>>> request/response. >>>>>>>>>>>>>> >>>>>>> These are communication paths defined between >>>>>>>>>>>>>> processes. >>>>>>>>>>>>>> >>>>>>> >>>>>>>>>>>>>> >>>>>>> ROS2 has a "roscore" which is basically a phonebook >>>>>>>>>>>>>> of "topics". >>>>>>>>>>>>>> >>>>>>> Any process can create or look up the current active >>>>>>>>>>>>>> topics, e.g.: >>>>>>>>>>>>>> >>>>>>> >>>>>>>>>>>>>> >>>>>>> rosnode list >>>>>>>>>>>>>> >>>>>>> >>>>>>>>>>>>>> >>>>>>> TOPICS: >>>>>>>>>>>>>> >>>>>>> >>>>>>>>>>>>>> >>>>>>> Any process can PUBLISH a topic (which is basically a >>>>>>>>>>>>>> typed data >>>>>>>>>>>>>> >>>>>>> structure), e.g. the topic /hw with the String data >>>>>>>>>>>>>> "Hello World": >>>>>>>>>>>>>> >>>>>>> >>>>>>>>>>>>>> >>>>>>> rostopic pub /hw std_msgs/String "Hello, World" >>>>>>>>>>>>>> >>>>>>> >>>>>>>>>>>>>> >>>>>>> Any process can SUBSCRIBE to a topic, such as /hw, >>>>>>>>>>>>>> and get a >>>>>>>>>>>>>> >>>>>>> copy of the data, e.g.: >>>>>>>>>>>>>> >>>>>>> >>>>>>>>>>>>>> >>>>>>> rostopic echo /hw ==> "Hello, World" >>>>>>>>>>>>>> >>>>>>> >>>>>>>>>>>>>> >>>>>>> Publishers talk, subscribers listen. >>>>>>>>>>>>>> >>>>>>> >>>>>>>>>>>>>> >>>>>>> >>>>>>>>>>>>>> >>>>>>> SERVICES: >>>>>>>>>>>>>> >>>>>>> >>>>>>>>>>>>>> >>>>>>> Any process can make a REQUEST of a SERVICE and get a >>>>>>>>>>>>>> RESPONSE. >>>>>>>>>>>>>> >>>>>>> This is basically a remote function call. 
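The TOPICS and SERVICES patterns above can be modeled in a few lines. This toy registry (invented names, no relation to the real ROS API) shows the two communication shapes a per-domain process would use:

```python
# Toy model of ROS-style communication: a "core" registry of topics
# (publish/subscribe) and services (request/response). Illustrative only.
from collections import defaultdict
from math import gcd

class Core:
    def __init__(self):
        self.topics = defaultdict(list)   # topic name -> subscriber callbacks
        self.services = {}                # service name -> handler function

    def subscribe(self, topic, callback):
        self.topics[topic].append(callback)

    def publish(self, topic, message):
        for callback in self.topics[topic]:   # every subscriber gets a copy
            callback(message)

    def advertise(self, name, handler):
        self.services[name] = handler

    def request(self, name, *args):
        return self.services[name](*args)     # a remote function call, in miniature

core = Core()
seen = []
core.subscribe("/hw", seen.append)
core.publish("/hw", "Hello, World")       # publishers talk, subscribers listen

core.advertise("NNI/gcd", gcd)            # a "domain" exposing one function
print(seen[0], core.request("NNI/gcd", 12, 18))
```

In the distributed version each domain would live in its own process and the registry would be the network-visible phonebook; the in-process dictionary only shows the lookup-then-call structure.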
>>>>>>>>>>>>>> >>>>>>> >>>>>>>>>>>>>> >>>>>>> >>>>>>>>>>>>>> >>>>>>> >>>>>>>>>>>>>> >>>>>>> Axiom in parallel? >>>>>>>>>>>>>> >>>>>>> >>>>>>>>>>>>>> >>>>>>> So domains could run, each in its own process. It >>>>>>>>>>>>>> could provide >>>>>>>>>>>>>> >>>>>>> services, one for each function. Any other process >>>>>>>>>>>>>> could request >>>>>>>>>>>>>> >>>>>>> a computation and get the result as a response. >>>>>>>>>>>>>> Domains could >>>>>>>>>>>>>> >>>>>>> request services from other domains, either waiting >>>>>>>>>>>>>> for responses >>>>>>>>>>>>>> >>>>>>> or continuing while the response is being computed. >>>>>>>>>>>>>> >>>>>>> >>>>>>>>>>>>>> >>>>>>> The output could be sent anywhere, to a terminal, to >>>>>>>>>>>>>> a browser, >>>>>>>>>>>>>> >>>>>>> to a network, or to another process using the >>>>>>>>>>>>>> publish/subscribe >>>>>>>>>>>>>> >>>>>>> protocol, potentially all at the same time since >>>>>>>>>>>>>> there can be many >>>>>>>>>>>>>> >>>>>>> subscribers to a topic. >>>>>>>>>>>>>> >>>>>>> >>>>>>>>>>>>>> >>>>>>> Available domains could be dynamically added by >>>>>>>>>>>>>> announcing >>>>>>>>>>>>>> >>>>>>> themselves as new "topics" and could be dynamically >>>>>>>>>>>>>> looked-up >>>>>>>>>>>>>> >>>>>>> at runtime. >>>>>>>>>>>>>> >>>>>>> >>>>>>>>>>>>>> >>>>>>> This structure allows function-level / domain-level >>>>>>>>>>>>>> parallelism. >>>>>>>>>>>>>> >>>>>>> It is very effective in the robot world and I think >>>>>>>>>>>>>> it might be a >>>>>>>>>>>>>> >>>>>>> good structuring mechanism to allow computational >>>>>>>>>>>>>> mathematics >>>>>>>>>>>>>> >>>>>>> to take advantage of multiple processors in a >>>>>>>>>>>>>> disciplined fashion. >>>>>>>>>>>>>> >>>>>>> >>>>>>>>>>>>>> >>>>>>> Axiom has a thousand domains and each could run on >>>>>>>>>>>>>> its own core. >>>>>>>>>>>>>> >>>>>>> >>>>>>>>>>>>>> >>>>>>> In addition, notice that each domain is independent >>>>>>>>>>>>>> of the others. 
>>>>>>>>>>>>>> >>>>>>> So if we want to use BLAS Fortran code, it could just >>>>>>>>>>>>>> be another >>>>>>>>>>>>>> >>>>>>> service node. In fact, any "foreign function" could >>>>>>>>>>>>>> transparently >>>>>>>>>>>>>> >>>>>>> cooperate in a distributed Axiom. >>>>>>>>>>>>>> >>>>>>> >>>>>>>>>>>>>> >>>>>>> Another key feature is that proofs can be "by node". >>>>>>>>>>>>>> >>>>>>> >>>>>>>>>>>>>> >>>>>>> Tim >>>>>>>>>>>>>> >>>>>>> >>>>>>>>>>>>>> >>>>>>> >>>>>>>>>>>>>> >>>>>>> >>>>>>>>>>>>>> >>>>>>> >>>>>>>>>>>>>> >>>>>>> On 6/5/21, Tim Daly wrote: >>>>>>>>>>>>>> >>>>>>>> Axiom is based on first-class dependent types. >>>>>>>>>>>>>> Deciding when >>>>>>>>>>>>>> >>>>>>>> two types are equivalent may involve computation. See >>>>>>>>>>>>>> >>>>>>>> Christiansen, David Thrane "Checking Dependent Types >>>>>>>>>>>>>> with >>>>>>>>>>>>>> >>>>>>>> Normalization by Evaluation" (2019) >>>>>>>>>>>>>> >>>>>>>> >>>>>>>>>>>>>> >>>>>>>> This puts an interesting constraint on building >>>>>>>>>>>>>> types. The >>>>>>>>>>>>>> >>>>>>>> constructed type has to export a function to decide >>>>>>>>>>>>>> if a >>>>>>>>>>>>>> >>>>>>>> given type is "equivalent" to itself. >>>>>>>>>>>>>> >>>>>>>> >>>>>>>>>>>>>> >>>>>>>> The notion of "equivalence" might involve category >>>>>>>>>>>>>> ideas >>>>>>>>>>>>>> >>>>>>>> of natural transformation and univalence. Sigh. >>>>>>>>>>>>>> >>>>>>>> >>>>>>>>>>>>>> >>>>>>>> That's an interesting design point. >>>>>>>>>>>>>> >>>>>>>> >>>>>>>>>>>>>> >>>>>>>> Tim >>>>>>>>>>>>>> >>>>>>>> >>>>>>>>>>>>>> >>>>>>>> >>>>>>>>>>>>>> >>>>>>>> On 5/5/21, Tim Daly wrote: >>>>>>>>>>>>>> >>>>>>>>> It is interesting that programmer's eyes and >>>>>>>>>>>>>> expectations adapt >>>>>>>>>>>>>> >>>>>>>>> to the tools they use. For instance, I use emacs >>>>>>>>>>>>>> and expect to >>>>>>>>>>>>>> >>>>>>>>> work directly in files and multiple buffers. 
When I >>>>>>>>>>>>>> try to use one >>>>>>>>>>>>>> >>>>>>>>> of the many IDE tools I find they tend to "get in >>>>>>>>>>>>>> the way". I >>>>>>>>>>>>>> >>>>>>>>> already >>>>>>>>>>>>>> >>>>>>>>> know or can quickly find whatever they try to tell >>>>>>>>>>>>>> me. If you use >>>>>>>>>>>>>> >>>>>>>>> an >>>>>>>>>>>>>> >>>>>>>>> IDE you probably find emacs "too sparse" for >>>>>>>>>>>>>> programming. >>>>>>>>>>>>>> >>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>> Recently I've been working in a sparse programming >>>>>>>>>>>>>> environment. >>>>>>>>>>>>>> >>>>>>>>> I'm exploring the question of running a proof >>>>>>>>>>>>>> checker in an FPGA. >>>>>>>>>>>>>> >>>>>>>>> The FPGA development tools are painful at best and >>>>>>>>>>>>>> not intuitive >>>>>>>>>>>>>> >>>>>>>>> since you SEEM to be programming but you're >>>>>>>>>>>>>> actually describing >>>>>>>>>>>>>> >>>>>>>>> hardware gates, connections, and timing. This is an >>>>>>>>>>>>>> environment >>>>>>>>>>>>>> >>>>>>>>> where everything happens all-at-once and >>>>>>>>>>>>>> all-the-time (like the >>>>>>>>>>>>>> >>>>>>>>> circuits in your computer). It is the "assembly >>>>>>>>>>>>>> language of >>>>>>>>>>>>>> >>>>>>>>> circuits". >>>>>>>>>>>>>> >>>>>>>>> Naturally, my eyes have adapted to this rather raw >>>>>>>>>>>>>> level. >>>>>>>>>>>>>> >>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>> That said, I'm normally doing literate programming >>>>>>>>>>>>>> all the time. >>>>>>>>>>>>>> >>>>>>>>> My typical file is a document which is a mixture of >>>>>>>>>>>>>> latex and >>>>>>>>>>>>>> >>>>>>>>> lisp. >>>>>>>>>>>>>> >>>>>>>>> It is something of a shock to return to that world. >>>>>>>>>>>>>> It is clear >>>>>>>>>>>>>> >>>>>>>>> why >>>>>>>>>>>>>> >>>>>>>>> people who program in Python find lisp to be a "sea >>>>>>>>>>>>>> of parens". >>>>>>>>>>>>>> >>>>>>>>> Yet as a lisp programmer, I don't even see the >>>>>>>>>>>>>> parens, just code. 
>>>>>>>>>>>>>> >>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>> It takes a few minutes in a literate document to >>>>>>>>>>>>>> adapt vision to >>>>>>>>>>>>>> >>>>>>>>> see the latex / lisp combination as natural. The >>>>>>>>>>>>>> latex markup, >>>>>>>>>>>>>> >>>>>>>>> like the lisp parens, eventually just disappears. >>>>>>>>>>>>>> What remains >>>>>>>>>>>>>> >>>>>>>>> is just lisp and natural language text. >>>>>>>>>>>>>> >>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>> This seems painful at first but eyes quickly adapt. >>>>>>>>>>>>>> The upside >>>>>>>>>>>>>> >>>>>>>>> is that there is always a "finished" document that >>>>>>>>>>>>>> describes the >>>>>>>>>>>>>> >>>>>>>>> state of the code. The overhead of writing a >>>>>>>>>>>>>> paragraph to >>>>>>>>>>>>>> >>>>>>>>> describe a new function or change a paragraph to >>>>>>>>>>>>>> describe the >>>>>>>>>>>>>> >>>>>>>>> changed function is very small. >>>>>>>>>>>>>> >>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>> Using a Makefile I latex the document to generate a >>>>>>>>>>>>>> current PDF >>>>>>>>>>>>>> >>>>>>>>> and then I extract, load, and execute the code. >>>>>>>>>>>>>> This loop catches >>>>>>>>>>>>>> >>>>>>>>> errors in both the latex and the source code. >>>>>>>>>>>>>> Keeping an open file >>>>>>>>>>>>>> >>>>>>>>> in >>>>>>>>>>>>>> >>>>>>>>> my pdf viewer shows all of the changes in the >>>>>>>>>>>>>> document after every >>>>>>>>>>>>>> >>>>>>>>> run of make. That way I can edit the book as easily >>>>>>>>>>>>>> as the code. >>>>>>>>>>>>>> >>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>> Ultimately I find that writing the book while >>>>>>>>>>>>>> writing the code is >>>>>>>>>>>>>> >>>>>>>>> more productive. I don't have to remember why I >>>>>>>>>>>>>> wrote something >>>>>>>>>>>>>> >>>>>>>>> since the explanation is already there. >>>>>>>>>>>>>> >>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>> We all have our own way of programming and our own >>>>>>>>>>>>>> tools. 
>>>>>>>>>>>>>> >>>>>>>>> But I find literate programming to be a real >>>>>>>>>>>>>> advance over IDE >>>>>>>>>>>>>> >>>>>>>>> style programming and "raw code" programming. >>>>>>>>>>>>>> >>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>> Tim >>>>>>>>>>>>>> >>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>> On 2/27/21, Tim Daly wrote: >>>>>>>>>>>>>> >>>>>>>>>> The systems I use have the interesting property of >>>>>>>>>>>>>> >>>>>>>>>> "Living within the compiler". >>>>>>>>>>>>>> >>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>> Lisp, Forth, Emacs, and other systems that present >>>>>>>>>>>>>> themselves >>>>>>>>>>>>>> >>>>>>>>>> through the Read-Eval-Print-Loop (REPL) allow the >>>>>>>>>>>>>> >>>>>>>>>> ability to deeply interact with the system, >>>>>>>>>>>>>> shaping it to your >>>>>>>>>>>>>> >>>>>>>>>> need. >>>>>>>>>>>>>> >>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>> My current thread of study is software >>>>>>>>>>>>>> architecture. See >>>>>>>>>>>>>> >>>>>>>>>> >>>>>>>>>>>>>> https://www.youtube.com/watch?v=W2hagw1VhhI&feature=youtu.be >>>>>>>>>>>>>> >>>>>>>>>> and https://www.georgefairbanks.com/videos/ >>>>>>>>>>>>>> >>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>> My current thinking on SANE involves the ability to >>>>>>>>>>>>>> >>>>>>>>>> dynamically define categories, representations, >>>>>>>>>>>>>> and functions >>>>>>>>>>>>>> >>>>>>>>>> along with "composition functions" that permits >>>>>>>>>>>>>> choosing a >>>>>>>>>>>>>> >>>>>>>>>> combination at the time of use. >>>>>>>>>>>>>> >>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>> You might want a domain for handling polynomials. >>>>>>>>>>>>>> There are >>>>>>>>>>>>>> >>>>>>>>>> a lot of choices, depending on your use case. You >>>>>>>>>>>>>> might want >>>>>>>>>>>>>> >>>>>>>>>> different representations. For example, you might >>>>>>>>>>>>>> want dense, >>>>>>>>>>>>>> >>>>>>>>>> sparse, recursive, or "machine compatible fixnums" >>>>>>>>>>>>>> (e.g. to >>>>>>>>>>>>>> >>>>>>>>>> interface with C code). 
If these don't exist it >>>>>>>>>>>>>> ought to be >>>>>>>>>>>>>> >>>>>>>>>> possible >>>>>>>>>>>>>> >>>>>>>>>> to create them. Such "lego-like" building blocks >>>>>>>>>>>>>> require careful >>>>>>>>>>>>>> >>>>>>>>>> thought about creating "fully factored" objects. >>>>>>>>>>>>>> >>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>> Given that goal, the traditional barrier of >>>>>>>>>>>>>> "compiler" vs >>>>>>>>>>>>>> >>>>>>>>>> "interpreter" >>>>>>>>>>>>>> >>>>>>>>>> does not seem useful. It is better to "live within >>>>>>>>>>>>>> the compiler" >>>>>>>>>>>>>> >>>>>>>>>> which >>>>>>>>>>>>>> >>>>>>>>>> gives the ability to define new things "on the >>>>>>>>>>>>>> fly". >>>>>>>>>>>>>> >>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>> Of course, the SANE compiler is going to want an >>>>>>>>>>>>>> associated >>>>>>>>>>>>>> >>>>>>>>>> proof of the functions you create along with the >>>>>>>>>>>>>> other parts >>>>>>>>>>>>>> >>>>>>>>>> such as its category hierarchy and representation >>>>>>>>>>>>>> properties. >>>>>>>>>>>>>> >>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>> There is no such thing as a simple job. :-) >>>>>>>>>>>>>> >>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>> Tim >>>>>>>>>>>>>> >>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>> On 2/18/21, Tim Daly wrote: >>>>>>>>>>>>>> >>>>>>>>>>> The Axiom SANE compiler / interpreter has a few >>>>>>>>>>>>>> design points. >>>>>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>> 1) It needs to mix interpreted and compiled code >>>>>>>>>>>>>> in the same >>>>>>>>>>>>>> >>>>>>>>>>> function. >>>>>>>>>>>>>> >>>>>>>>>>> SANE allows dynamic construction of code as well >>>>>>>>>>>>>> as dynamic type >>>>>>>>>>>>>> >>>>>>>>>>> construction at runtime. Both of these can occur >>>>>>>>>>>>>> in a runtime >>>>>>>>>>>>>> >>>>>>>>>>> object. >>>>>>>>>>>>>> >>>>>>>>>>> So there is potentially a mixture of interpreted >>>>>>>>>>>>>> and compiled >>>>>>>>>>>>>> >>>>>>>>>>> code. 
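Design point 1, dynamically constructing code at runtime from a runtime-computed type, can be illustrated with Python standing in for Lisp (the mechanism, not the SANE implementation; all names are invented):

```python
# Sketch: a function body built at runtime from a type chosen at
# runtime, then compiled (compile/exec) and called like ordinary code.

def make_adder(type_name):
    # Dynamic code construction: the source text depends on a runtime value.
    src = f"def add(a, b):\n    return {type_name}(a) + {type_name}(b)\n"
    namespace = {}
    exec(compile(src, "<runtime>", "exec"), namespace)
    return namespace["add"]

int_add = make_adder("int")      # a compiled specialization for integers
print(int_add("2", "3"))         # 5
```

A Lisp system would do the same with `compile` on a constructed lambda form; the point is only that compiled and interpreted pieces coexist in one running image.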
2) It needs to perform type resolution at compile time without
overhead where possible. Since this is not always possible there
needs to be a "prefix thunk" that will perform the resolution.
Trivially, for example, if we have a + function we need to
type-resolve the arguments.

However, if we can prove at compile time that the types are both
bounded-NNI and the result is bounded-NNI (i.e. fixnum in lisp)
then we can inline a call to + at runtime. If not, we might have
+ applied to NNI and POLY(FLOAT), which requires a thunk to
resolve types. The thunk could even "specialize and compile"
the code before executing it.

It turns out that the Forth implementation of
"threaded-interpreted" languages model provides an efficient and
effective way to do this.[0] Type resolution can be "inserted" in
intermediate thunks. The model also supports dynamic overloading
and tail recursion.

Combining high-level CLOS code with low-level threading gives an
easy to understand and robust design.
Tim

[0] Loeliger, R.G. "Threaded Interpretive Languages" (1981)
ISBN 0-07-038360-X

On 2/5/21, Tim Daly wrote:
I've worked hard to make Axiom depend on almost no other
tools so that it would not get caught by "code rot" of
libraries.

However, I'm also trying to make the new SANE version much
easier to understand and debug. To that end I've been
experimenting with some ideas.

It should be possible to view source code, of course. But the
source code is not the only, nor possibly the best,
representation of the ideas. In particular, source code gets
compiled into data structures. In Axiom these data structures
really are a graph of related structures.

For example, looking at the gcd function from NNI, there is the
representation of the gcd function itself. But there is also a
structure that is the REP (and, in the new system, is separate
from the domain).
Further, there are associated specification and proof
structures. Even further, the domain inherits the category
structures, and from those it inherits logical axioms and
definitions through the proof structure.

Clearly the gcd function is a node in a much larger graph
structure.

When trying to decide why code won't compile it would be useful
to be able to see and walk these structures. I've thought about
using the browser but browsers are too weak. Either everything
has to be "in a single tab to show the graph" or "the nodes of
the graph are in different tabs". Plus, constructing dynamic
graphs that change as the software changes (e.g. by loading a
new spad file or creating a new function) represents the huge
problem of keeping the browser "in sync with the Axiom
workspace". So something more dynamic and embedded is needed.
Axiom source gets compiled into CLOS data structures. Each of
these new SANE structures has an associated surface
representation, so they can be presented in user-friendly form.

Also, since Axiom is literate software, it should be possible
to look at the code in its literate form with the surrounding
explanation.

Essentially we'd like to have the ability to "deep dive" into
the Axiom workspace, not only for debugging, but also for
understanding what functions are used, where they come from,
what they inherit, and how they are used in a computation.

To that end I'm looking at using McCLIM, a lisp windowing
system. Since the McCLIM windows would be part of the lisp
image, they have access to display (and modify) the Axiom
workspace at all times.

The only hesitation is that McCLIM uses quicklisp and drags in
a lot of other subsystems. It's all lisp, of course.
These ideas aren't new. They were available on Symbolics
machines, a truly productive platform and one I sorely miss.

Tim

On 1/19/21, Tim Daly wrote:
Also of interest is the talk
"The Unreasonable Effectiveness of Dynamic Typing for
Practical Programs"
https://vimeo.com/74354480
which questions whether static typing really has any benefit.

Tim

On 1/19/21, Tim Daly wrote:
Peter Naur wrote an article of interest:
http://pages.cs.wisc.edu/~remzi/Naur.pdf

In particular, it mirrors my notion that Axiom needs
to embrace literate programming so that the "theory
of the problem" is presented as well as the "theory
of the solution". I quote the introduction:

This article is, to my mind, the most accurate account
of what goes on in designing and coding a program.
I refer to it regularly when discussing how much
documentation to create, how to pass along tacit
knowledge, and the value of XP's metaphor-setting
exercise. It also provides a way to examine a methodology's
economic structure.

In the article, which follows, note that the quality of the
designing programmer's work is related to the quality of
the match between his theory of the problem and his theory
of the solution. Note that the quality of a later programmer's
work is related to the match between his theories and the
previous programmer's theories.

Using Naur's ideas, the designer's job is not to pass along
"the design" but to pass along "the theories" driving the
design. The latter goal is more useful and more appropriate.
It also highlights that knowledge of the theory is tacit in the
owning, and so passing along the theory requires passing along
both explicit and tacit knowledge.
Tim
I have a deep interest in self-modifying programs. These are
trivial to create in lisp. The question is how to structure a computer
algebra program so it could be dynamically modified but still
well-structured. This is especially important since the user can
create new logical types at runtime.

One innovation, which I have never seen anywhere, is to structure
the program in a spreadsheet fashion. Spreadsheet cells can
reference other spreadsheet cells. Spreadsheets have a well-defined
evaluation method. This is equivalent to a form of object-oriented
programming where each cell is an object and has a well-defined
inheritance hierarchy through other cells. Simple manipulation
allows insertion, deletion, or modification of these chains at any time.
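The spreadsheet organization described above can be sketched in a few lines. This is a minimal illustration (in Python, for concreteness; all names are invented, not SANE code): each cell holds a formula plus the names of the cells it references, and evaluation walks the reference chain, so cells can be redefined at any time.

```python
# Minimal "program as spreadsheet" sketch: cells reference other cells,
# and redefining a cell dynamically changes later evaluations.

CELLS = {}

def defcell(name, deps, formula):
    # (re)defining a cell is just a store; chains can change at any time
    CELLS[name] = (deps, formula)

def evaluate(name):
    deps, formula = CELLS[name]
    return formula(*[evaluate(d) for d in deps])

defcell("x", [], lambda: 3)
defcell("y", [], lambda: 4)
defcell("sumsq", ["x", "y"], lambda x, y: x * x + y * y)

print(evaluate("sumsq"))     # 25
defcell("x", [], lambda: 5)  # dynamically modify the "program"
print(evaluate("sumsq"))     # 41
```

A real system would also track the dependency graph to know which cells need re-evaluation, which is the "track which cells are needed" point above.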

So by structuring a computer algebra program like a spreadsheet
I am able to dynamically adjust a program to fit the problem to be
solved, track which cells are needed, and optimize the program.

Spreadsheets do this naturally but I'm unaware of any non-spreadsheet
program that relies on that organization.

Tim

On Sun, Mar 13, 2022 at 4:01 AM Tim Daly <axiomcas@gmail.com> wrote:
Axiom has an awkward 'attributes' category structure.

In the SANE version it is clear that these attributes are much
closer to logic 'definitions'. As a result one of the changes
is to create a new 'category'-type structure for definitions.
There will be a new keyword, like the category keyword,
'definition'.

Tim


On Fri, Mar 11, 2022 at 9:46 AM Tim Daly <axiomcas@gmail.com> wrote:
The github lockout continues...

I'm spending some time adding examples to source code.

Any function can have ++X comments added. These will
appear as examples when the function is displayed with
)display. For example, in PermutationGroup there is a
function 'strongGenerators' defined as:

  strongGenerators : % -> L PERM S
    ++ strongGenerators(gp) returns strong generators for
    ++ the group gp.
    ++
    ++X S:List(Integer) := [1,2,3,4]
    ++X G := symmetricGroup(S)
    ++X strongGenerators(G)



Later, in the interpreter we see:

)d op strongGenerators

  There is one exposed function called strongGenerators :
      [1] PermutationGroup(D2) -> List(Permutation(D2)) from
              PermutationGroup(D2)
                if D2 has SETCAT

  Examples of strongGenerators from PermutationGroup

  S:List(Integer) := [1,2,3,4]
  G := symmetricGroup(S)
  strongGenerators(G)
This will show a working example for functions that the
user can copy and use. It is especially useful to show how
to construct working arguments.

These "example" functions are run at build time when
the make command looks like
    make TESTSET=alltests

I hope to add this documentation to all Axiom functions.

In addition, the plan is to add these function calls to the
usual test documentation. That means that all of these examples
will be run and show their output in the final distribution
(mnt/ubuntu/doc/src/input/*.dvi files) so the user can view
the expected output.

Tim





Axiom's SPAD code compiles to Common Lisp.
The AKCL version of Common Lisp compiles to C.
Three languages and two compilers is a lot to maintain.
Further, there are very few people able to write SPAD
and even fewer people able to maintain it.

I've decided that the SANE version of Axiom will be
implemented in pure Common Lisp. I've outlined Axiom's
category / type hierarchy in the Common Lisp Object
System (CLOS). I am now experimenting with re-writing
the functions into Common Lisp.
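The idea of expressing the category / type hierarchy as a class hierarchy can be sketched briefly. The real work uses CLOS; the sketch below uses Python classes for illustration only, and the category and domain names are invented stand-ins, not Axiom's actual hierarchy:

```python
# Sketch: categories as abstract classes, domains as concrete classes.
# Domains inherit operations from the categories above them.

class SetCategory:
    def eq(self, a, b):          # every set-like domain gets equality
        return a == b

class AbelianMonoid(SetCategory):
    def zero(self):              # categories declare required operations
        raise NotImplementedError

class IntegerDomain(AbelianMonoid):
    def zero(self):
        return 0
    def add(self, a, b):
        return a + b

D = IntegerDomain()
print(D.add(D.zero(), 7))   # 7
print(D.eq(7, 7))           # True -- inherited from SetCategory
```

CLOS generic functions and multiple inheritance make this mapping much more direct than in most languages, which is part of the motivation stated above.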

This will have several long-term effects. It simplifies
the implementation issues. SPAD code blocks a lot of
actions and optimizations that Common Lisp provides.
The Common Lisp language has many more people
who can read, modify, and maintain code. It provides for
interoperability with other Common Lisp projects with
no effort. Common Lisp is an international standard
which ensures that the code will continue to run.

The input / output mathematics will remain the same.
Indeed, with the new generalizations for first-class
dependent types it will be more general.

This is a big change, similar to eliminating BOOT code
and moving to Literate Programming. This will provide a
better platform for future research work. Current research
is focused on merging Axiom's computer algebra mathematics
with Lean's proof language. The goal is to create a system for
"computational mathematics".

Research is the whole point of Axiom.

Tim



On Sat, Jan 22, 2022 at 9:16 PM Tim Daly <axiomcas@gmail.com> wrote:
I can't stress enough how important it is to listen to Hamming's talk.

Axiom will begin to die the day I stop working on it.

However, proving Axiom correct "down to the metal" is fundamental.
It will merge computer algebra and logic, spawning years of new
research.

Work on fundamental problems.

Tim


On Thu, Dec 30, 2021 at 6:46 PM Tim Daly <axiomcas@gmail.com> wrote:
One of the interesting questions when obtaining a result
is "what functions were called and what was their return value?"
Otherwise known as the "show your work" idea.

There is an idea called the "writer monad" [0], usually
implemented to facilitate logging. We can exploit this
idea to provide "show your work" capability. Each function
can provide this information inside the monad, enabling the
question to be answered at any time.

For those unfamiliar with the monad idea, the best explanation
I've found is this video [1].

Tim

[0] Deriving the writer monad from first principles
[1] The Absolute Best Intro to Monads for Software Engineers
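The writer-monad idea above can be sketched in a few lines. This is an illustrative Python version (all names are invented for the example): each step returns a (value, log) pair and `bind` threads the log through, so the full "work" can be shown afterwards.

```python
# Minimal writer monad: values carry an accumulating log of the
# function calls and return values that produced them.

def unit(value):
    return (value, [])

def bind(m, f):
    value, log = m
    new_value, new_log = f(value)
    return (new_value, log + new_log)

def traced(name, fn):
    """Wrap a plain function so each call logs its arguments and result."""
    def wrapper(x):
        result = fn(x)
        return (result, [f"{name}({x}) = {result}"])
    return wrapper

double = traced("double", lambda n: 2 * n)
inc    = traced("inc",    lambda n: n + 1)

value, work = bind(bind(unit(5), double), inc)
print(value)   # 11
print(work)    # ['double(5) = 10', 'inc(10) = 11']
```

The point is that "show your work" falls out of the plumbing: no function has to know about logging beyond returning its pair.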

On Mon, Dec 13, 2021 at 12:30 AM Tim Daly <axiomcas@gmail.com> wrote:
...(snip)...

Common Lisp has an "open compiler". That allows the ability
to deeply modify compiler behavior using compiler macros
and macros in general. CLOS takes advantage of this to add
typed behavior into the compiler in a way that ALLOWS strict
typing such as found in constructive type theory and ML.
Judgments, a la Crary, are front-and-center.

Whether you USE the discipline afforded is the real question.

Indeed, the Axiom research struggle is essentially one of how
to have a disciplined use of first-class dependent types. The
struggle raises issues of, for example, compiling a dependent
type whose argument is recursive in the compiled type. Since
the new type is first-class it can be constructed at what you
improperly call "run-time". However, it appears that the recursive
type may have to call the compiler at each recursion to generate
the next step since in some cases it cannot generate "closed code".

I am embedding proofs (in the LEAN language) into the type
hierarchy so that theorems, which depend on the type hierarchy,
are correctly inherited. The compiler has to check the proofs of
functions at compile time using these. Hacking up nonsense just
won't cut it. Think of the problem of embedding LEAN proofs in ML
or ML in LEAN. (Actually, Jeremy Avigad might find that research
interesting.)

So Matrix(3,3,Float) has inverses (assuming Float is a
field (cough)). The type inherits this theorem and proofs of
functions can use this. But Matrix(3,4,Integer) does not have
inverses so the proofs cannot use this. The type hierarchy has
to ensure that the proper theorems get inherited.
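The Matrix example above can be sketched concretely. This is a toy illustration (Python; the class, theorem names, and side conditions are invented): theorems attach to a dependent type only when the constructor arguments satisfy the theorem's side conditions, so Matrix(3,3,field) inherits invertibility while Matrix(3,4,ring) does not.

```python
# Sketch: theorem inheritance conditioned on type-constructor arguments.

class Matrix:
    def __init__(self, rows, cols, elem_is_field):
        self.rows = rows
        self.cols = cols
        self.elem_is_field = elem_is_field

    def theorems(self):
        thms = ["matrix-addition-associative"]   # always inherited
        # "has inverses" only holds for square matrices over a field
        if self.rows == self.cols and self.elem_is_field:
            thms.append("invertible-when-determinant-nonzero")
        return thms

print(Matrix(3, 3, True).theorems())
print(Matrix(3, 4, False).theorems())
```

The hard part, as the message says, is doing this check with real proofs at compile time rather than with boolean flags.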

Making proof technology work at compile time is hard.
(Worse yet, LEAN is a moving target. Sigh.)



On Thu, Nov 25, 2021 at 9:43 AM Tim Daly <axiomcas@gmail.com> wrote:

As you know I've been re-architecting Axiom to use first-class
dependent types and proving the algorithms correct. For example,
the GCD of natural numbers or the GCD of polynomials.

The idea involves "boxing up" the proof with the algorithm (aka
proof-carrying code) in the ELF file (under a crypto hash so it
can't be changed).
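The "boxing up under a crypto hash" step can be sketched simply. This is a hypothetical packaging (not the actual ELF layout): bundle the code and its proof together, hash the pair, and later verify that neither was tampered with.

```python
# Sketch: seal a code+proof bundle under a SHA-256 hash, then verify it.

import hashlib
import json

def seal(code: str, proof: str) -> dict:
    body = {"code": code, "proof": proof}
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    body["sha256"] = digest
    return body

def verify(bundle: dict) -> bool:
    body = {"code": bundle["code"], "proof": bundle["proof"]}
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    return digest == bundle["sha256"]

b = seal("gcd: ...machine code...", "theorem gcd_correct: ...")
assert verify(b)
b["proof"] = "tampered"
assert not verify(b)
```

A production scheme would use a signature rather than a bare hash so the verifier can also check who sealed the bundle.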

Once the code is running on the CPU, the proof is run in parallel
on the field programmable gate array (FPGA). Intel data center
servers have CPUs with built-in FPGAs these days.

There is a bit of a disconnect, though. The GCD code is compiled
machine code but the proof is LEAN-level.

What would be ideal is if the compiler not only compiled the GCD
code to machine code, it also compiled the proof to "machine code".
That is, for each machine instruction, the FPGA proof checker
would ensure that the proof was not violated at the individual
instruction level.

What does it mean to "compile a proof to the machine code level"?

The Milawa effort (Myre14.pdf) does incremental proofs in layers.
To quote from the article [0]:

   We begin with a simple proof checker, call it A, which is short
   enough to verify by the ``social process'' of mathematics -- and
   more recently with a theorem prover for a more expressive logic.

   We then develop a series of increasingly powerful proof checkers,
   call them B, C, D, and so on. We show each of these programs only
   accepts the same formulas as A, using A to verify B, and B to verify
   C, and so on. Then, since we trust A, and A says B is trustworthy, we
   can trust B. Then, since we trust B, and B says C is trustworthy, we
   can trust C.

This gives a technique for "compiling the proof" down to the machine
code level. Ideally, the compiler would have judgments for each step of
the compilation so that each compile step has a justification. I don't
know of any compiler that does this yet. (References welcome).
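The Milawa layering quoted above can be caricatured in a few lines. This is only a shape-of-the-idea sketch (Python, invented names; the real checkers verify logical formulas, not string prefixes): a trusted base checker A vouches for B, B for C, and trust propagates one link at a time.

```python
# Sketch of layered trust: each new checker must accept exactly the
# formulas the previously trusted checker accepts.

def checker_a(formula):
    # stand-in for the short, hand-verified base checker
    return formula.startswith("theorem ")

def make_next_checker(trusted_checker):
    # a "more powerful" checker, shown here as accepting the same
    # judgments; in Milawa the new engine is faster/richer but is
    # proved to agree with the layer below
    def checker(formula):
        return trusted_checker(formula)
    return checker

trusted = [checker_a]
for _ in "BCD":                    # build checkers B, C, D in layers
    trusted.append(make_next_checker(trusted[-1]))

# every layer agrees with A on what counts as a proof
assert all(c("theorem gcd_comm") for c in trusted)
assert not any(c("garbage") for c in trusted)
```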

At the machine code level, there are techniques that would allow
the FPGA proof to "step in sequence" with the executing code.
Some work has been done on using "Hoare Logic for Realistically
Modelled Machine Code" (paper attached, Myre07a.pdf) and
"Decompilation into Logic -- Improved" (Myre12a.pdf).

So the game is to construct a GCD over some type (Nats, Polys, etc.;
Axiom has 22), compile the dependent-type GCD to machine code.
In parallel, the proof of the code is compiled to machine code. The
pair is sent to the CPU/FPGA and, while the algorithm runs, the FPGA
ensures the proof is not violated, instruction by instruction.

(I'm ignoring machine architecture issues such as pipelining,
out-of-order execution, branch prediction, and other machine-level
things to ponder. I'm looking at the RISC-V Verilog details by
various people to understand better but it is still a "misty fog"
for me.)

The result is proven code "down to the metal".

Tim





On Sat, Nov 13, 2021 at 5:28 PM Tim Daly <axiomcas@gmail.com> wrote:
Full support for general, first-class dependent types requires
some changes to the Axiom design. That implies some language
design questions.

Given that mathematics is such a general subject with a lot of
"local" notation and ideas (witness logical judgment notation)
careful thought is needed to design a language that is able to
handle a wide range.

Normally language design is a two-level process. The language
designer creates a language and then an implementation. Various
design choices affect the final language.

There is "The Metaobject Protocol" (MOP)
which encourages a three-level process. The language designer
works at a metalevel to design a family of languages, then the
language specializations, then the implementation. A MOP design
allows the language user to optimize the language to their problem.
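The metalevel idea can be illustrated with Python metaclasses, a rough analogue of a metaobject protocol (CLOS's MOP is far richer; the registry policy below is an invented example). The metaclass level changes how classes themselves are constructed, here by auto-registering every domain as it is defined:

```python
# Sketch: a metaclass as a "language designer" level that customizes
# class construction for a whole family of classes.

REGISTRY = {}

class DomainMeta(type):
    def __new__(mcls, name, bases, namespace):
        cls = super().__new__(mcls, name, bases, namespace)
        if bases:                  # skip the abstract root class
            REGISTRY[name] = cls   # metalevel policy: auto-registration
        return cls

class Domain(metaclass=DomainMeta):
    pass

class Integer(Domain): pass
class Polynomial(Domain): pass

print(sorted(REGISTRY))   # ['Integer', 'Polynomial']
```

The user writes ordinary class definitions; the metalevel decides what "defining a class" means, which is exactly the three-level split described above.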

A simple paper on the subject is "Metaobject Protocols".

Tim
On Mon, Oct 25, 2021 at 7:42 PM Tim Daly <axiomcas@gmail.com> wrote:
I have a separate thread of research on Self-Replicating Systems
(ref: "Kinematics of Self-Reproducing Machines")
which led to watching "Strange Dreams of Stranger Loops" by Will Byrd.

Will referenced a PhD thesis by Jon Doyle,
"A Model for Deliberation, Action, and Introspection".

I also read the thesis by J.C.G. Sturdy,
"A Lisp through the Looking Glass".

Self-replication requires the ability to manipulate your own
representation in such a way that changes to that representation
will change behavior.

This leads to two thoughts in the SANE research.

First, "Declarative Representation". That is, most of the things
about the representation should be declarative rather than
procedural. Applying this idea as much as possible makes it
easier to understand and manipulate.

Second, "Explicit Call Stack". Function calls form an implicit
call stack. This can usually be displayed in a running lisp system.
However, having the call stack explicitly available would mean
that a system could "introspect" at the first-class level.
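An explicit call stack can be sketched quickly. This is an illustrative Python version (invented names, not SANE code): instead of relying on the host language's hidden stack, every call pushes a first-class frame that the program itself can inspect afterwards.

```python
# Sketch: an explicit, inspectable call stack recorded as plain data.

CALL_STACK = []

def traced_call(name, fn, *args):
    frame = {"fn": name, "args": args}
    CALL_STACK.append(frame)
    result = fn(*args)
    frame["result"] = result      # record the return value in the frame
    return result

def gcd(a, b):
    if b == 0:
        return a
    return traced_call("gcd", gcd, b, a % b)

traced_call("gcd", gcd, 12, 8)
for frame in CALL_STACK:          # the system can "show the work"
    print(frame)
```

Because the frames are ordinary data, they can be displayed, searched, or even manipulated after the answer is produced, which is the "introspect at the first-class level" point.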

These two ideas would make it easy, for example, to let the
system "show the work". One of the normal complaints is that
a system presents an answer but there is no way to know how
that answer was derived. These two ideas make it possible to
understand, display, and even post-answer manipulate
the intermediate steps.

Having the intermediate steps also allows proofs to be
inserted in a step-by-step fashion. This aids the effort to
have proofs run in parallel with computation at the hardware
level.

Tim






On Thu, Oct 21, 2021 at 9:50 AM Tim Daly <axiomcas@gmail.com> wrote:
So the current struggle involves the categories in Axiom.

The categories and domains constructed using categories
are dependent types. When are dependent types "equal"?
Well, hummmm, that depends on the arguments to the
constructor.

But in order to decide(?) equality we have to evaluate
the arguments (which themselves can be dependent types).
Indeed, we may, and in general we must, evaluate the
arguments at compile time (well, "construction time" as
there isn't really a compiler / interpreter separation anymore.)
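The simplest case of this argument evaluation can be sketched. This toy (Python; the `Matrix` constructor and interning scheme are invented for illustration) evaluates constructor arguments at construction time and interns the result, so two constructions with equal evaluated arguments yield the same type object:

```python
# Sketch: dependent-type "equality" by evaluating constructor
# arguments and interning the constructed type.

_TYPES = {}

def Matrix(rows, cols, elem):
    # arguments are evaluated here, at construction time, so
    # Matrix(2 + 1, 3, "Float") and Matrix(3, 3, "Float") coincide
    key = ("Matrix", int(rows), int(cols), elem)
    return _TYPES.setdefault(key, key)

assert Matrix(2 + 1, 3, "Float") is Matrix(3, 3, "Float")
assert Matrix(3, 3, "Float") is not Matrix(3, 4, "Float")
```

This only covers the easy, decidable corner; when arguments are themselves dependent types, or equality is equivalence-up-to-the-groupoid-structure, the interning trick no longer suffices, which is the struggle described here.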

That raises the question of what "equality" means. This
is not simply a "set equality" relation. It falls into the
infinite-groupoid of homotopy type theory. In general
it appears that deciding category / domain equivalence
might force us to climb the type hierarchy.

Beyond that, there is the question of "which proof"
applies to the resulting object. Proofs depend on their
assumptions, which might be different for different
constructions. As yet I have no clue how to "index"
proofs based on their assumptions, nor how to
connect these assumptions to the groupoid structure.

My brain hurts.

Tim


On Mon, Oct 18, 2021 at 2:00 AM Tim Daly <axiomcas@gmail.com> wrote:
<= blockquote class=3D"gmail_quote" style=3D"margin:0px 0px 0px 0.8ex;border-l= eft:1px solid rgb(204,204,204);padding-left:1ex">
&quo= t;Birthing Computational Mathematics"

The Axi= om SANE project is difficult at a very fundamental
level. The tit= le "SANE" was chosen due to the various
words found in = a thesuarus... "rational", "coherent",
"= judicious" and "sound".

These are v= ery high level, amorphous ideas. But so is
the design of SANE. Br= eaking away from tradition in
computer algebra, type theory, and = proof assistants
is very difficult. Ideas tend to fall into stand= ard jargon
which limits both the frame of thinking (e.g. dependen= t
types) and the content (e.g. notation).

Questioning both frame and content is very difficult.
It is har= d to even recognize when they are accepted
"by default"= rather than "by choice". What does the idea
"= power tools" mean in a primitive, hand labor culture?
Christopher Alexander [0] addresses this problem in
= a lot of his writing. Specifically, in his book "Notes on
th= e Synthesis of Form", in his chapter 5 "The Selfconsious
Process", he addresses this problem directly. This is a
&q= uot;must read" book.

Unlike building desi= gn and contruction, however, there
are almost no constraints to u= se as guides. Alexander
quotes Plato's Phaedrus:
=C2=A0 "First, the taking in of scattered particulars und= er
=C2=A0=C2=A0 one Idea, so that everyone understands what is be= ing
=C2=A0=C2=A0 talked about ... Second, the separation of the I= dea
=C2=A0=C2=A0 into parts, by dividing it at the joints, as nat= ure
=C2=A0=C2=A0 directs, not breaking any limb in half as a bad =
=C2=A0=C2=A0 carver might."

Lisp, which has been called "clay for the mind", can
build virtually anything that can be thought. The
"joints" are also "of one's choosing" so one is
both carver and "nature".

Clearly the problem is no longer "the tools".
*I* am the problem constraining the solution.
Birthing this "new thing" is slow, difficult, and
uncertain at best.

Tim

[0] Alexander, Christopher "Notes on the Synthesis
of Form" Harvard University Press 1964
ISBN 0-674-62751-2


On Sun, Oct 10, 2021 at 4:40 PM Tim Daly <axiomcas@gmail.com> wrote:
Re: writing a paper... I'm not connected to Academia
so anything I'd write would never make it into print.

"Language level parsing" is still a long way off. The talk
by Guy Steele [2] highlights some of the problems we
currently face using mathematical metanotation.

For example, a professor I know at CCNY (City College
of New York) didn't understand Platzer's "funny
fraction notation" (proof judgements) despite being
an expert in Platzer's differential equations area.

Notation matters and is not widely shared.

I spoke to Professor Black (in LTI) about using natural
language in the limited task of human-robot cooperation
in changing a car tire. I looked at the current machine
learning efforts. They are nowhere near anything but
toy systems, taking too long to train, and are too fragile.

Instead I ended up using a combination of AIML [3]
(Artificial Intelligence Markup Language), the ALICE
Chatbot [4], Forgy's OPS5 rule based program [5],
and Fahlman's SCONE [6] knowledge base. It was
much less fragile in my limited domain problem.

I have no idea how to extend any system to deal with
even undergraduate mathematics parsing.

Nor do I have any idea how I would embed LEAN
knowledge into a SCONE database, although I
think the combination would be useful and interesting.

I do believe that, in the limited area of computational
mathematics, we are capable of building robust, proven
systems that are quite general and extensible. As you
might have guessed I've given it a lot of thought over
the years :-)

A mathematical language seems to need at least 7 components:

1) We need some sort of a specification language, possibly
somewhat 'propositional' that introduces the assumptions
you mentioned (ref. your discussion of numbers being
abstract and ref. your discussion of relevant choice of
assumptions related to a problem).

This is starting to show up in the hardware area (e.g.
Lamport's TLC [0])

Of course, specifications relate to proving programs
and, as you recall, I got a cold reception from the
LEAN community about using LEAN for program proofs.

2) We need "scaffolding". That is, we need a theory
that can be reduced to some implementable form
that provides concept-level structure.

Axiom uses group theory for this. Axiom's "category"
structure has "Category" things like Ring. Claiming
to be a Ring brings in a lot of "Signatures" of functions
you have to implement to properly be a Ring.

Scaffolding provides a firm mathematical basis for
design. It provides a link between the concept of a
Ring and the expectations you can assume when
you claim your "Domain" "is a Ring". Category
theory might provide similar structural scaffolding
(eventually... I'm still working on that thought garden)
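
The category obligation described above (Axiom's categories are
written in SPAD) can be sketched in Python with abstract base
classes. The names and the tiny signature set here are illustrative,
not Axiom's actual Ring category:

```python
from abc import ABC, abstractmethod

class Ring(ABC):
    """A 'Category' in the Axiom sense: claiming to be a Ring
    obligates a Domain to implement these signatures."""
    @abstractmethod
    def add(self, a, b): ...
    @abstractmethod
    def mul(self, a, b): ...
    @abstractmethod
    def zero(self): ...
    @abstractmethod
    def one(self): ...

class IntegerDomain(Ring):
    """A 'Domain' that claims to be a Ring by supplying
    every required signature."""
    def add(self, a, b): return a + b
    def mul(self, a, b): return a * b
    def zero(self): return 0
    def one(self): return 1

d = IntegerDomain()
print(d.add(d.one(), d.one()))  # 2
```

A Domain that omits a required signature cannot even be
instantiated, which is a crude analogue of failing to "properly
be a Ring".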

LEAN ought to have a textbook(s?) that structures
the world around some form of mathematics. It isn't
sufficient to say "undergraduate math" is the goal.
There needs to be some coherent organization so
people can bring ideas like Group Theory to the
organization. Which brings me to ...

3) We need "spreading". That is, we need to take
the various definitions and theorems in LEAN and
place them in their proper place in the scaffold.

For example, the Ring category needs the definitions
and theorems for a Ring included in the code for the
Ring category. Similarly, the Commutative category
needs the definitions and theorems that underlie
"commutative" included in the code.

That way, when you claim to be a "Commutative Ring"
you get both sets of definitions and theorems. That is,
the inheritance mechanism will collect up all of the
definitions and theorems and make them available
for proofs.
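
A minimal sketch of this inheritance-driven collection, in Python
rather than SPAD (the axiom strings and class names are invented
for the illustration):

```python
class Category:
    """Each category carries the definitions/theorems local to it."""
    axioms = {}

    @classmethod
    def collected_axioms(cls):
        # walk the inheritance chain, gathering every inherited axiom
        out = {}
        for base in reversed(cls.__mro__):
            out.update(getattr(base, "axioms", {}))
        return out

class Ring(Category):
    axioms = {"distributive": "a*(b+c) = a*b + a*c"}

class Commutative(Category):
    axioms = {"commutative": "a*b = b*a"}

class CommutativeRing(Ring, Commutative):
    pass  # inherits, and thereby collects, both sets of axioms

print(sorted(CommutativeRing.collected_axioms()))
# ['commutative', 'distributive']
```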

I am looking at LEAN's definitions and theorems with
an eye to "spreading" them into the group scaffold of
Axiom.

4) We need "carriers" (Axiom calls them representations,
aka "REP"). REPs allow data structures to be defined
independent of the implementation.

For example, Axiom can construct Polynomials that
have their coefficients in various forms of representation.
You can define "dense" (all coefficients in a list),
"sparse" (only non-zero coefficients), "recursive", etc.

A "dense polynomial" and a "sparse polynomial" work
exactly the same way as far as the user is concerned.
They both implement the same set of functions. There
is only a difference of representation for efficiency and
this only affects the implementation of the functions,
not their use.
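
The idea can be sketched in Python (a toy stand-in for Axiom's REP
mechanism; the class and function names are illustrative):

```python
class DensePolyRep:
    """Carrier: all coefficients, lowest degree first."""
    def __init__(self, coeffs): self.coeffs = list(coeffs)
    def coefficient(self, n):
        return self.coeffs[n] if n < len(self.coeffs) else 0

class SparsePolyRep:
    """Carrier: only the non-zero coefficients, as {degree: coeff}."""
    def __init__(self, terms): self.terms = dict(terms)
    def coefficient(self, n):
        return self.terms.get(n, 0)

def evaluate(rep, x, degree_bound):
    """Identical for either carrier: only the shared
    interface (coefficient) is used."""
    return sum(rep.coefficient(n) * x**n for n in range(degree_bound + 1))

# x^2 + 3, in both representations
dense = DensePolyRep([3, 0, 1])
sparse = SparsePolyRep({0: 3, 2: 1})
assert evaluate(dense, 2, 2) == evaluate(sparse, 2, 2) == 7
```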

Axiom "got this wrong" because it didn't sufficiently
separate the REP from the "Domain". I plan to fix this.

LEAN ought to have a "data structures" subtree that
has all of the definitions and axioms for all of the
existing data structures (e.g. Red-Black trees). This
would be a good undergraduate project.

5) We need "Domains" (in Axiom speak). That is, we
need a box that holds all of the functions that implement
a "Domain". For example, a "Polynomial Domain" would
hold all of the functions for manipulating polynomials
(e.g. polynomial multiplication). The "Domain" box
is a dependent type that:

  A) has an argument list of "Categories" that this "Domain"
     box inherits. Thus, the "Integer Domain" inherits
     the definitions and axioms from "Commutative"

     Functions in the "Domain" box can now assume
     and use the properties of being commutative. Proofs
     of functions in this domain can use the definitions
     and proofs about being commutative.

  B) contains an argument that specifies the "REP"
     (aka, the carrier). That way you get all of the
     functions associated with the data structure
     available for use in the implementation.

     Functions in the Domain box can use all of
     the definitions and axioms about the representation
     (e.g. NonNegativeIntegers are always positive)

  C) contains local "spread" definitions and axioms
     that can be used in function proofs.

     For example, a "Square Matrix" domain would
     have local axioms that state that the matrix is
     always square. Thus, functions in that box could
     use these additional definitions and axioms in
     function proofs.

  D) contains local state. A "Square Matrix" domain
     would be constructed as a dependent type that
     specified the size of the square (e.g. a 2x2
     matrix would have '2' as a dependent parameter).

  E) contains implementations of inherited functions.

     A "Category" could have a signature for a GCD
     function and the "Category" could have a default
     implementation. However, the "Domain" could
     have a locally more efficient implementation which
     overrides the inherited implementation.

     Axiom has about 20 GCD implementations that
     differ locally from the default in the category. They
     use properties known locally to be more efficient.

  F) contains local function signatures.

     A "Domain" gives the user more and more unique
     functions. The signatures have associated
     "pre- and post- conditions" that can be used
     as assumptions in the function proofs.

     Some of the user-available functions are only
     visible if the dependent type would allow them
     to exist. For example, a general Matrix domain
     would have fewer user functions than a Square
     Matrix domain.

     In addition, local "helper" functions need their
     own signatures that are not user visible.

  G) the function implementation for each signature.

     This is obviously where all the magic happens.

  H) the proof of each function.

     This is where I'm using LEAN.

     Every function has a proof. That proof can use
     all of the definitions and axioms inherited from
     the "Category", "Representation", the "Domain
     Local", and the signature pre- and post-
     conditions.

  I) literature links. Algorithms must contain a link
     to at least one literature reference. Of course,
     since everything I do is a Literate Program
     this is obviously required. Knuth said so :-)
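
Several of these points can be sketched together in Python: the
dependent size parameter (point D), the local squareness axiom
(point C), and the carrier argument (point B). This is a toy
illustration, not SANE's actual Domain mechanism:

```python
def SquareMatrix(n, rep=list):
    """A 'Domain' constructor: n is the dependent parameter (local
    state), rep is the carrier, and the squareness check stands in
    for a local axiom usable in function proofs."""
    class SqMat:
        size = n
        def __init__(self, rows):
            # local "axiom": the matrix is always square
            assert len(rows) == n and all(len(r) == n for r in rows)
            self.rows = rep(rep(r) for r in rows)
        def trace(self):
            # safe to walk the diagonal: squareness is guaranteed
            return sum(self.rows[i][i] for i in range(n))
    SqMat.__name__ = f"SquareMatrix{n}"
    return SqMat

M2 = SquareMatrix(2)
m = M2([[1, 2], [3, 4]])
print(m.trace())  # 5
```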


LEAN ought to have "books" or "pamphlets" that
bring together all of this information for a domain
such as Square Matrices. That way a user can
find all of the related ideas, available functions,
and their corresponding proofs in one place.

6) User level presentation.

    This is where the systems can differ significantly.
    Axiom and LEAN both have GCD but they use
    that for different purposes.

    I'm trying to connect LEAN's GCD and Axiom's GCD
    so there is a "computational mathematics" idea that
    allows the user to connect proofs and implementations.

7) Trust

Unlike everything else, computational mathematics
can have proven code that gives various guarantees.

I have been working on this aspect for a while.
I refer to it as trust "down to the metal". The idea is
that a proof of the GCD function and the implementation
of the GCD function get packaged into the ELF format
(proof carrying code). When the GCD algorithm executes
on the CPU, the GCD proof is run through the LEAN
proof checker on an FPGA in parallel.
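
The pre-/post-condition flavor of such a proof obligation can be
sketched with runtime checks. This Python sketch only illustrates
the obligations; it is not the LEAN proof or the ELF packaging:

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm with its pre- and post-conditions made
    explicit, standing in for the obligations a proof checker
    would discharge."""
    assert a >= 0 and b >= 0, "pre: non-negative inputs"
    x, y = a, b
    while y:
        x, y = y, x % y
    # post: the result divides both (non-zero) inputs
    assert (a == 0 or a % x == 0) and (b == 0 or b % x == 0)
    return x

print(gcd(12, 18))  # 6
```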

(I just recently got a PYNQ Xilinx board [1] with a CPU
and FPGA together. I'm trying to implement the LEAN
proof checker on the FPGA).

We are on the cusp of a revolution in computational
mathematics. But the two pillars (proof and computer
algebra) need to get to know each other.

Tim



[0] Lamport, Leslie "Chapter on TLA+"
in "Software Specification Methods"
https://www.springer.com/gp/book/9781852333539
(I no longer have CMU library access or I'd send you
the book PDF)

[1] https://www.tul.com.tw/productspynq-z2.html

[2] https://www.youtube.com/watch?v=dCuZkaaou0Q
[3] "ARTIFICIAL INTELLIGENCE MARKUP LANGUAGE"
https://arxiv.org/pdf/1307.3091.pdf

[4] ALICE Chatbot
http://www.scielo.org.mx/pdf/cys/v19n4/1405-5546-cys-19-04-00625.pdf

[5] OPS5 User Manual
https://kilthub.cmu.edu/articles/journal_contribution/OPS5_user_s_manual/6608090/1

[6] Scott Fahlman "SCONE"
http://www.cs.cmu.edu/~sef/scone/

On 9/27/21, Tim Daly <axiomcas@gmail.com> wrote:
> I have tried to maintain a list of names of people who have
> helped Axiom, going all the way back to the pre-Scratchpad
> days. The names are listed at the beginning of each book.
> I also maintain a bibliography of publications I've read or
> that have had an indirect influence on Axiom.
>
> Credit is "the coin of the realm". It is easy to share and wrong
> to ignore. It is especially damaging to those in Academia who
> are affected by credit and citations in publications.
>
> Apparently I'm not the only person who feels that way. The ACM
> Turing award seems to have ignored a lot of work:
>
> Scientific Integrity, the 2021 Turing Lecture, and the 2018 Turing
> Award for Deep Learning
> https://people.idsia.ch/~juergen/scientific-integrity-turing-award-deep-learning.html
>
> I worked on an AI problem at IBM Research called Ketazolam
> (https://en.wikipedia.org/wiki/Ketazolam). The idea was to recognize
> and associate 3D chemical drawings with their drug counterparts.
> I used Rumelhart and McClelland's books. These books contained
> quite a few ideas that seem to be "new and innovative" among the
> machine learning crowd... but the books are from 1987. I don't believe
> I've seen these books mentioned in any recent bibliography.
> https://mitpress.mit.edu/books/parallel-distributed-processing-volume-1
>
>
>
>
> On 9/27/21, Tim Daly <axiomcas@gmail.com> wrote:
>> Greg Wilson asked "How Reliable is Scientific Software?"
>> https://neverworkintheory.org/2021/09/25/how-reliable-is-scientific-software.html
>>
>> which is a really interesting read. For example:
>>
>> [Hatton1994] is now a quarter of a century old, but its conclusions
>> are still fresh. The authors fed the same data into nine commercial
>> geophysical software packages and compared the results; they found
>> that, "numerical disagreement grows at around the rate of 1% in
>> average absolute difference per 4000 lines of implemented code, and,
>> even worse, the nature of the disagreement is nonrandom" (i.e., the
>> authors of different packages make similar mistakes).
>>
>>
>> On 9/26/21, Tim Daly <axiomcas@gmail.com> wrote:
>>> I should note that the latest board I've just unboxed
>>> (a PYNQ-Z2) is a Zynq Z-7020 chip from Xilinx (AMD).
>>>
>>> What makes it interesting is that it contains 2 hard
>>> core processors and an FPGA, connected by 9 paths
>>> for communication. The processors can be run
>>> independently so there is the possibility of a parallel
>>> version of some Axiom algorithms (assuming I had
>>> the time, which I don't).
>>>
>>> Previously either the hard (physical) processor was
>>> separate from the FPGA with minimal communication
>>> or the soft core processor had to be created in the FPGA
>>> and was much slower.
>>>
>>> Now the two have been combined in a single chip.
>>> That means that my effort to run a proof checker on
>>> the FPGA and the algorithm on the CPU just got to
>>> the point where coordination is much easier.
>>>
>>> Now all I have to do is figure out how to program this
>>> beast.
>>>
>>> There is no such thing as a simple job.
>>>
>>> Tim
>>>
>>>
>>> On 9/26/21, Tim Daly <axiomcas@gmail.com> wrote:
>>>> I'm familiar with most of the traditional approaches
>>>> like Theorema. The bibliography contains most of the
>>>> more interesting sources. [0]
>>>>
>>>> There is a difference between traditional approaches to
>>>> connecting computer algebra and proofs and my approach.
>>>>
>>>> Proving an algorithm, like the GCD, in Axiom is hard.
>>>> There are many GCDs (e.g. NNI vs POLY) and there
>>>> are theorems and proofs passed at runtime in the
>>>> arguments of the newly constructed domains. This
>>>> involves a lot of dependent type theory and issues of
>>>> compile time / runtime argument evaluation. The issues
>>>> that arise are difficult and still being debated in the type
>>>> theory community.
>>>>
>>>> I am putting the definitions, theorems, and proofs (DTP)
>>>> directly into the category/domain hierarchy. Each category
>>>> will have the DTP specific to it. That way a commutative
>>>> domain will inherit a commutative theorem and a
>>>> non-commutative domain will not.
>>>>
>>>> Each domain will have additional DTPs associated with
>>>> the domain (e.g. NNI vs Integer) as well as any DTPs
>>>> it inherits from the category hierarchy. Functions in the
>>>> domain will have associated DTPs.
>>>>
>>>> A function to be proven will then inherit all of the relevant
>>>> DTPs. The proof will be attached to the function and
>>>> both will be sent to the hardware (proof-carrying code).
>>>>
>>>> The proof checker, running on a field programmable
>>>> gate array (FPGA), will be checked at runtime in
>>>> parallel with the algorithm running on the CPU
>>>> (aka "trust down to the metal"). (Note that Intel
>>>> and AMD have built CPU/FPGA combined chips,
>>>> currently only available in the cloud.)
>>>>
>>>>
>>>>
>>>> I am (slowly) making progress on the research.
>>>>
>>>> I have the hardware and nearly have the proof
>>>> checker from LEAN running on my FPGA.
>>>>
>>>> I'm in the process of spreading the DTPs from
>>>> LEAN across the category/domain hierarchy.
>>>>
>>>> The current Axiom build extracts all of the functions
>>>> but does not yet have the DTPs.
>>>>
>>>> I have to restructure the system, including the compiler
>>>> and interpreter to parse and inherit the DTPs. I
>>>> have some of that code but only some of the code
>>>> has been pushed to the repository (volume 15) but
>>>> that is rather trivial, out of date, and incomplete.
>>>>
>>>> I'm clearly not smart enough to prove the Risch
>>>> algorithm and its associated machinery but the needed
>>>> definitions and theorems will be available to someone
>>>> who wants to try.
>>>>
>>>> [0] https://github.com/daly/PDFS/blob/master/bookvolbib.pdf
>>>>
>>>>
>>>> On 8/19/21, Tim Daly <axiomcas@gmail.com> wrote:
>>>>> =========================================
>>>>>
>>>>> REVIEW (Axiom on WSL2 Windows)
>>>>>
>>>>>
>>>>> So the steps to run Axiom from a Windows desktop
>>>>>
>>>>> 1 Windows) install XMing on Windows for X11 server
>>>>>
>>>>> http://www.straightrunning.com/XmingNotes/
>>>>>
>>>>> 2 WSL2) Install Axiom in WSL2
>>>>>
>>>>> sudo apt install axiom
>>>>>
>>>>> 3 WSL2) modify /usr/bin/axiom to fix the bug:
>>>>> (someone changed the axiom startup script.
>>>>> It won't work on WSL2. I don't know who or
>>>>> how to get it fixed).
>>>>>
>>>>> sudo emacs /usr/bin/axiom
>>>>>
>>>>> (split the line into 3 and add quote marks)
>>>>>
>>>>> export SPADDEFAULT=/usr/local/axiom/mnt/linux
>>>>> export AXIOM=/usr/lib/axiom-20170501
>>>>> export "PATH=/usr/lib/axiom-20170501/bin:$PATH"
>>>>>
>>>>> 4 WSL2) create a .axiom.input file to include startup = cmds:
>>>>>
>>>>> emacs .axiom.input
>>>>>
>>>>> )cd "/mnt/c/yourpath"
>>>>> )sys pwd
>>>>>
>>>>> 5 WSL2) create a "myaxiom" command that sets the
>>>>>      DISPLAY variable and starts axiom
>>>>>
>>>>> emacs myaxiom
>>>>>
>>>>> #! /bin/bash
>>>>> export DISPLAY=:0.0
>>>>> axiom
>>>>>
>>>>> 6 WSL2) put it in the /usr/bin directory
>>>>>
>>>>> chmod +x myaxiom
>>>>> sudo cp myaxiom /usr/bin/myaxiom
>>>>>
>>>>> 7 WINDOWS) start the X11 server
>>>>>
>>>>> (XMing XLaunch Icon on your desktop)
>>>>>
>>>>> 8 WINDOWS) run myaxiom from PowerShell
>>>>> (this should start axiom with graphics available)
>>>>>
>>>>> wsl myaxiom
>>>>>
>>>>> 9 WINDOWS) make a PowerShell desktop
>>>>>
>>>>> https://superuser.com/questions/886951/run-powershell-script-when-you-open-powershell
>>>>>
>>>>> Tim
>>>>>
>>>>> On 8/13/21, Tim Daly <axiomcas@gmail.com> wrote:
>>>>>> A great deal of thought is directed toward making the SANE version
>>>>>> of Axiom as flexible as possible, decoupling mechanism from theory.
>>>>>>
>>>>>> An interesting publication by Brian Cantwell Smith [0], "Reflection
>>>>>> and Semantics in LISP" seems to contain interesting ideas related
>>>>>> to our goal. Of particular interest is the ability to reason about
>>>>>> and perform self-referential manipulations. In a dependently-typed
>>>>>> system it seems interesting to be able to "adapt" code to handle
>>>>>> run-time computed arguments to dependent functions. The abstract:
>>>>>>
>>>>>>    "We show how a computational system can be constructed to
>>>>>> "reason", effectively and consequentially, about its own inferential
>>>>>> processes. The analysis proceeds in two parts. First, we consider
>>>>>> the general question of computational semantics, rejecting
>>>>>> traditional approaches, and arguing that the declarative and
>>>>>> procedural aspects of computational symbols (what they stand for,
>>>>>> and what behaviour they engender) should be analysed independently,
>>>>>> in order that they may be coherently related. Second, we
>>>>>> investigate self-referential behavior in computational processes,
>>>>>> and show how to embed an effective procedural model of a
>>>>>> computational calculus within that calculus (a model not unlike a
>>>>>> meta-circular interpreter, but connected to the fundamental
>>>>>> operations of the machine in such a way as to provide, at any point
>>>>>> in a computation, fully articulated descriptions of the state of
>>>>>> that computation, for inspection and possible modification). In
>>>>>> terms of the theories that result from these investigations, we
>>>>>> present a general architecture for procedurally reflective
>>>>>> processes, able to shift smoothly between dealing with a given
>>>>>> subject domain, and dealing with their own reasoning processes over
>>>>>> that domain.
>>>>>>
>>>>>>    An instance of the general solution is worked out in the context
>>>>>> of an applicative language. Specifically, we present three
>>>>>> successive dialects of LISP: 1-LISP, a distillation of current
>>>>>> practice, for comparison purposes; 2-LISP, a dialect constructed in
>>>>>> terms of our rationalised semantics, in which the concept of
>>>>>> evaluation is rejected in favour of independent notions of
>>>>>> simplification and reference, and in which the respective
>>>>>> categories of notation, structure, semantics, and behaviour are
>>>>>> strictly aligned; and 3-LISP, an extension of 2-LISP endowed with
>>>>>> reflective powers."
>>>>>>
>>>>>> Axiom SANE builds dependent types on the fly. The ability to access
>>>>>> both the reflection of the tower of algebra and the reflection of
>>>>>> the tower of proofs at the time of construction makes the
>>>>>> construction of a new domain or specific algorithm easier and more
>>>>>> general.
>>>>>>
>>>>>> This is of particular interest because one of the efforts is to
>>>>>> build "all the way down to the metal". If each layer is constructed
>>>>>> on top of previous proven layers and the new layer can "reach
>>>>>> below" to lower layers then the tower of layers can be built
>>>>>> without duplication.
>>>>>>
>>>>>> Tim
>>>>>>
>>>>>> [0] Smith, Brian Cantwell "Reflection and Semantics in LISP"
>>>>>> POPL '84: Proceedings of the 11th ACM SIGACT-SIGPLAN
>>>>>> Symposium on Principles of Programming Languages, January 1984,
>>>>>> Pages 23-35 https://doi.org/10.1145/800017.800513
>>>>>>
>>>>>> On 6/29/21, Tim Daly <axiomcas@gmail.com> wrote:
>>>>>>> Having spent time playing with hardware it is perfectly clear that
>>>>>>> future computational mathematics efforts need to adapt to using
>>>>>>> parallel processing.
>>>>>>>
>>>>>>> I've spent a fair bit of time thinking about structuring Axiom to
>>>>>>> be parallel. Most past efforts have tried to focus on making a
>>>>>>> particular algorithm parallel, such as a matrix multiply.
>>>>>>>
>>>>>>> But I think that it might be more effective to make each domain
>>>>>>> run in parallel. A computation crosses multiple domains so a
>>>>>>> particular computation could involve multiple parallel copies.
>>>>>>>
>>>>>>> For example, computing the Cylindrical Algebraic Decomposition
>>>>>>> could recursively decompose the plane. Indeed, any tree-recursive
>>>>>>> algorithm could be run in parallel "in the large" by creating new
>>>>>>> running copies of the domain for each sub-problem.
>>>>>>>
>>>>>>> So the question becomes, how does one manage this?
>>>>>>>
>>>>>>> A similar problem occurs in robotics where one could have multiple
>>>>>>> wheels, arms, propellers, etc. that need to act independently but
>>>>>>> in coordination.
>>>>>>>
>>>>>>> The robot solution uses ROS2. The three ideas are ROSCORE,
>>>>>>> TOPICS with publish/subscribe, and SERVICES with request/response.
>>>>>>> These are communication paths defined between processes.
>>>>>>>
>>>>>>> ROS2 has a "roscore" which is basically a phonebook of "topics".
>>>>>>> Any process can create or look up the current active topics, e.g.:
>>>>>>>
>>>>>>>    rosnode list
>>>>>>>
>>>>>>> TOPICS:
>>>>>>>
>>>>>>> Any process can PUBLISH a topic (which is basically a typed data
>>>>>>> structure), e.g. the topic /hw with the String data "Hello World":
>>>>>>>
>>>>>>>    rostopic pub /hw std_msgs/String "Hello, World"
>>>>>>>
>>>>>>> Any process can SUBSCRIBE to a topic, such as /hw, and get a
>>>>>>> copy of the data, e.g.:
>>>>>>>
>>>>>>>    rostopic echo /hw   ==> "Hello, World"
>>>>>>>
>>>>>>> Publishers talk, subscribers listen.
>>>>>>>
>>>>>>>
>>>>>>> SERVICES:
>>>>>>>
>>>>>>> Any process can make a REQUEST of a SERVICE and get a RESPONSE.
>>>>>>> This is basically a remote function call.
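
The TOPICS half of the model quoted above can be sketched in a few
lines of Python (a toy "roscore", not the actual ROS2 API):

```python
class RosCore:
    """A toy 'roscore': a phonebook mapping topic names
    to subscriber callbacks."""
    def __init__(self):
        self.topics = {}
    def subscribe(self, topic, callback):
        self.topics.setdefault(topic, []).append(callback)
    def publish(self, topic, msg):
        # deliver the message to every subscriber of the topic
        for cb in self.topics.get(topic, []):
            cb(msg)

core = RosCore()
received = []
core.subscribe("/hw", received.append)   # subscribers listen
core.publish("/hw", "Hello, World")      # publishers talk
print(received)  # ['Hello, World']
```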
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> Axiom in parallel?
>>>>>>>
>>>>>>> So domains could run, each in its own process. It could provide
>>>>>>> services, one for each function. Any other process could request
>>>>>>> a computation and get the result as a response. Domains could
>>>>>>> request services from other domains, either waiting for responses
>>>>>>> or continuing while the response is being computed.
>>>>>>>
>>>>>>> The output could be sent anywhere, to a terminal, to a browser,
>>>>>>> to a network, or to another process using the publish/subscribe
>>>>>>> protocol, potentially all at the same time since there can be many
>>>>>>> subscribers to a topic.
>>>>>>>
>>>>>>> Available domains could be dynamically added by announcing
>>>>>>> themselves as new "topics" and could be dynamically looked-up
>>>>>>> at runtime.
>>>>>>>
>>>>>>> This structure allows function-level / domain-level parallelism.
>>>>>>> It is very effective in the robot world and I think it might be a
>>>>>>> good structuring mechanism to allow computational mathematics
>>>>>>> to take advantage of multiple processors in a disciplined fashion.
>>>>>>>
>>>>>>> Axiom has a thousand domains and each could run on its own core.
>>>>>>>
>>>>>>> In addition, notice that each domain is independent of the others.
>>>>>>> So if we want to use BLAS Fortran code, it could just be another
>>>>>>> service node. In fact, any "foreign function" could transparently
>>>>>>> cooperate in a distributed Axiom.
>>>>>>>
>>>>>>> Another key feature is that proofs can be "by node".
>>>>>>>
>>>>>>> Tim
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On 6/5/21, Tim Daly <axiomcas@gmail.com> wrote:
>>>>>>>> Axiom is based on first-class dependent types. Deciding when
>>>>>>>> two types are equivalent may involve computation. See
>>>>>>>> Christiansen, David Thrane "Checking Dependent Types with
>>>>>>>> Normalization by Evaluation" (2019)
>>>>>>>>
>>>>>>>> This puts an interesting constraint on building types. The
>>>>>>>> constructed types have to export a function to decide if a
>>>>>>>> given type is "equivalent" to itself.
>>>>>>>>
>>>>>>>> The notion of "equivalence" might involve category ideas
>>>>>>>> of natural transformation and univalence. Sigh.
>>>>>>>>
>>>>>>>> That's an interesting design point.
>>>>>>>>
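
A toy illustration of deciding type equivalence by computation,
loosely in the spirit of normalization by evaluation (the tuple
encoding of types here is invented for the sketch):

```python
def eval_size(expr):
    """Sizes may be computations, e.g. ("+", 1, 1); reduce them."""
    if isinstance(expr, tuple) and expr[0] == "+":
        return eval_size(expr[1]) + eval_size(expr[2])
    return expr

def normalize(t):
    """Compute a normal form for a (toy) type expression."""
    if isinstance(t, tuple) and t[0] == "Vec":
        _, elem, size = t
        return ("Vec", normalize(elem), eval_size(size))
    return t

def equivalent(t1, t2):
    # two types are equivalent when their normal forms coincide
    return normalize(t1) == normalize(t2)

# Vec(Int, 1+1) is the "same" type as Vec(Int, 2) only after computing
print(equivalent(("Vec", "Int", ("+", 1, 1)), ("Vec", "Int", 2)))  # True
```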
>>>>>>>> Tim
>>>>>>>>
>>>>>>>>
>>>>>>>> On 5/5/21, Tim Daly <axiomcas@gmail.com> wrote:
>>>>>>>>> It is interesting that programmers' eyes and expectations adapt
>>>>>>>>> to the tools they use. For instance, I use emacs and expect to
>>>>>>>>> work directly in files and multiple buffers. When I try to use
>>>>>>>>> one of the many IDE tools I find they tend to "get in the way".
>>>>>>>>> I already know or can quickly find whatever they try to tell me.
>>>>>>>>> If you use an IDE you probably find emacs "too sparse" for
>>>>>>>>> programming.
>>>>>>>>>
>>>>>>>>> Recently I've been working in a sparse programming environment.
>>>>>>>>> I'm exploring the question of running a proof checker in an FPGA.
>>>>>>>>> The FPGA development tools are painful at best and not intuitive
>>>>>>>>> since you SEEM to be programming but you're actually describing
>>>>>>>>> hardware gates, connections, and timing. This is an environment
>>>>>>>>> where everything happens all-at-once and all-the-time (like the
>>>>>>>>> circuits in your computer). It is the "assembly language of
>>>>>>>>> circuits". Naturally, my eyes have adapted to this rather raw
>>>>>>>>> level.
>>>>>>>>>
>>>>>>>>> That said, I'm normally doing literate programming all the time.
>>>>>>>>> My typical file is a document which is a mixture of latex and
>>>>>>>>> lisp. It is something of a shock to return to that world. It is
>>>>>>>>> clear why people who program in Python find lisp to be a "sea of
>>>>>>>>> parens". Yet as a lisp programmer, I don't even see the parens,
>>>>>>>>> just code.
>>>>>>>>>
>>>>>>>>> It takes a few minutes in a literate document to adapt vision to
>>>>>>>>> see the latex / lisp combination as natural. The latex markup,
>>>>>>>>> like the lisp parens, eventually just disappears. What remains
>>>>>>>>> is just lisp and natural language text.
>>>>>>>>>
>>>>>>>>> This seems painful at first but eyes quickly adapt. The upside
>>>>>>>>> is that there is always a "finished" document that describes the
>>>>>>>>> state of the code. The overhead of writing a paragraph to
>>>>>>>>> describe a new function or change a paragraph to describe the
>>>>>>>>> changed function is very small.
>>>>>>>>>
>>>>>>>>> Using a Makefile I latex the document to generate a current PDF
>>>>>>>>> and then I extract, load, and execute the code. This loop catches
>>>>>>>>> errors in both the latex and the source code. Keeping an open
>>>>>>>>> file in my pdf viewer shows all of the changes in the document
>>>>>>>>> after every run of make. That way I can edit the book as easily
>>>>>>>>> as the code.
>>>>>>>>>
>>>>>>>>> Ultimately I find that writing the book while writing the code is
>>>>>>>>> more productive. I don't have to remember why I wrote something
>>>>>>>>> since the explanation is already there.
>>>>>>>>>
>>>>>>>>> We all have our own way of programming= and our own tools.
>>>>>>>>> But I find literate programming to be = a real advance over IDE
>>>>>>>>> style programming and "raw code" programming.
>>>>>>>>>
>>>>>>>>> Tim
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On 2/27/21, Tim Daly <axiomcas@gmail.com> wrote:
>>>>>>>>>> The systems I use have the interesting property of
>>>>>>>>>> "Living within the compiler".
>>>>>>>>>>
>>>>>>>>>> Lisp, Forth, Emacs, and other systems that present themselves
>>>>>>>>>> through the Read-Eval-Print-Loop (REPL) allow the
>>>>>>>>>> ability to deeply interact with the system, shaping it to your
>>>>>>>>>> need.
>>>>>>>>>>
>>>>>>>>>> My current thread of study is software architecture. See
>>>>>>>>>> https://www.youtube.com/watch?v=W2hagw1VhhI&feature=youtu.be
>>>>>>>>>> and https://www.georgefairbanks.com/videos/
>>>>>>>>>>
>>>>>>>>>> My current thinking on SANE involves the ability to
>>>>>>>>>> dynamically define categories, representations, and functions
>>>>>>>>>> along with "composition functions" that permit choosing a
>>>>>>>>>> combination at the time of use.
>>>>>>>>>>
>>>>>>>>>> You might want a domain for handling polynomials. There are
>>>>>>>>>> a lot of choices, depending on your use case. You might want
>>>>>>>>>> different representations. For example, you might want dense,
>>>>>>>>>> sparse, recursive, or "machine compatible fixnums" (e.g. to
>>>>>>>>>> interface with C code). If these don't exist it ought to be
>>>>>>>>>> possible to create them. Such "lego-like" building blocks
>>>>>>>>>> require careful thought about creating "fully factored" objects.
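A minimal sketch of the "choose a representation at the point of use" idea, written in Python rather than SPAD or Lisp. All class and function names here are invented for illustration, not Axiom APIs: two interchangeable polynomial representations sit behind one small protocol, and a "composition function" picks one based on the data.

```python
# Illustrative sketch: interchangeable polynomial representations behind
# one protocol, with a chooser acting as a "composition function".

class DensePoly:
    """Coefficients indexed by degree: [c0, c1, c2, ...]."""
    def __init__(self, coeffs):
        self.coeffs = list(coeffs)
    def degree(self):
        return len(self.coeffs) - 1
    def coefficient(self, n):
        return self.coeffs[n] if n < len(self.coeffs) else 0

class SparsePoly:
    """Only the nonzero terms, as {degree: coefficient}."""
    def __init__(self, terms):
        self.terms = dict(terms)
    def degree(self):
        return max(self.terms) if self.terms else 0
    def coefficient(self, n):
        return self.terms.get(n, 0)

def make_poly(terms, density_threshold=0.5):
    """Choose a representation from the shape of the data."""
    if not terms:
        return SparsePoly({})
    degree = max(terms)
    density = len(terms) / (degree + 1)
    if density >= density_threshold:
        return DensePoly([terms.get(i, 0) for i in range(degree + 1)])
    return SparsePoly(terms)

p = make_poly({0: 1, 1: 2, 2: 3})   # every degree present -> dense
q = make_poly({0: 1, 1000: 5})      # two terms, degree 1000 -> sparse
```

Both representations answer the same protocol, so callers never need to know which one the composition function chose.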
>>>>>>>>>>
>>>>>>>>>> Given that goal, the traditional barrier of "compiler" vs
>>>>>>>>>> "interpreter" does not seem useful. It is better to "live
>>>>>>>>>> within the compiler", which gives the ability to define new
>>>>>>>>>> things "on the fly".
>>>>>>>>>>
>>>>>>>>>> Of course, the SANE compiler is going to want an associated
>>>>>>>>>> proof of the functions you create along with the other parts
>>>>>>>>>> such as its category hierarchy and representation properties.
>>>>>>>>>>
>>>>>>>>>> There is no such thing as a simple job. :-)
>>>>>>>>>>
>>>>>>>>>> Tim
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On 2/18/21, Tim Daly <axiomcas@gmail.com> wrote:
>>>>>>>>>>> The Axiom SANE compiler / interpreter has a few design points.
>>>>>>>>>>>
>>>>>>>>>>> 1) It needs to mix interpreted and compiled code in the same
>>>>>>>>>>> function. SANE allows dynamic construction of code as well as
>>>>>>>>>>> dynamic type construction at runtime. Both of these can occur
>>>>>>>>>>> in a runtime object. So there is potentially a mixture of
>>>>>>>>>>> interpreted and compiled code.
>>>>>>>>>>>
>>>>>>>>>>> 2) It needs to perform type resolution at compile time without
>>>>>>>>>>> overhead where possible. Since this is not always possible
>>>>>>>>>>> there needs to be a "prefix thunk" that will perform the
>>>>>>>>>>> resolution. Trivially, for example, if we have a + function
>>>>>>>>>>> we need to type-resolve the arguments.
>>>>>>>>>>>
>>>>>>>>>>> However, if we can prove at compile time that the types are both
>>>>>>>>>>> bounded-NNI and the result is bounded-NNI (i.e. fixnum in lisp)
>>>>>>>>>>> then we can inline a call to + at runtime. If not, we might have
>>>>>>>>>>> + applied to NNI and POLY(FLOAT), which requires a thunk to
>>>>>>>>>>> resolve types. The thunk could even "specialize and compile"
>>>>>>>>>>> the code before executing it.
>>>>>>>>>>>
>>>>>>>>>>> It turns out that the Forth "threaded interpretive
>>>>>>>>>>> language" model provides an efficient and effective way to
>>>>>>>>>>> do this.[0]
>>>>>>>>>>> Type resolution can be "inserted" in intermediate thunks.
>>>>>>>>>>> The model also supports dynamic overloading and tail recursion.
>>>>>>>>>>>
>>>>>>>>>>> Combining high-level CLOS code with low-level threading gives an
>>>>>>>>>>> easy to understand and robust design.
>>>>>>>>>>>
>>>>>>>>>>> Tim
>>>>>>>>>>>
>>>>>>>>>>> [0] Loeliger, R.G. "Threaded Interpretive Languages" (1981)
>>>>>>>>>>> ISBN 0-07-038360-X
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On 2/5/21, Tim Daly <axiomcas@gmail.com> wrote:
>>>>>>>>>>>> I've worked hard to make Axiom depend on almost no other
>>>>>>>>>>>> tools so that it would not get caught by "code rot" of
>>>>>>>>>>>> libraries.
>>>>>>>>>>>>
>>>>>>>>>>>> However, I'm also trying to make the new SANE version much
>>>>>>>>>>>> easier to understand and debug. To that end I've been
>>>>>>>>>>>> experimenting with some ideas.
>>>>>>>>>>>>
>>>>>>>>>>>> It should be possible to view source code, of course. But the
>>>>>>>>>>>> source code is not the only, nor possibly the best,
>>>>>>>>>>>> representation of the ideas.
>>>>>>>>>>>> In particular, source code gets compiled into data structures.
>>>>>>>>>>>> In Axiom these data structures really are a graph of related
>>>>>>>>>>>> structures.
>>>>>>>>>>>>
>>>>>>>>>>>> For example, looking at the gcd function from NNI, there is the
>>>>>>>>>>>> representation of the gcd function itself. But there is also a
>>>>>>>>>>>> structure that is the REP (and, in the new system, is separate
>>>>>>>>>>>> from the domain).
>>>>>>>>>>>>
>>>>>>>>>>>> Further, there are associated specification and proof
>>>>>>>>>>>> structures. Even further, the domain inherits the category
>>>>>>>>>>>> structures, and from those it inherits logical axioms and
>>>>>>>>>>>> definitions through the proof structure.
>>>>>>>>>>>>
>>>>>>>>>>>> Clearly the gcd function is a node in a much larger graph
>>>>>>>>>>>> structure.
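As a rough illustration of that graph (in Python; the node kinds and link names below are invented for the sketch, not actual SANE structures), a compiled function can be modeled as one node with named links to its related structures, and a small walker can traverse them:

```python
# Illustrative sketch: a function as one node in a graph of related
# structures (domain, representation, specification, proof), plus a
# depth-first walker for inspecting the graph.

class Node:
    def __init__(self, kind, name, **links):
        self.kind, self.name, self.links = kind, name, links

nni      = Node("domain",   "NonNegativeInteger")
rep      = Node("rep",      "machine-fixnum")
spec     = Node("spec",     "gcd(a,b) divides a and b; any common divisor divides it")
proof    = Node("proof",    "gcd-correctness")
gcd_node = Node("function", "gcd", domain=nni, rep=rep, spec=spec, proof=proof)

def walk(node, depth=0, seen=None):
    """Depth-first walk yielding (depth, kind, name), cycle-safe."""
    seen = seen if seen is not None else set()
    if id(node) in seen:
        return
    seen.add(id(node))
    yield depth, node.kind, node.name
    for child in node.links.values():
        yield from walk(child, depth + 1, seen)

edges = list(walk(gcd_node))
```

A real version would also link categories and inherited axioms, which is exactly where a browser-based view becomes awkward and an embedded, dynamic viewer helps.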
>>>>>>>>>>>>
>>>>>>>>>>>> When trying to decide why code won't compile it would be useful
>>>>>>>>>>>> to be able to see and walk these structures. I've thought about
>>>>>>>>>>>> using the browser but browsers are too weak. Either everything
>>>>>>>>>>>> has to be "in a single tab to show the graph" or "the nodes of
>>>>>>>>>>>> the graph are in different tabs". Plus, constructing dynamic
>>>>>>>>>>>> graphs that change as the software changes (e.g. by loading a
>>>>>>>>>>>> new spad file or creating a new function) represents the huge
>>>>>>>>>>>> problem of keeping the browser "in sync with the Axiom
>>>>>>>>>>>> workspace". So something more dynamic and embedded is needed.
>>>>>>>>>>>>
>>>>>>>>>>>> Axiom source gets compiled into CLOS data structures. Each of
>>>>>>>>>>>> these new SANE structures has an associated surface
>>>>>>>>>>>> representation, so they can be presented in user-friendly form.
>>>>>>>>>>>>
>>>>>>>>>>>> Also, since Axiom is literate software, it should be possible
>>>>>>>>>>>> to look at the code in its literate form with the surrounding
>>>>>>>>>>>> explanation.
>>>>>>>>>>>>
>>>>>>>>>>>> Essentially we'd like to have the ability to "deep dive" into
>>>>>>>>>>>> the Axiom workspace, not only for debugging, but also for
>>>>>>>>>>>> understanding what functions are used, where they come from,
>>>>>>>>>>>> what they inherit, and how they are used in a computation.
>>>>>>>>>>>>
>>>>>>>>>>>> To that end I'm looking at using McClim, a lisp windowing
>>>>>>>>>>>> system. Since the McClim windows would be part of the lisp
>>>>>>>>>>>> image, they have access to display (and modify) the Axiom
>>>>>>>>>>>> workspace at all times.
>>>>>>>>>>>>
>>>>>>>>>>>> The only hesitation is that McClim uses quicklisp and drags in
>>>>>>>>>>>> a lot of other subsystems. It's all lisp, of course.
>>>>>>>>>>>>
>>>>>>>>>>>> These ideas aren't new. They were available on Symbolics
>>>>>>>>>>>> machines, a truly productive platform and one I sorely miss.
>>>>>>>>>>>>
>>>>>>>>>>>> Tim
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> On 1/19/21, Tim Daly <axiomcas@gmail.com> wrote:
>>>>>>>>>>>>> Also of interest is the talk
>>>>>>>>>>>>> "The Unreasonable Effectiveness of Dynamic Typing for
>>>>>>>>>>>>> Practical Programs"
>>>>>>>>>>>>> https://vimeo.com/74354480
>>>>>>>>>>>>> which questions whether static typing really has any benefit.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Tim
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> On 1/19/21, Tim Daly <axiomcas@gmail.com> wrote:
>>>>>>>>>>>>>> Peter Naur wrote an article of interest:
>>>>>>>>>>>>>> http://pages.cs.wisc.edu/~remzi/Naur.pdf
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> In particular, it mirrors my notion that Axiom needs
>>>>>>>>>>>>>> to embrace literate programming so that the "theory
>>>>>>>>>>>>>> of the problem" is presented as well as the "theory
>>>>>>>>>>>>>> of the solution". I quote the introduction:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> This article is, to my mind, the most accurate account
>>>>>>>>>>>>>> of what goes on in designing and coding a program.
>>>>>>>>>>>>>> I refer to it regularly when discussing how much
>>>>>>>>>>>>>> documentation to create, how to pass along tacit
>>>>>>>>>>>>>> knowledge, and the value of the XP's metaphor-setting
>>>>>>>>>>>>>> exercise. It also provides a way to examine a methodology's
>>>>>>>>>>>>>> economic structure.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> In the article, which follows, note that the quality of the
>>>>>>>>>>>>>> designing programmer's work is related to the quality of
>>>>>>>>>>>>>> the match between his theory of the problem and his theory
>>>>>>>>>>>>>> of the solution. Note that the quality of a later
>>>>>>>>>>>>>> programmer's work is related to the match between his
>>>>>>>>>>>>>> theories and the previous programmer's theories.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Using Naur's ideas, the designer's job is not to pass along
>>>>>>>>>>>>>> "the design" but to pass along "the theories" driving the
>>>>>>>>>>>>>> design. The latter goal is more useful and more appropriate.
>>>>>>>>>>>>>> It also highlights that knowledge of the theory is tacit
>>>>>>>>>>>>>> in the owning, and so passing along the theory requires
>>>>>>>>>>>>>> passing along both explicit and tacit knowledge.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Tim
>>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>
--00000000000091c88405dadc582b--

From MAILER-DAEMON Sun Mar 27 03:10:24 2022
Message-Id: <2CFE9CAF-DD64-4222-9B83-760FB4765FAB@opuoq.com>
Cc: axiom-dev, Tim Daly
From: bondo
Subject: Re: Axiom musings...
Date: Sun, 27 Mar 2022 03:10:14 -0400
To: Tim Daly

The setup of a spreadsheet is similar to the design of some prototype-based
programming languages. I have recently been learning about the Self
programming language, a descendant of Smalltalk, which uses prototypes
rather than classes. This creates a more dynamic environment.

Like Smalltalk, development is done using a virtual machine with a
programming environment. The objects in the Self environment can be
configured using direct manipulation to define behavior and inheritance.

https://selflanguage.org/

It should be possible to develop a similar prototype system based in lisp,
if it has not been done already.

On Mar 23, 2022, at 1:53, Tim Daly wrote:
> I have a deep interest in self-modifying programs. These are
> trivial to create in lisp. The question is how to structure a computer
> algebra program so it could be dynamically modified but still
> well-structured. This is especially important since the user can
> create new logical types at runtime.
>
> One innovation, which I have never seen anywhere, is to structure
> the program in a spreadsheet fashion. Spreadsheet cells can
> reference other spreadsheet cells.
Spreadsheets have a well-defined
> evaluation method. This is equivalent to a form of object-oriented
> programming where each cell is an object and has a well-defined
> inheritance hierarchy through other cells. Simple manipulation
> allows insertion, deletion, or modification of these chains at any time.
>
> So by structuring a computer algebra program like a spreadsheet
> I am able to dynamically adjust a program to fit the problem to be
> solved, track which cells are needed, and optimize the program.
>
> Spreadsheets do this naturally but I'm unaware of any non-spreadsheet
> program that relies on that organization.
>
> Tim
>
>
> On Sun, Mar 13, 2022 at 4:01 AM Tim Daly wrote:
> Axiom has an awkward 'attributes' category structure.
>
> In the SANE version it is clear that these attributes are much
> closer to logic 'definitions'. As a result one of the changes
> is to create a new 'category'-type structure for definitions.
> There will be a new keyword, like the category keyword,
> 'definition'.
>
> Tim
>
>
> On Fri, Mar 11, 2022 at 9:46 AM Tim Daly wrote:
> The github lockout continues...
>
> I'm spending some time adding examples to source code.
>
> Any function can have ++X comments added. These will
> appear as examples when the function is shown with )display. For example,
> in PermutationGroup there is a function 'strongGenerators'
> defined as:
>
> strongGenerators : % -> L PERM S
> ++ strongGenerators(gp) returns strong generators for
> ++ the group gp.
> ++
> ++X S:List(Integer) := [1,2,3,4]
> ++X G := symmetricGroup(S)
> ++X strongGenerators(G)
>
> Later, in the interpreter we see:
>
> )d op strongGenerators
>
> There is one exposed function called strongGenerators :
> [1] PermutationGroup(D2) -> List(Permutation(D2)) from
> PermutationGroup(D2)
> if D2 has SETCAT
>
> Examples of strongGenerators from PermutationGroup
>
> S:List(Integer) := [1,2,3,4]
> G := symmetricGroup(S)
> strongGenerators(G)
>
> This will show a working example for functions that the
> user can copy and use. It is especially useful to show how
> to construct working arguments.
>
> These "example" functions are run at build time when
> the make command looks like
> make TESTSET=alltests
>
> I hope to add this documentation to all Axiom functions.
>
> In addition, the plan is to add these function calls to the
> usual test documentation. That means that all of these examples
> will be run and show their output in the final distribution
> (mnt/ubuntu/doc/src/input/*.dvi files) so the user can view
> the expected output.
>
> Tim
>
> On Fri, Feb 25, 2022 at 6:05 PM Tim Daly wrote:
> It turns out that creating SPAD-looking output is trivial
> in Common Lisp. Each class can have a custom print
> routine so signatures and ++ comments can each be
> printed with their own format.
>
> To ensure that I maintain compatibility I'll be printing
> the categories and domains so they look like SPAD code,
> at least until I get the proof technology integrated. I will
> probably specialize the proof printers to look like the
> original LEAN proof syntax.
>
> Internally, however, it will all be Common Lisp.
>
> Common Lisp makes so many desirable features so easy.
> It is possible to trace dynamically at any level. One could
> even write a trace that showed how Axiom arrived at the
> solution.
Any domain could have special case output syntax
> without affecting any other domain so one could write a
> tree-like output for proofs. Using greek characters is trivial
> so the input and output notation is more mathematical.
>
> Tim
>
> On Thu, Feb 24, 2022 at 10:24 AM Tim Daly wrote:
> Axiom's SPAD code compiles to Common Lisp.
> The AKCL version of Common Lisp compiles to C.
> Three languages and 2 compilers is a lot to maintain.
> Further, there are very few people able to write SPAD
> and even fewer people able to maintain it.
>
> I've decided that the SANE version of Axiom will be
> implemented in pure Common Lisp. I've outlined Axiom's
> category / type hierarchy in the Common Lisp Object
> System (CLOS). I am now experimenting with re-writing
> the functions into Common Lisp.
>
> This will have several long-term effects. It simplifies
> the implementation issues. SPAD code blocks a lot of
> actions and optimizations that Common Lisp provides.
> The Common Lisp language has many more people
> who can read, modify, and maintain code. It provides for
> interoperability with other Common Lisp projects with
> no effort. Common Lisp is an international standard
> which ensures that the code will continue to run.
>
> The input / output mathematics will remain the same.
> Indeed, with the new generalizations for first-class
> dependent types it will be more general.
>
> This is a big change, similar to eliminating BOOT code
> and moving to Literate Programming. This will provide a
> better platform for future research work. Current research
> is focused on merging Axiom's computer algebra mathematics
> with Lean's proof language. The goal is to create a system for
> "computational mathematics".
>
> Research is the whole point of Axiom.
>
> Tim
>
> On Sat, Jan 22, 2022 at 9:16 PM Tim Daly wrote:
> I can't stress enough how important it is to listen to Hamming's talk
> https://www.youtube.com/watch?v=a1zDuOPkMSw
>
> Axiom will begin to die the day I stop working on it.
>
> However, proving Axiom correct "down to the metal" is fundamental.
> It will merge computer algebra and logic, spawning years of new
> research.
>
> Work on fundamental problems.
>
> Tim
>
> On Thu, Dec 30, 2021 at 6:46 PM Tim Daly wrote:
> One of the interesting questions when obtaining a result
> is "what functions were called and what was their return value?"
> Otherwise known as the "show your work" idea.
>
> There is an idea called the "writer monad" [0], usually
> implemented to facilitate logging. We can exploit this
> idea to provide "show your work" capability. Each function
> can provide this information inside the monad enabling the
> question to be answered at any time.
>
> For those unfamiliar with the monad idea, the best explanation
> I've found is this video [1].
>
> Tim
>
> [0] Deriving the writer monad from first principles
> https://williamyaoh.com/posts/2020-07-26-deriving-writer-monad.html
>
> [1] The Absolute Best Intro to Monads for Software Engineers
> https://www.youtube.com/watch?v=C2w45qRc3aU
>
> On Mon, Dec 13, 2021 at 12:30 AM Tim Daly wrote:
> ...(snip)...
>
> Common Lisp has an "open compiler". That allows the ability
> to deeply modify compiler behavior using compiler macros
> and macros in general. CLOS takes advantage of this to add
> typed behavior into the compiler in a way that ALLOWS strict
> typing such as found in constructive type theory and ML.
> Judgments, ala Crary, are front-and-center.
>
> Whether you USE the discipline afforded is the real question.
>
> Indeed, the Axiom research struggle is essentially one of how
> to have a disciplined use of first-class dependent types.
The
> struggle raises issues of, for example, compiling a dependent
> type whose argument is recursive in the compiled type. Since
> the new type is first-class it can be constructed at what you
> improperly call "run-time". However, it appears that the recursive
> type may have to call the compiler at each recursion to generate
> the next step since in some cases it cannot generate "closed code".
>
> I am embedding proofs (in LEAN language) into the type
> hierarchy so that theorems, which depend on the type hierarchy,
> are correctly inherited. The compiler has to check the proofs of
> functions at compile time using these. Hacking up nonsense just
> won't cut it. Think of the problem of embedding LEAN proofs in
> ML or ML in LEAN.
> (Actually, Jeremy Avigad might find that research interesting.)
>
> So Matrix(3,3,Float) has inverses (assuming Float is a
> field (cough)). The type inherits this theorem and proofs of
> functions can use this. But Matrix(3,4,Integer) does not have
> inverses so the proofs cannot use this. The type hierarchy has
> to ensure that the proper theorems get inherited.
>
> Making proof technology work at compile time is hard.
> (Worse yet, LEAN is a moving target. Sigh.)
>
> On Thu, Nov 25, 2021 at 9:43 AM Tim Daly wrote:
>
> As you know I've been re-architecting Axiom to use first class
> dependent types and proving the algorithms correct. For example,
> the GCD of natural numbers or the GCD of polynomials.
>
> The idea involves "boxing up" the proof with the algorithm (aka
> proof carrying code) in the ELF file (under a crypto hash so it
> can't be changed).
>
> Once the code is running on the CPU, the proof is run in parallel
> on the field programmable gate array (FPGA). Intel data center
> servers have CPUs with built-in FPGAs these days.
>
> There is a bit of a disconnect, though. The GCD code is compiled
> machine code but the proof is LEAN-level.
>
> What would be ideal is if the compiler not only compiled the GCD
> code to machine code, it also compiled the proof to "machine code".
> That is, for each machine instruction, the FPGA proof checker
> would ensure that the proof was not violated at the individual
> instruction level.
>
> What does it mean to "compile a proof to the machine code level"?
>
> The Milawa effort (Myre14.pdf) does incremental proofs in layers.
> To quote from the article [0]:
>
> We begin with a simple proof checker, call it A, which is short
> enough to verify by the ``social process'' of mathematics -- and
> more recently with a theorem prover for a more expressive logic.
>
> We then develop a series of increasingly powerful proof checkers,
> call them B, C, D, and so on. We show each of these programs only
> accepts the same formulas as A, using A to verify B, and B to verify
> C, and so on. Then, since we trust A, and A says B is trustworthy, we
> can trust B. Then, since we trust B, and B says C is trustworthy, we
> can trust C.
>
> This gives a technique for "compiling the proof" down to the machine
> code level. Ideally, the compiler would have judgments for each step of
> the compilation so that each compile step has a justification. I don't
> know of any compiler that does this yet. (References welcome).
>
> At the machine code level, there are techniques that would allow
> the FPGA proof to "step in sequence" with the executing code.
> Some work has been done on using "Hoare Logic for Realistically
> Modelled Machine Code" (paper attached, Myre07a.pdf), and
> "Decompilation into Logic -- Improved" (Myre12a.pdf).
>
> So the game is to construct a GCD over some type (Nats, Polys, etc.
> Axiom has 22), compile the dependent type GCD to machine code.
> In parallel, the proof of the code is compiled to machine code. The
> pair is sent to the CPU/FPGA and, while the algorithm runs, the FPGA
> ensures the proof is not violated, instruction by instruction.
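The layered-trust scheme in the quoted passage can be caricatured in a few lines of Python. The "checkers" below are toys (the real Milawa checkers accept logical formulas, not number sequences); the point is only the trust-transfer step, where the already-trusted checker certifies the new one:

```python
# Toy sketch of Milawa-style trust transfer: trusted checker A certifies
# a faster/more powerful checker B before we start relying on B.

def checker_a(proof):
    """Trusted base checker: accepts a proof only if every step follows
    from the previous one by adding 1 (a stand-in inference rule)."""
    return all(b == a + 1 for a, b in zip(proof, proof[1:]))

def checker_b(proof):
    """A 'more powerful' checker with the same acceptance criterion,
    phrased as a closed form instead of step by step."""
    if not proof:
        return True
    return proof == list(range(proof[0], proof[0] + len(proof)))

def certify(new_checker, trusted_checker, test_proofs):
    """Trust transfer: accept new_checker only if it agrees with the
    trusted checker on the certification suite."""
    return all(new_checker(p) == trusted_checker(p) for p in test_proofs)

suite = [[1, 2, 3], [1, 3], [5, 6, 7, 8], [2, 2]]
ok = certify(checker_b, checker_a, suite)  # since we trust A, we may trust B
```

In Milawa the certification is itself a proof checked by the lower layer, not a test suite; the sketch only shows the direction of the trust chain A -> B -> C.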
>
> (I'm ignoring machine architecture issues such as pipelining, out-of-order,
> branch prediction, and other machine-level things to ponder. I'm looking
> at the RISC-V Verilog details by various people to understand better but
> it is still a "misty fog" for me.)
>
> The result is proven code "down to the metal".
>
> Tim
>
> [0] https://www.cs.utexas.edu/users/moore/acl2/manuals/current/manual/index-seo.php/ACL2____MILAWA
>
> On Sat, Nov 13, 2021 at 5:28 PM Tim Daly wrote:
> Full support for general, first-class dependent types requires
> some changes to the Axiom design. That implies some language
> design questions.
>
> Given that mathematics is such a general subject with a lot of
> "local" notation and ideas (witness logical judgment notation)
> careful thought is needed to design a language that is able to
> handle a wide range.
>
> Normally language design is a two-level process. The language
> designer creates a language and then an implementation. Various
> design choices affect the final language.
>
> There is "The Metaobject Protocol" (MOP)
> https://www.amazon.com/Art-Metaobject-Protocol-Gregor-Kiczales/dp/0262610744
> which encourages a three-level process. The language designer
> works at a Metalevel to design a family of languages, then the
> language specializations, then the implementation. A MOP design
> allows the language user to optimize the language to their problem.
>
> A simple paper on the subject is "Metaobject Protocols"
> https://users.cs.duke.edu/~vahdat/ps/mop.pdf
>
> Tim
>
> On Mon, Oct 25, 2021 at 7:42 PM Tim Daly wrote:
> I have a separate thread of research on Self-Replicating Systems
> (ref: Kinematics of Self Reproducing Machines
> http://www.molecularassembler.com/KSRM.htm)
>
> which led to watching "Strange Dreams of Stranger Loops" by Will Byrd
> https://www.youtube.com/watch?v=AffW-7ika0E
>
> Will referenced a PhD Thesis by Jon Doyle
> "A Model for Deliberation, Action, and Introspection"
>
> I also read the thesis by J.C.G. Sturdy
> "A Lisp through the Looking Glass"
>
> Self-replication requires the ability to manipulate your own
> representation in such a way that changes to that representation
> will change behavior.
>
> This leads to two thoughts in the SANE research.
>
> First, "Declarative Representation". That is, most of the things
> about the representation should be declarative rather than
> procedural. Applying this idea as much as possible makes it
> easier to understand and manipulate.
>
> Second, "Explicit Call Stack". Function calls form an implicit
> call stack.
This can usually be displayed in a running lisp system.
> However, having the call stack explicitly available would mean
> that a system could "introspect" at the first-class level.
>
> These two ideas would make it easy, for example, to let the
> system "show the work". One of the normal complaints is that
> a system presents an answer but there is no way to know how
> that answer was derived. These two ideas make it possible to
> understand, display, and even post-answer manipulate
> the intermediate steps.
>
> Having the intermediate steps also allows proofs to be
> inserted in a step-by-step fashion. This aids the effort to
> have proofs run in parallel with computation at the hardware
> level.
>
> Tim
>
> On Thu, Oct 21, 2021 at 9:50 AM Tim Daly wrote:
> So the current struggle involves the categories in Axiom.
>
> The categories and domains constructed using categories
> are dependent types. When are dependent types "equal"?
> Well, hummmm, that depends on the arguments to the
> constructor.
>
> But in order to decide(?) equality we have to evaluate
> the arguments (which themselves can be dependent types).
> Indeed, we may, and in general, we must evaluate the
> arguments at compile time (well, "construction time" as
> there isn't really a compiler / interpreter separation anymore.)
>
> That raises the question of what "equality" means. This
> is not simply a "set equality" relation. It falls into the
> infinite-groupoid of homotopy type theory. In general
> it appears that deciding category / domain equivalence
> might force us to climb the type hierarchy.
>
> Beyond that, there is the question of "which proof"
> applies to the resulting object. Proofs depend on their
> assumptions which might be different for different
> constructions. As yet I have no clue how to "index"
> proofs based on their assumptions, nor how to
> connect these assumptions to the groupoid structure.
>
> My brain hurts.
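A toy Python sketch of why deciding dependent-type equality forces evaluating constructor arguments (names invented for illustration; real domain equality is far subtler than this, as the groupoid discussion above suggests):

```python
# Illustrative sketch: two "domains" built with different argument
# expressions should compare equal only after the arguments are
# evaluated ("construction time" evaluation).

class Domain:
    def __init__(self, constructor, *args):
        self.constructor, self.args = constructor, args
    def __eq__(self, other):
        if not isinstance(other, Domain):
            return NotImplemented
        # Syntactic comparison of unevaluated arguments would wrongly
        # distinguish Matrix(2+1, 3, Int) from Matrix(3, 3, Int).
        return (self.constructor == other.constructor
                and tuple(evaluate(a) for a in self.args)
                    == tuple(evaluate(a) for a in other.args))

def evaluate(arg):
    """Stand-in for construction-time evaluation: reduce expressions
    (here, thunks) to values; pass plain values through unchanged."""
    return arg() if callable(arg) else arg

m1 = Domain("Matrix", 3, 3, "Integer")
m2 = Domain("Matrix", lambda: 2 + 1, 3, "Integer")  # argument as a thunk
```

Here `m1 == m2` holds only because the thunk was evaluated first; a set-like syntactic equality would miss it, and the arguments themselves may in turn be dependent types needing the same treatment.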
>
> Tim
>
> On Mon, Oct 18, 2021 at 2:00 AM Tim Daly wrote:
> "Birthing Computational Mathematics"
>
> The Axiom SANE project is difficult at a very fundamental
> level. The title "SANE" was chosen due to the various
> words found in a thesaurus... "rational", "coherent",
> "judicious" and "sound".
>
> These are very high level, amorphous ideas. But so is
> the design of SANE. Breaking away from tradition in
> computer algebra, type theory, and proof assistants
> is very difficult. Ideas tend to fall into standard jargon
> which limits both the frame of thinking (e.g. dependent
> types) and the content (e.g. notation).
>
> Questioning both frame and content is very difficult.
> It is hard to even recognize when they are accepted
> "by default" rather than "by choice". What does the idea
> "power tools" mean in a primitive, hand labor culture?
>
> Christopher Alexander [0] addresses this problem in
> a lot of his writing. Specifically, in his book "Notes on
> the Synthesis of Form", in his chapter 5 "The Selfconscious
> Process", he addresses this problem directly. This is a
> "must read" book.
>
> Unlike building design and construction, however, there
> are almost no constraints to use as guides. Alexander
> quotes Plato's Phaedrus:
>
> "First, the taking in of scattered particulars under
> one Idea, so that everyone understands what is being
> talked about ... Second, the separation of the Idea
> into parts, by dividing it at the joints, as nature
> directs, not breaking any limb in half as a bad
> carver might."
>
> Lisp, which has been called "clay for the mind" can
> build virtually anything that can be thought. The
> "joints" are also "of one's choosing" so one is
> both carver and "nature".
>
> Clearly the problem is no longer "the tools".
> *I* am the problem constraining the solution.
> Birthing this "new thing" is slow, difficult, and
> uncertain at best.
>
> Tim
>
> [0] Alexander, Christopher "Notes on the Synthesis of Form" Harvard University Press 1964 ISBN 0-674-62751-2
>
> On Sun, Oct 10, 2021 at 4:40 PM Tim Daly wrote:
> Re: writing a paper... I'm not connected to Academia so anything I'd write would never make it into print.
>
> "Language-level parsing" is still a long way off. The talk by Guy Steele [2] highlights some of the problems we currently face using mathematical metanotation.
>
> For example, a professor I know at CCNY (City College of New York) didn't understand Platzer's "funny fraction notation" (proof judgements) despite being an expert in Platzer's differential equations area.
>
> Notation matters and is not widely shared.
>
> I spoke to Professor Black (in LTI) about using natural language in the limited task of human-robot cooperation in changing a car tire. I looked at the current machine learning efforts. They are nowhere near anything but toy systems, taking too long to train and too fragile.
>
> Instead I ended up using a combination of AIML [3] (Artificial Intelligence Markup Language), the ALICE Chatbot [4], Forgy's OPS5 rule-based program [5], and Fahlman's SCONE [6] knowledge base. It was much less fragile in my limited domain problem.
>
> I have no idea how to extend any system to deal with even undergraduate mathematics parsing.
>
> Nor do I have any idea how I would embed LEAN knowledge into a SCONE database, although I think the combination would be useful and interesting.
>
> I do believe that, in the limited area of computational mathematics, we are capable of building robust, proven systems that are quite general and extensible.
> As you might have guessed, I've given it a lot of thought over the years :-)
>
> A mathematical language seems to need >6 components:
>
> 1) We need some sort of a specification language, possibly somewhat 'propositional', that introduces the assumptions you mentioned (ref. your discussion of numbers being abstract and ref. your discussion of relevant choice of assumptions related to a problem).
>
> This is starting to show up in the hardware area (e.g. Lamport's TLC [0]).
>
> Of course, specifications relate to proving programs and, as you recall, I got a cold reception from the LEAN community about using LEAN for program proofs.
>
> 2) We need "scaffolding". That is, we need a theory that can be reduced to some implementable form that provides concept-level structure.
>
> Axiom uses group theory for this. Axiom's "category" structure has "Category" things like Ring. Claiming to be a Ring brings in a lot of "Signatures" of functions you have to implement to properly be a Ring.
>
> Scaffolding provides a firm mathematical basis for design. It provides a link between the concept of a Ring and the expectations you can assume when you claim your "Domain" "is a Ring". Category theory might provide similar structural scaffolding (eventually... I'm still working on that thought garden).
>
> LEAN ought to have a textbook (or several) that structures the world around some form of mathematics. It isn't sufficient to say "undergraduate math" is the goal. There needs to be some coherent organization so people can bring ideas like Group Theory to the organization. Which brings me to ...
>
> 3) We need "spreading". That is, we need to take the various definitions and theorems in LEAN and place them in their proper place in the scaffold.
>
> For example, the Ring category needs the definitions and theorems for a Ring included in the code for the Ring category.
> Similarly, the Commutative category needs the definitions and theorems that underlie "commutative" included in the code.
>
> That way, when you claim to be a "Commutative Ring" you get both sets of definitions and theorems. That is, the inheritance mechanism will collect up all of the definitions and theorems and make them available for proofs.
>
> I am looking at LEAN's definitions and theorems with an eye to "spreading" them into the group scaffold of Axiom.
>
> 4) We need "carriers" (Axiom calls them representations, aka "REP"). REPs allow data structures to be defined independent of the implementation.
>
> For example, Axiom can construct Polynomials that have their coefficients in various forms of representation. You can define "dense" (all coefficients in a list), "sparse" (only non-zero coefficients), "recursive", etc.
>
> A "dense polynomial" and a "sparse polynomial" work exactly the same way as far as the user is concerned. They both implement the same set of functions. There is only a difference of representation for efficiency, and this only affects the implementation of the functions, not their use.
>
> Axiom "got this wrong" because it didn't sufficiently separate the REP from the "Domain". I plan to fix this.
>
> LEAN ought to have a "data structures" subtree that has all of the definitions and axioms for all of the existing data structures (e.g. Red-Black trees). This would be a good undergraduate project.
>
> 5) We need "Domains" (in Axiom speak). That is, we need a box that holds all of the functions that implement a "Domain". For example, a "Polynomial Domain" would hold all of the functions for manipulating polynomials (e.g. polynomial multiplication). The "Domain" box is a dependent type that:
>
> A) has an argument list of "Categories" that this "Domain" box inherits.
> Thus, the "Integer Domain" inherits the definitions and axioms from "Commutative".
>
> Functions in the "Domain" box can now assume and use the properties of being commutative. Proofs of functions in this domain can use the definitions and proofs about being commutative.
>
> B) contains an argument that specifies the "REP" (aka, the carrier). That way you get all of the functions associated with the data structure available for use in the implementation.
>
> Functions in the Domain box can use all of the definitions and axioms about the representation (e.g. NonNegativeIntegers are never negative).
>
> C) contains local "spread" definitions and axioms that can be used in function proofs.
>
> For example, a "Square Matrix" domain would have local axioms that state that the matrix is always square. Thus, functions in that box could use these additional definitions and axioms in function proofs.
>
> D) contains local state. A "Square Matrix" domain would be constructed as a dependent type that specifies the size of the square (e.g. a 2x2 matrix would have '2' as a dependent parameter).
>
> E) contains implementations of inherited functions.
>
> A "Category" could have a signature for a GCD function and the "Category" could have a default implementation. However, the "Domain" could have a locally more efficient implementation which overrides the inherited implementation.
>
> Axiom has about 20 GCD implementations that differ locally from the default in the category. They use properties known locally to be more efficient.
>
> F) contains local function signatures.
>
> A "Domain" gives the user more and more unique functions. The signatures have associated pre- and post-conditions that can be used as assumptions in the function proofs.
>
> Some of the user-available functions are only visible if the dependent type would allow them to exist.
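The Category-default-plus-Domain-override idea in points A) and E) can be caricatured in Python. This is purely illustrative (Axiom expresses this in SPAD with dependent types); the class names here are invented:

```python
# Sketch: a "Category" states the signature and a default implementation;
# a "Domain" inherits it and may override with a locally better version.

class GcdCategory:
    """Category level: Euclidean gcd, stated once for every domain
    that claims this category."""
    def gcd(self, a, b):
        while b != 0:
            a, b = b, a % b
        return a

class IntegerDomain(GcdCategory):
    """Domain level: overrides the inherited default with an
    implementation that exploits properties of this domain."""
    def gcd(self, a, b):
        import math
        return math.gcd(a, b)  # stands in for a specialized algorithm

# Both answer the same question; only efficiency differs.
print(GcdCategory().gcd(48, 18))
print(IntegerDomain().gcd(48, 18))
```

The point is the contract: the user calls gcd the same way either way, and the inheritance mechanism decides which body runs.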
> For example, a general Matrix domain would have fewer user functions than a Square Matrix domain.
>
> In addition, local "helper" functions need their own signatures that are not user visible.
>
> G) the function implementation for each signature.
>
> This is obviously where all the magic happens.
>
> H) the proof of each function.
>
> This is where I'm using LEAN.
>
> Every function has a proof. That proof can use all of the definitions and axioms inherited from the "Category", "Representation", the "Domain Local", and the signature pre- and post-conditions.
>
> I) literature links. Algorithms must contain a link to at least one literature reference. Of course, since everything I do is a Literate Program this is obviously required. Knuth said so :-)
>
> LEAN ought to have "books" or "pamphlets" that bring together all of this information for a domain such as Square Matrices. That way a user can find all of the related ideas, available functions, and their corresponding proofs in one place.
>
> 6) User-level presentation.
>
> This is where the systems can differ significantly. Axiom and LEAN both have GCD but they use it for different purposes.
>
> I'm trying to connect LEAN's GCD and Axiom's GCD so there is a "computational mathematics" idea that allows the user to connect proofs and implementations.
>
> 7) Trust
>
> Unlike everything else, computational mathematics can have proven code that gives various guarantees.
>
> I have been working on this aspect for a while. I refer to it as trust "down to the metal". The idea is that a proof of the GCD function and the implementation of the GCD function get packaged into the ELF format (proof-carrying code). When the GCD algorithm executes on the CPU, the GCD proof is run through the LEAN proof checker on an FPGA in parallel.
>
> (I just recently got a PYNQ Xilinx board [1] with a CPU and FPGA together.
> I'm trying to implement the LEAN proof checker on the FPGA.)
>
> We are on the cusp of a revolution in computational mathematics. But the two pillars (proof and computer algebra) need to get to know each other.
>
> Tim
>
> [0] Lamport, Leslie "Chapter on TLA+" in "Software Specification Methods" https://www.springer.com/gp/book/9781852333539 (I no longer have CMU library access or I'd send you the book PDF)
>
> [1] https://www.tul.com.tw/productspynq-z2.html
>
> [2] https://www.youtube.com/watch?v=dCuZkaaou0Q
>
> [3] "ARTIFICIAL INTELLIGENCE MARKUP LANGUAGE" https://arxiv.org/pdf/1307.3091.pdf
>
> [4] ALICE Chatbot http://www.scielo.org.mx/pdf/cys/v19n4/1405-5546-cys-19-04-00625.pdf
>
> [5] OPS5 User Manual https://kilthub.cmu.edu/articles/journal_contribution/OPS5_user_s_manual/6608090/1
>
> [6] Scott Fahlman "SCONE" http://www.cs.cmu.edu/~sef/scone/
>
> On 9/27/21, Tim Daly wrote:
> > I have tried to maintain a list of names of people who have helped Axiom, going all the way back to the pre-Scratchpad days. The names are listed at the beginning of each book. I also maintain a bibliography of publications I've read or that have had an indirect influence on Axiom.
> >
> > Credit is "the coin of the realm". It is easy to share and wrong to ignore. It is especially damaging to those in Academia who are affected by credit and citations in publications.
> >
> > Apparently I'm not the only person who feels that way. The ACM Turing award seems to have ignored a lot of work:
> >
> > Scientific Integrity, the 2021 Turing Lecture, and the 2018 Turing Award for Deep Learning
> > https://people.idsia.ch/~juergen/scientific-integrity-turing-award-deep-learning.html
> >
> > I worked on an AI problem at IBM Research called Ketazolam (https://en.wikipedia.org/wiki/Ketazolam). The idea was to recognize and associate 3D chemical drawings with their drug counterparts.
> > I used Rumelhart and McClelland's books. These books contained quite a few ideas that seem to be "new and innovative" among the machine learning crowd... but the books are from 1987. I don't believe I've seen these books mentioned in any recent bibliography.
> > https://mitpress.mit.edu/books/parallel-distributed-processing-volume-1
> >
> > On 9/27/21, Tim Daly wrote:
> >> Greg Wilson asked "How Reliable is Scientific Software?"
> >> https://neverworkintheory.org/2021/09/25/how-reliable-is-scientific-software.html
> >>
> >> which is a really interesting read. For example:
> >>
> >> [Hatton1994] is now a quarter of a century old, but its conclusions are still fresh. The authors fed the same data into nine commercial geophysical software packages and compared the results; they found that "numerical disagreement grows at around the rate of 1% in average absolute difference per 4000 lines of implemented code, and, even worse, the nature of the disagreement is nonrandom" (i.e., the authors of different packages make similar mistakes).
> >>
> >> On 9/26/21, Tim Daly wrote:
> >>> I should note that the latest board I've just unboxed (a PYNQ-Z2) is a Zynq Z-7020 chip from Xilinx (AMD).
> >>>
> >>> What makes it interesting is that it contains 2 hard processor cores and an FPGA, connected by 9 paths for communication. The processors can be run independently so there is the possibility of a parallel version of some Axiom algorithms (assuming I had the time, which I don't).
> >>>
> >>> Previously, either the hard (physical) processor was separate from the FPGA with minimal communication, or the soft-core processor had to be created in the FPGA and was much slower.
> >>>
> >>> Now the two have been combined in a single chip.
> >>> That means that my effort to run a proof checker on the FPGA and the algorithm on the CPU just got to the point where coordination is much easier.
> >>>
> >>> Now all I have to do is figure out how to program this beast.
> >>>
> >>> There is no such thing as a simple job.
> >>>
> >>> Tim
> >>>
> >>> On 9/26/21, Tim Daly wrote:
> >>>> I'm familiar with most of the traditional approaches like Theorema. The bibliography contains most of the more interesting sources. [0]
> >>>>
> >>>> There is a difference between traditional approaches to connecting computer algebra and proofs and my approach.
> >>>>
> >>>> Proving an algorithm, like the GCD, in Axiom is hard. There are many GCDs (e.g. NNI vs POLY) and there are theorems and proofs passed at runtime in the arguments of the newly constructed domains. This involves a lot of dependent type theory and issues of compile-time / runtime argument evaluation. The issues that arise are difficult and still being debated in the type theory community.
> >>>>
> >>>> I am putting the definitions, theorems, and proofs (DTP) directly into the category/domain hierarchy. Each category will have the DTP specific to it. That way a commutative domain will inherit a commutative theorem and a non-commutative domain will not.
> >>>>
> >>>> Each domain will have additional DTPs associated with the domain (e.g. NNI vs Integer) as well as any DTPs it inherits from the category hierarchy. Functions in the domain will have associated DTPs.
> >>>>
> >>>> A function to be proven will then inherit all of the relevant DTPs. The proof will be attached to the function and both will be sent to the hardware (proof-carrying code).
> >>>>
> >>>> The proof checker, running on a field-programmable gate array (FPGA), will check the proof at runtime in parallel with the algorithm running on the CPU (aka "trust down to the metal"). (Note that Intel and AMD have built combined CPU/FPGA chips, currently only available in the cloud.)
> >>>>
> >>>> I am (slowly) making progress on the research.
> >>>>
> >>>> I have the hardware and nearly have the proof checker from LEAN running on my FPGA.
> >>>>
> >>>> I'm in the process of spreading the DTPs from LEAN across the category/domain hierarchy.
> >>>>
> >>>> The current Axiom build extracts all of the functions but does not yet have the DTPs.
> >>>>
> >>>> I have to restructure the system, including the compiler and interpreter, to parse and inherit the DTPs. I have some of that code, but only some of it has been pushed to the repository (volume 15), and that is rather trivial, out of date, and incomplete.
> >>>>
> >>>> I'm clearly not smart enough to prove the Risch algorithm and its associated machinery but the needed definitions and theorems will be available to someone who wants to try.
> >>>>
> >>>> [0] https://github.com/daly/PDFS/blob/master/bookvolbib.pdf
> >>>>
> >>>> On 8/19/21, Tim Daly wrote:
> >>>>> =========================================
> >>>>>
> >>>>> REVIEW (Axiom on WSL2 Windows)
> >>>>>
> >>>>> So the steps to run Axiom from a Windows desktop:
> >>>>>
> >>>>> 1 Windows) install XMing on Windows for an X11 server
> >>>>>
> >>>>> http://www.straightrunning.com/XmingNotes/
> >>>>>
> >>>>> 2 WSL2) Install Axiom in WSL2
> >>>>>
> >>>>> sudo apt install axiom
> >>>>>
> >>>>> 3 WSL2) modify /usr/bin/axiom to fix the bug
> >>>>> (someone changed the axiom startup script.
> >>>>> It won't work on WSL2.
> >>>>> I don't know who changed it or how to get it fixed).
> >>>>>
> >>>>> sudo emacs /usr/bin/axiom
> >>>>>
> >>>>> (split the line into 3 and add quote marks)
> >>>>>
> >>>>> export SPADDEFAULT=/usr/local/axiom/mnt/linux
> >>>>> export AXIOM=/usr/lib/axiom-20170501
> >>>>> export "PATH=/usr/lib/axiom-20170501/bin:$PATH"
> >>>>>
> >>>>> 4 WSL2) create a .axiom.input file to include startup cmds:
> >>>>>
> >>>>> emacs .axiom.input
> >>>>>
> >>>>> )cd "/mnt/c/yourpath"
> >>>>> )sys pwd
> >>>>>
> >>>>> 5 WSL2) create a "myaxiom" command that sets the DISPLAY variable and starts axiom
> >>>>>
> >>>>> emacs myaxiom
> >>>>>
> >>>>> #! /bin/bash
> >>>>> export DISPLAY=:0.0
> >>>>> axiom
> >>>>>
> >>>>> 6 WSL2) put it in the /usr/bin directory
> >>>>>
> >>>>> chmod +x myaxiom
> >>>>> sudo cp myaxiom /usr/bin/myaxiom
> >>>>>
> >>>>> 7 WINDOWS) start the X11 server
> >>>>>
> >>>>> (XMing XLaunch icon on your desktop)
> >>>>>
> >>>>> 8 WINDOWS) run myaxiom from PowerShell (this should start axiom with graphics available)
> >>>>>
> >>>>> wsl myaxiom
> >>>>>
> >>>>> 9 WINDOWS) make a PowerShell desktop
> >>>>>
> >>>>> https://superuser.com/questions/886951/run-powershell-script-when-you-open-powershell
> >>>>>
> >>>>> Tim
> >>>>>
> >>>>> On 8/13/21, Tim Daly wrote:
> >>>>>> A great deal of thought is directed toward making the SANE version of Axiom as flexible as possible, decoupling mechanism from theory.
> >>>>>>
> >>>>>> An interesting publication by Brian Cantwell Smith [0], "Reflection and Semantics in LISP", seems to contain interesting ideas related to our goal. Of particular interest is the ability to reason about and perform self-referential manipulations. In a dependently-typed system it seems interesting to be able to "adapt" code to handle run-time computed arguments to dependent functions.
> >>>>>> The abstract:
> >>>>>>
> >>>>>> "We show how a computational system can be constructed to "reason", effectively and consequentially, about its own inferential processes. The analysis proceeds in two parts. First, we consider the general question of computational semantics, rejecting traditional approaches, and arguing that the declarative and procedural aspects of computational symbols (what they stand for, and what behaviour they engender) should be analysed independently, in order that they may be coherently related. Second, we investigate self-referential behavior in computational processes, and show how to embed an effective procedural model of a computational calculus within that calculus (a model not unlike a meta-circular interpreter, but connected to the fundamental operations of the machine in such a way as to provide, at any point in a computation, fully articulated descriptions of the state of that computation, for inspection and possible modification). In terms of the theories that result from these investigations, we present a general architecture for procedurally reflective processes, able to shift smoothly between dealing with a given subject domain, and dealing with their own reasoning processes over that domain.
> >>>>>>
> >>>>>> An instance of the general solution is worked out in the context of an applicative language.
> >>>>>> Specifically, we present three successive dialects of LISP: 1-LISP, a distillation of current practice, for comparison purposes; 2-LISP, a dialect constructed in terms of our rationalised semantics, in which the concept of evaluation is rejected in favour of independent notions of simplification and reference, and in which the respective categories of notation, structure, semantics, and behaviour are strictly aligned; and 3-LISP, an extension of 2-LISP endowed with reflective powers."
> >>>>>>
> >>>>>> Axiom SANE builds dependent types on the fly. The ability to access both the reflection of the tower of algebra and the reflection of the tower of proofs at the time of construction makes the construction of a new domain or specific algorithm easier and more general.
> >>>>>>
> >>>>>> This is of particular interest because one of the efforts is to build "all the way down to the metal". If each layer is constructed on top of previous proven layers and the new layer can "reach below" to lower layers, then the tower of layers can be built without duplication.
> >>>>>>
> >>>>>> Tim
> >>>>>>
> >>>>>> [0] Smith, Brian Cantwell "Reflection and Semantics in LISP" POPL '84: Proceedings of the 11th ACM SIGACT-SIGPLAN Symposium on Principles of Programming Languages, January 1984, pages 23-35. https://doi.org/10.1145/800017.800513
> >>>>>>
> >>>>>> On 6/29/21, Tim Daly wrote:
> >>>>>>> Having spent time playing with hardware it is perfectly clear that future computational mathematics efforts need to adapt to using parallel processing.
> >>>>>>>
> >>>>>>> I've spent a fair bit of time thinking about structuring Axiom to be parallel.
> >>>>>>> Most past efforts have tried to focus on making a particular algorithm parallel, such as a matrix multiply.
> >>>>>>>
> >>>>>>> But I think that it might be more effective to make each domain run in parallel. A computation crosses multiple domains so a particular computation could involve multiple parallel copies.
> >>>>>>>
> >>>>>>> For example, computing the Cylindrical Algebraic Decomposition could recursively decompose the plane. Indeed, any tree-recursive algorithm could be run in parallel "in the large" by creating new running copies of the domain for each sub-problem.
> >>>>>>>
> >>>>>>> So the question becomes, how does one manage this?
> >>>>>>>
> >>>>>>> A similar problem occurs in robotics where one could have multiple wheels, arms, propellers, etc. that need to act independently but in coordination.
> >>>>>>>
> >>>>>>> The robot solution uses ROS2. The three ideas are ROSCORE, TOPICS with publish/subscribe, and SERVICES with request/response. These are communication paths defined between processes.
> >>>>>>>
> >>>>>>> ROS2 has a "roscore" which is basically a phonebook of "topics". Any process can create or look up the current active topics, e.g.:
> >>>>>>>
> >>>>>>> rosnode list
> >>>>>>>
> >>>>>>> TOPICS:
> >>>>>>>
> >>>>>>> Any process can PUBLISH a topic (which is basically a typed data structure), e.g. the topic /hw with the String data "Hello World":
> >>>>>>>
> >>>>>>> rostopic pub /hw std_msgs/String "Hello, World"
> >>>>>>>
> >>>>>>> Any process can SUBSCRIBE to a topic, such as /hw, and get a copy of the data, e.g.:
> >>>>>>>
> >>>>>>> rostopic echo /hw ==> "Hello, World"
> >>>>>>>
> >>>>>>> Publishers talk, subscribers listen.
> >>>>>>>
> >>>>>>> SERVICES:
> >>>>>>>
> >>>>>>> Any process can make a REQUEST of a SERVICE and get a RESPONSE. This is basically a remote function call.
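The TOPICS idea above can be sketched as an in-process toy in Python. This is illustrative only (real ROS2 topics cross process and machine boundaries); the Core class and its names are invented for the sketch:

```python
# Minimal "roscore"-style phonebook: any party can publish to a topic
# and every subscriber gets a copy of the data.

class Core:
    def __init__(self):
        self.topics = {}          # topic name -> list of subscriber callbacks

    def subscribe(self, topic, callback):
        # Subscribers listen: register a callback under the topic name.
        self.topics.setdefault(topic, []).append(callback)

    def publish(self, topic, data):
        # Publishers talk: every subscriber receives a copy.
        for callback in self.topics.get(topic, []):
            callback(data)

core = Core()
received = []
core.subscribe("/hw", received.append)   # analogous to: rostopic echo /hw
core.publish("/hw", "Hello, World")      # analogous to: rostopic pub /hw ...
print(received)
```

A SERVICE (request/response) would just be a callback that returns a value to the requester, i.e. a remote function call.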
> >>>>>>>
> >>>>>>> Axiom in parallel?
> >>>>>>>
> >>>>>>> So domains could run, each in its own process. Each could provide services, one for each function. Any other process could request a computation and get the result as a response. Domains could request services from other domains, either waiting for responses or continuing while the response is being computed.
> >>>>>>>
> >>>>>>> The output could be sent anywhere, to a terminal, to a browser, to a network, or to another process using the publish/subscribe protocol, potentially all at the same time since there can be many subscribers to a topic.
> >>>>>>>
> >>>>>>> Available domains could be dynamically added by announcing themselves as new "topics" and could be dynamically looked up at runtime.
> >>>>>>>
> >>>>>>> This structure allows function-level / domain-level parallelism. It is very effective in the robot world and I think it might be a good structuring mechanism to allow computational mathematics to take advantage of multiple processors in a disciplined fashion.
> >>>>>>>
> >>>>>>> Axiom has a thousand domains and each could run on its own core.
> >>>>>>>
> >>>>>>> In addition, notice that each domain is independent of the others. So if we want to use BLAS Fortran code, it could just be another service node. In fact, any "foreign function" could transparently cooperate in a distributed Axiom.
> >>>>>>>
> >>>>>>> Another key feature is that proofs can be "by node".
> >>>>>>>
> >>>>>>> Tim
> >>>>>>>
> >>>>>>> On 6/5/21, Tim Daly wrote:
> >>>>>>>> Axiom is based on first-class dependent types. Deciding when two types are equivalent may involve computation.
> >>>>>>>> See Christiansen, David Thrane "Checking Dependent Types with Normalization by Evaluation" (2019).
> >>>>>>>>
> >>>>>>>> This puts an interesting constraint on building types. The constructed type has to export a function to decide if a given type is "equivalent" to itself.
> >>>>>>>>
> >>>>>>>> The notion of "equivalence" might involve category ideas of natural transformation and univalence. Sigh.
> >>>>>>>>
> >>>>>>>> That's an interesting design point.
> >>>>>>>>
> >>>>>>>> Tim
> >>>>>>>>
> >>>>>>>> On 5/5/21, Tim Daly wrote:
> >>>>>>>>> It is interesting that programmers' eyes and expectations adapt to the tools they use. For instance, I use emacs and expect to work directly in files and multiple buffers. When I try to use one of the many IDE tools I find they tend to "get in the way". I already know or can quickly find whatever they try to tell me. If you use an IDE you probably find emacs "too sparse" for programming.
> >>>>>>>>>
> >>>>>>>>> Recently I've been working in a sparse programming environment. I'm exploring the question of running a proof checker in an FPGA. The FPGA development tools are painful at best and not intuitive since you SEEM to be programming but you're actually describing hardware gates, connections, and timing. This is an environment where everything happens all-at-once and all-the-time (like the circuits in your computer). It is the "assembly language of circuits". Naturally, my eyes have adapted to this rather raw level.
> >>>>>>>>>
> >>>>>>>>> That said, I'm normally doing literate programming all the time. My typical file is a document which is a mixture of latex and lisp. It is something of a shock to return to that world.
> >>>>>>>>> It is clear why people who program in Python find lisp to be a "sea of parens". Yet as a lisp programmer, I don't even see the parens, just code.
> >>>>>>>>>
> >>>>>>>>> It takes a few minutes in a literate document to adapt vision to see the latex / lisp combination as natural. The latex markup, like the lisp parens, eventually just disappears. What remains is just lisp and natural language text.
> >>>>>>>>>
> >>>>>>>>> This seems painful at first but eyes quickly adapt. The upside is that there is always a "finished" document that describes the state of the code. The overhead of writing a paragraph to describe a new function, or changing a paragraph to describe the changed function, is very small.
> >>>>>>>>>
> >>>>>>>>> Using a Makefile I latex the document to generate a current PDF and then I extract, load, and execute the code. This loop catches errors in both the latex and the source code. Keeping an open file in my pdf viewer shows all of the changes in the document after every run of make. That way I can edit the book as easily as the code.
> >>>>>>>>>
> >>>>>>>>> Ultimately I find that writing the book while writing the code is more productive. I don't have to remember why I wrote something since the explanation is already there.
> >>>>>>>>>
> >>>>>>>>> We all have our own way of programming and our own tools. But I find literate programming to be a real advance over IDE-style programming and "raw code" programming.
> >>>>>>>>>
> >>>>>>>>> Tim
> >>>>>>>>>
> >>>>>>>>> On 2/27/21, Tim Daly wrote:
> >>>>>>>>>> The systems I use have the interesting property of "Living within the compiler".
> >>>>>>>>>>
> >>>>>>>>>> Lisp, Forth, Emacs, and other systems that present themselves through the Read-Eval-Print-Loop (REPL) allow the ability to deeply interact with the system, shaping it to your need.
> >>>>>>>>>>
> >>>>>>>>>> My current thread of study is software architecture. See
> >>>>>>>>>> https://www.youtube.com/watch?v=W2hagw1VhhI&feature=youtu.be
> >>>>>>>>>> and https://www.georgefairbanks.com/videos/
> >>>>>>>>>>
> >>>>>>>>>> My current thinking on SANE involves the ability to dynamically define categories, representations, and functions, along with "composition functions" that permit choosing a combination at the time of use.
> >>>>>>>>>>
> >>>>>>>>>> You might want a domain for handling polynomials. There are a lot of choices, depending on your use case. You might want different representations. For example, you might want dense, sparse, recursive, or "machine compatible fixnums" (e.g. to interface with C code). If these don't exist it ought to be possible to create them. Such "lego-like" building blocks require careful thought about creating "fully factored" objects.
> >>>>>>>>>>
> >>>>>>>>>> Given that goal, the traditional barrier of "compiler" vs "interpreter" does not seem useful. It is better to "live within the compiler", which gives the ability to define new things "on the fly".
> >>>>>>>>>>
> >>>>>>>>>> Of course, the SANE compiler is going to want an associated proof of the functions you create along with the other parts such as its category hierarchy and representation properties.
> >>>>>>>>>>
> >>>>>>>>>> There is no such thing as a simple job.
:-) > >>>>>>>>>> > >>>>>>>>>> Tim > >>>>>>>>>> > >>>>>>>>>> > >>>>>>>>>> On 2/18/21, Tim Daly wrote: > >>>>>>>>>>> The Axiom SANE compiler / interpreter has a few design points.= > >>>>>>>>>>> > >>>>>>>>>>> 1) It needs to mix interpreted and compiled code in the same > >>>>>>>>>>> function. > >>>>>>>>>>> SANE allows dynamic construction of code as well as dynamic ty= pe > >>>>>>>>>>> construction at runtime. Both of these can occur in a runtime > >>>>>>>>>>> object. > >>>>>>>>>>> So there is potentially a mixture of interpreted and compiled > >>>>>>>>>>> code. > >>>>>>>>>>> > >>>>>>>>>>> 2) It needs to perform type resolution at compile time without= > >>>>>>>>>>> overhead > >>>>>>>>>>> where possible. Since this is not always possible there needs t= o > >>>>>>>>>>> be > >>>>>>>>>>> a "prefix thunk" that will perform the resolution. Trivially, > >>>>>>>>>>> for > >>>>>>>>>>> example, > >>>>>>>>>>> if we have a + function we need to type-resolve the arguments.= > >>>>>>>>>>> > >>>>>>>>>>> However, if we can prove at compile time that the types are bo= th > >>>>>>>>>>> bounded-NNI and the result is bounded-NNI (i.e. fixnum in lisp= ) > >>>>>>>>>>> then we can inline a call to + at runtime. If not, we might ha= ve > >>>>>>>>>>> + applied to NNI and POLY(FLOAT), which requires a thunk to > >>>>>>>>>>> resolve types. The thunk could even "specialize and compile" > >>>>>>>>>>> the code before executing it. > >>>>>>>>>>> > >>>>>>>>>>> It turns out that the Forth implementation of > >>>>>>>>>>> "threaded-interpreted" > >>>>>>>>>>> languages model provides an efficient and effective way to do > >>>>>>>>>>> this.[0] > >>>>>>>>>>> Type resolution can be "inserted" in intermediate thunks. > >>>>>>>>>>> The model also supports dynamic overloading and tail recursion= . > >>>>>>>>>>> > >>>>>>>>>>> Combining high-level CLOS code with low-level threading gives a= n > >>>>>>>>>>> easy to understand and robust design. 
> >>>>>>>>>>> > >>>>>>>>>>> Tim > >>>>>>>>>>> > >>>>>>>>>>> [0] Loeliger, R.G. "Threaded Interpretive Languages" (1981) > >>>>>>>>>>> ISBN 0-07-038360-X > >>>>>>>>>>> > >>>>>>>>>>> > >>>>>>>>>>> > >>>>>>>>>>> > >>>>>>>>>>> On 2/5/21, Tim Daly wrote: > >>>>>>>>>>>> I've worked hard to make Axiom depend on almost no other > >>>>>>>>>>>> tools so that it would not get caught by "code rot" of > >>>>>>>>>>>> libraries. > >>>>>>>>>>>> > >>>>>>>>>>>> However, I'm also trying to make the new SANE version much > >>>>>>>>>>>> easier to understand and debug.To that end I've been > >>>>>>>>>>>> experimenting > >>>>>>>>>>>> with some ideas. > >>>>>>>>>>>> > >>>>>>>>>>>> It should be possible to view source code, of course. But the= > >>>>>>>>>>>> source > >>>>>>>>>>>> code is not the only, nor possibly the best, representation o= f > >>>>>>>>>>>> the > >>>>>>>>>>>> ideas. > >>>>>>>>>>>> In particular, source code gets compiled into data structures= . > >>>>>>>>>>>> In > >>>>>>>>>>>> Axiom > >>>>>>>>>>>> these data structures really are a graph of related structure= s. > >>>>>>>>>>>> > >>>>>>>>>>>> For example, looking at the gcd function from NNI, there is t= he > >>>>>>>>>>>> representation of the gcd function itself. But there is also a= > >>>>>>>>>>>> structure > >>>>>>>>>>>> that is the REP (and, in the new system, is separate from the= > >>>>>>>>>>>> domain). > >>>>>>>>>>>> > >>>>>>>>>>>> Further, there are associated specification and proof > >>>>>>>>>>>> structures. > >>>>>>>>>>>> Even > >>>>>>>>>>>> further, the domain inherits the category structures, and fro= m > >>>>>>>>>>>> those > >>>>>>>>>>>> it > >>>>>>>>>>>> inherits logical axioms and definitions through the proof > >>>>>>>>>>>> structure. > >>>>>>>>>>>> > >>>>>>>>>>>> Clearly the gcd function is a node in a much larger graph > >>>>>>>>>>>> structure. 
> >>>>>>>>>>>> > >>>>>>>>>>>> When trying to decide why code won't compile it would be usef= ul > >>>>>>>>>>>> to > >>>>>>>>>>>> be able to see and walk these structures. I've thought about > >>>>>>>>>>>> using > >>>>>>>>>>>> the > >>>>>>>>>>>> browser but browsers are too weak. Either everything has to b= e > >>>>>>>>>>>> "in > >>>>>>>>>>>> a > >>>>>>>>>>>> single tab to show the graph" or "the nodes of the graph are i= n > >>>>>>>>>>>> different > >>>>>>>>>>>> tabs". Plus, constructing dynamic graphs that change as the > >>>>>>>>>>>> software > >>>>>>>>>>>> changes (e.g. by loading a new spad file or creating a new > >>>>>>>>>>>> function) > >>>>>>>>>>>> represents the huge problem of keeping the browser "in sync > >>>>>>>>>>>> with > >>>>>>>>>>>> the > >>>>>>>>>>>> Axiom workspace". So something more dynamic and embedded is > >>>>>>>>>>>> needed. > >>>>>>>>>>>> > >>>>>>>>>>>> Axiom source gets compiled into CLOS data structures. Each of= > >>>>>>>>>>>> these > >>>>>>>>>>>> new SANE structures has an associated surface representation,= > >>>>>>>>>>>> so > >>>>>>>>>>>> they > >>>>>>>>>>>> can be presented in user-friendly form. > >>>>>>>>>>>> > >>>>>>>>>>>> Also, since Axiom is literate software, it should be possible= > >>>>>>>>>>>> to > >>>>>>>>>>>> look > >>>>>>>>>>>> at > >>>>>>>>>>>> the code in its literate form with the surrounding explanatio= n. > >>>>>>>>>>>> > >>>>>>>>>>>> Essentially we'd like to have the ability to "deep dive" into= > >>>>>>>>>>>> the > >>>>>>>>>>>> Axiom > >>>>>>>>>>>> workspace, not only for debugging, but also for understanding= > >>>>>>>>>>>> what > >>>>>>>>>>>> functions are used, where they come from, what they inherit, > >>>>>>>>>>>> and > >>>>>>>>>>>> how they are used in a computation. > >>>>>>>>>>>> > >>>>>>>>>>>> To that end I'm looking at using McClim, a lisp windowing > >>>>>>>>>>>> system. 
> >>>>>>>>>>>> Since the McClim windows would be part of the lisp image, the= y > >>>>>>>>>>>> have > >>>>>>>>>>>> access to display (and modify) the Axiom workspace at all > >>>>>>>>>>>> times. > >>>>>>>>>>>> > >>>>>>>>>>>> The only hesitation is that McClim uses quicklisp and drags i= n > >>>>>>>>>>>> a > >>>>>>>>>>>> lot > >>>>>>>>>>>> of other subsystems. It's all lisp, of course. > >>>>>>>>>>>> > >>>>>>>>>>>> These ideas aren't new. They were available on Symbolics > >>>>>>>>>>>> machines, > >>>>>>>>>>>> a truly productive platform and one I sorely miss. > >>>>>>>>>>>> > >>>>>>>>>>>> Tim > >>>>>>>>>>>> > >>>>>>>>>>>> > >>>>>>>>>>>> > >>>>>>>>>>>> On 1/19/21, Tim Daly wrote: > >>>>>>>>>>>>> Also of interest is the talk > >>>>>>>>>>>>> "The Unreasonable Effectiveness of Dynamic Typing for > >>>>>>>>>>>>> Practical > >>>>>>>>>>>>> Programs" > >>>>>>>>>>>>> https://vimeo.com/74354480 > >>>>>>>>>>>>> which questions whether static typing really has any benefit= . > >>>>>>>>>>>>> > >>>>>>>>>>>>> Tim > >>>>>>>>>>>>> > >>>>>>>>>>>>> > >>>>>>>>>>>>> On 1/19/21, Tim Daly wrote: > >>>>>>>>>>>>>> Peter Naur wrote an article of interest: > >>>>>>>>>>>>>> http://pages.cs.wisc.edu/~remzi/Naur.pdf > >>>>>>>>>>>>>> > >>>>>>>>>>>>>> In particular, it mirrors my notion that Axiom needs > >>>>>>>>>>>>>> to embrace literate programming so that the "theory > >>>>>>>>>>>>>> of the problem" is presented as well as the "theory > >>>>>>>>>>>>>> of the solution". I quote the introduction: > >>>>>>>>>>>>>> > >>>>>>>>>>>>>> > >>>>>>>>>>>>>> > >>>>>>>>>>>>>> This article is, to my mind, the most accurate account > >>>>>>>>>>>>>> of what goes on in designing and coding a program. > >>>>>>>>>>>>>> I refer to it regularly when discussing how much > >>>>>>>>>>>>>> documentation to create, how to pass along tacit > >>>>>>>>>>>>>> knowledge, and the value of the XP's metaphor-setting > >>>>>>>>>>>>>> exercise. It also provides a way to examine a methodolgy's > >>>>>>>>>>>>>> economic structure. 
> >>>>>>>>>>>>>> > >>>>>>>>>>>>>> In the article, which follows, note that the quality of the= > >>>>>>>>>>>>>> designing programmer's work is related to the quality of > >>>>>>>>>>>>>> the match between his theory of the problem and his theory > >>>>>>>>>>>>>> of the solution. Note that the quality of a later > >>>>>>>>>>>>>> programmer's > >>>>>>>>>>>>>> work is related to the match between his theories and the > >>>>>>>>>>>>>> previous programmer's theories. > >>>>>>>>>>>>>> > >>>>>>>>>>>>>> Using Naur's ideas, the designer's job is not to pass along= > >>>>>>>>>>>>>> "the design" but to pass along "the theories" driving the > >>>>>>>>>>>>>> design. > >>>>>>>>>>>>>> The latter goal is more useful and more appropriate. It als= o > >>>>>>>>>>>>>> highlights that knowledge of the theory is tacit in the > >>>>>>>>>>>>>> owning, > >>>>>>>>>>>>>> and > >>>>>>>>>>>>>> so passing along the thoery requires passing along both > >>>>>>>>>>>>>> explicit > >>>>>>>>>>>>>> and tacit knowledge. > >>>>>>>>>>>>>> > >>>>>>>>>>>>>> Tim > >>>>>>>>>>>>>> > >>>>>>>>>>>>> > >>>>>>>>>>>> > >>>>>>>>>>> > >>>>>>>>>> > >>>>>>>>> > >>>>>>>> > >>>>>>> > >>>>>> > >>>>> > >>>> > >>> > >> > > --Apple-Mail-18CBE148-0FFC-47D9-9D95-5D7EFE73CB6B Content-Transfer-Encoding: quoted-printable Content-Type: text/html; charset=utf-8
The setup of a spreadsheet is similar to the design of some prototype-based programming languages. I have recently been learning about the Self programming language, a descendant of Smalltalk, which uses prototypes rather than classes. This creates a more dynamic environment.

Like Smalltalk, development is done using a virtual machine with a programming environment. The objects in the Self environment can be configured using direct manipulation to define behavior and inheritance.

https://selflanguage.org/

It should be possible to develop a similar prototype system based in lisp, if it has not been done already.
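The delegation idea at the heart of Self can be sketched in a few lines. The sketch below is illustrative Python, not Self or Axiom code; the `Proto` class and its slot names are invented for the example.

```python
class Proto:
    """A minimal prototype object: local slots plus delegation to a parent."""
    def __init__(self, parent=None, **slots):
        self.parent = parent
        self.slots = dict(slots)

    def lookup(self, name):
        # Walk the parent chain, Self-style, until a slot is found.
        obj = self
        while obj is not None:
            if name in obj.slots:
                return obj.slots[name]
            obj = obj.parent
        raise AttributeError(name)

    def clone(self, **overrides):
        # A new object delegating to this one; overrides shadow parent slots.
        return Proto(parent=self, **overrides)

# A "point" prototype; clones inherit behavior by delegation, not by class.
point = Proto(x=0, y=0,
              magnitude=lambda obj: (obj.lookup("x")**2 + obj.lookup("y")**2) ** 0.5)
p = point.clone(x=3, y=4)
print(p.lookup("magnitude")(p))   # method found in the parent, data in the clone
```

Changing a slot in the parent immediately changes the behavior of every clone that delegates to it, which is the "more dynamic environment" the message describes.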

On Mar 23, 2022, at 1:53, Tim Daly <axiomcas@gmail.com> wrote:

I have a deep interest in self-modifying programs. These are trivial to create in lisp. The question is how to structure a computer algebra program so it could be dynamically modified but still well-structured. This is especially important since the user can create new logical types at runtime.

One innovation, which I have never seen anywhere, is to structure the program in a spreadsheet fashion. Spreadsheet cells can reference other spreadsheet cells. Spreadsheets have a well-defined evaluation method. This is equivalent to a form of object-oriented programming where each cell is an object and has a well-defined inheritance hierarchy through other cells. Simple manipulation allows insertion, deletion, or modification of these chains at any time.

So by structuring a computer algebra program like a spreadsheet I am able to dynamically adjust a program to fit the problem to be solved, track which cells are needed, and optimize the program.

Spreadsheets do this naturally but I'm unaware of any non-spreadsheet program that relies on that organization.

Tim
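A minimal sketch of the spreadsheet-cell idea, in illustrative Python (the `Cell` class is invented for the example; the actual program would be lisp): cells hold either a constant or a formula over other cells, and evaluation pulls values on demand, so any cell in the chain can be modified at any time.

```python
class Cell:
    """A spreadsheet-style cell: either a constant or a formula over other cells."""
    def __init__(self, value=None, formula=None):
        self.value = value        # constant, if no formula is given
        self.formula = formula    # zero-argument callable referencing other cells

    def get(self):
        # Well-defined evaluation: formulas pull their inputs on demand,
        # so replacing a referenced cell re-routes the whole computation.
        return self.formula() if self.formula else self.value

a = Cell(value=6)
b = Cell(value=7)
product = Cell(formula=lambda: a.get() * b.get())

print(product.get())      # 42
a.value = 10              # "modify a chain at any time"
print(product.get())      # 70: the change propagates on the next evaluation
```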


On Sun, Mar 13, 2022 at 4:01 AM Tim Daly <axiomcas@gmail.com> wrote:

Axiom has an awkward 'attributes' category structure.

In the SANE version it is clear that these attributes are much closer to logic 'definitions'. As a result, one of the changes is to create a new 'category'-type structure for definitions. There will be a new keyword, like the category keyword, 'definition'.

Tim



Any function can have ++X comments added. These will appear as examples when the function is shown with )display. For example, in PermutationGroup there is a function 'strongGenerators' defined as:

  strongGenerators : % -> L PERM S
    ++ strongGenerators(gp) returns strong generators for
    ++ the group gp.
    ++
    ++X S:List(Integer) := [1,2,3,4]
    ++X G := symmetricGroup(S)
    ++X strongGenerators(G)



Later, in the interpreter we see:

)d op strongGenerators

  There is one exposed function called strongGenerators :
      [1] PermutationGroup(D2) -> List(Permutation(D2)) from
          PermutationGroup(D2)
          if D2 has SETCAT

  Examples of strongGenerators from PermutationGroup

  S:List(Integer) := [1,2,3,4]
  G := symmetricGroup(S)
  strongGenerators(G)




This will show a working example for functions that the user can copy and use. It is especially useful to show how to construct working arguments.

These "example" functions are run at build time when the make command looks like

    make TESTSET=alltests

I hope to add this documentation to all Axiom functions.

In addition, the plan is to add these function calls to the usual test documentation. That means that all of these examples will be run and show their output in the final distribution (mnt/ubuntu/doc/src/input/*.dvi files) so the user can view the expected output.

Tim
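As a sketch of the kind of processing involved, the following illustrative Python pulls the ++X example lines out of SPAD source and groups them by function. This is not Axiom's actual )display or TESTSET machinery; the function name and regexes are invented for the example.

```python
import re

def extract_examples(spad_source):
    """Collect the ++X example lines that follow each SPAD signature.

    Returns {function_name: [example lines]}. An illustrative sketch,
    not Axiom's actual build machinery.
    """
    examples, current = {}, None
    for line in spad_source.splitlines():
        sig = re.match(r"\s*(\w+)\s*:.*->", line)   # e.g. "strongGenerators : % -> L PERM S"
        if sig:
            current = sig.group(1)
        ex = re.match(r"\s*\+\+X\s?(.*)", line)     # a ++X example line
        if ex and current:
            examples.setdefault(current, []).append(ex.group(1))
    return examples

source = """
  strongGenerators : % -> L PERM S
    ++ strongGenerators(gp) returns strong generators for
    ++ the group gp.
    ++
    ++X S:List(Integer) := [1,2,3,4]
    ++X G := symmetricGroup(S)
    ++X strongGenerators(G)
"""
print(extract_examples(source)["strongGenerators"])
```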




It turns out that creating SPAD-looking output is trivial in Common Lisp. Each class can have a custom print routine, so signatures and ++ comments can each be printed with their own format.

To ensure that I maintain compatibility I'll be printing the categories and domains so they look like SPAD code, at least until I get the proof technology integrated. I will probably specialize the proof printers to look like the original LEAN proof syntax.

Internally, however, it will all be Common Lisp.

Common Lisp makes so many desirable features so easy. It is possible to trace dynamically at any level. One could even write a trace that showed how Axiom arrived at the solution. Any domain could have special-case output syntax without affecting any other domain, so one could write a tree-like output for proofs. Using greek characters is trivial, so the input and output notation is more mathematical.

Tim
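In Common Lisp this per-class printing is done with class-specific print methods. The same idea can be sketched in illustrative Python, where each class owns its own surface syntax; the class names and SPAD-ish formats here are invented for the example.

```python
class Signature:
    """One exported operation; prints itself in SPAD-looking form."""
    def __init__(self, name, source, target):
        self.name, self.source, self.target = name, source, target

    def __str__(self):
        # The class's own "print routine": SPAD-style signature syntax.
        return f"{self.name} : {self.source} -> {self.target}"

class Comment:
    """A ++ documentation comment; prints with its own format."""
    def __init__(self, text):
        self.text = text

    def __str__(self):
        return f"  ++ {self.text}"

# Each object knows its own surface representation, so mixed internal
# structures still print as SPAD-like source.
sig = Signature("strongGenerators", "%", "List(Permutation(S))")
doc = Comment("strongGenerators(gp) returns strong generators for gp")
print(sig)
print(doc)
```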


On Thu, Feb 24, 2022 at 10:24 AM Tim Daly <axiomcas@gmail.com> wrote:

Axiom's SPAD code compiles to Common Lisp. The AKCL version of Common Lisp compiles to C. Three languages and two compilers is a lot to maintain. Further, there are very few people able to write SPAD and even fewer people able to maintain it.

I've decided that the SANE version of Axiom will be implemented in pure Common Lisp. I've outlined Axiom's category / type hierarchy in the Common Lisp Object System (CLOS). I am now experimenting with re-writing the functions into Common Lisp.

This will have several long-term effects. It simplifies the implementation issues. SPAD code blocks a lot of actions and optimizations that Common Lisp provides. The Common Lisp language has many more people who can read, modify, and maintain code. It provides for interoperability with other Common Lisp projects with no effort. Common Lisp is an international standard, which ensures that the code will continue to run.

The input / output mathematics will remain the same. Indeed, with the new generalizations for first-class dependent types it will be more general.

This is a big change, similar to eliminating BOOT code and moving to Literate Programming. This will provide a better platform for future research work. Current research is focused on merging Axiom's computer algebra mathematics with Lean's proof language. The goal is to create a system for "computational mathematics".

Research is the whole point of Axiom.

Tim



=
On Sat, Jan 22, 2022 at 9:16 PM Tim Daly <axiomcas@gmail.com> wrote:

One of the interesting questions when obtaining a result is "what functions were called and what was their return value?" Otherwise known as the "show your work" idea.

There is an idea called the "writer monad" [0], usually implemented to facilitate logging. We can exploit this idea to provide "show your work" capability. Each function can provide this information inside the monad, enabling the question to be answered at any time.

For those unfamiliar with the monad idea, the best explanation I've found is this video [1].

Tim

[0] Deriving the writer monad from first principles
[1] The Absolute Best Intro to Monads for Software Engineers
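A minimal sketch of the writer monad idea in illustrative Python (not Axiom code): each step returns its value paired with a log of the work it did, and `bind` threads the log along, so the "show your work" record is available at any time.

```python
def unit(value):
    """Wrap a value with an empty log: the monad's 'return'."""
    return (value, [])

def bind(mval, fn):
    """Apply fn (a -> (b, log)) to the value and accumulate the log."""
    value, log = mval
    result, entries = fn(value)
    return (result, log + entries)

# Each step reports its own call and return value inside the monad.
def double(x):
    return (2 * x, [f"double({x}) = {2 * x}"])

def add_three(x):
    return (x + 3, [f"add_three({x}) = {x + 3}"])

result, work = bind(bind(unit(5), double), add_three)
print(result)   # 13
print(work)     # the recorded "work": each call and its return value
```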

On Mon, Dec 13, 2021 at 12:30 AM Tim Daly <axiomcas@gmail.com> wrote:

... (snip)...

Common Lisp has an "open compiler". That allows the ability to deeply modify compiler behavior using compiler macros and macros in general. CLOS takes advantage of this to add typed behavior into the compiler in a way that ALLOWS strict typing such as found in constructive type theory and ML. Judgments, a la Crary, are front-and-center.

Whether you USE the discipline afforded is the real question.

Indeed, the Axiom research struggle is essentially one of how to have a disciplined use of first-class dependent types. The struggle raises issues of, for example, compiling a dependent type whose argument is recursive in the compiled type. Since the new type is first-class it can be constructed at what you improperly call "run-time". However, it appears that the recursive type may have to call the compiler at each recursion to generate the next step, since in some cases it cannot generate "closed code".

I am embedding proofs (in LEAN language) into the type hierarchy so that theorems, which depend on the type hierarchy, are correctly inherited. The compiler has to check the proofs of functions at compile time using these. Hacking up nonsense just won't cut it. Think of the problem of embedding LEAN proofs in ML or ML in LEAN. (Actually, Jeremy Avigad might find that research interesting.)

So Matrix(3,3,Float) has inverses (assuming Float is a field (cough)). The type inherits this theorem and proofs of functions can use this. But Matrix(3,4,Integer) does not have inverses, so the proofs cannot use this. The type hierarchy has to ensure that the proper theorems get inherited.

Making proof technology work at compile time is hard. (Worse yet, LEAN is a moving target. Sigh.)
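A toy sketch of the first-class type idea in illustrative Python: the type is constructed at "construction time", and an operation (standing in here for an inherited theorem) attaches only when the assumptions hold, e.g. only square matrices get an inverse. All names are invented for the example; this is not Axiom's mechanism.

```python
def make_matrix_type(rows, cols, field):
    """Construct a Matrix(rows, cols, field) type at runtime.

    The 'inverse' operation is attached only when the shape makes the
    theorem available (square matrices) -- a toy stand-in for conditionally
    inheriting theorems in a dependent-type hierarchy.
    """
    namespace = {"rows": rows, "cols": cols, "field": field}
    if rows == cols:
        namespace["inverse"] = lambda self: f"inverse over {field}"
    return type(f"Matrix_{rows}x{cols}_{field}", (object,), namespace)

Square = make_matrix_type(3, 3, "Float")
Rect = make_matrix_type(3, 4, "Integer")

print(hasattr(Square, "inverse"))   # True: the theorem applies to 3x3
print(hasattr(Rect, "inverse"))     # False: no inverse for 3x4
```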



On Thu, Nov 25, 2021 at 9:43 AM Tim Daly <axiomcas@gmail.com> wrote:

As you know I've been re-architecting Axiom to use first class dependent types and proving the algorithms correct. For example, the GCD of natural numbers or the GCD of polynomials.

The idea involves "boxing up" the proof with the algorithm (aka proof carrying code) in the ELF file (under a crypto hash so it can't be changed).

Once the code is running on the CPU, the proof is run in parallel on the field programmable gate array (FPGA). Intel data center servers have CPUs with built-in FPGAs these days.

There is a bit of a disconnect, though. The GCD code is compiled machine code but the proof is LEAN-level.

What would be ideal is if the compiler not only compiled the GCD code to machine code, it also compiled the proof to "machine code". That is, for each machine instruction, the FPGA proof checker would ensure that the proof was not violated at the individual instruction level.

What does it mean to "compile a proof to the machine code level"?

The Milawa effort (Myre14.pdf) does incremental proofs in layers. To quote from the article [0]:

   We begin with a simple proof checker, call it A, which is short
   enough to verify by the ``social process'' of mathematics -- and
   more recently with a theorem prover for a more expressive logic.

   We then develop a series of increasingly powerful proof checkers,
   call them B, C, D, and so on. We show each of these programs only
   accepts the same formulas as A, using A to verify B, and B to verify
   C, and so on. Then, since we trust A, and A says B is trustworthy, we
   can trust B. Then, since we trust B, and B says C is trustworthy, we
   can trust C.

This gives a technique for "compiling the proof" down to the machine code level. Ideally, the compiler would have judgments for each step of the compilation so that each compile step has a justification. I don't know of any compiler that does this yet. (References welcome).

At the machine code level, there are techniques that would allow the FPGA proof to "step in sequence" with the executing code. Some work has been done on using "Hoare Logic for Realistically Modelled Machine Code" (paper attached, Myre07a.pdf), "Decompilation into Logic -- Improved" (Myre12a.pdf).

So the game is to construct a GCD over some type (Nats, Polys, etc. Axiom has 22), compile the dependent type GCD to machine code. In parallel, the proof of the code is compiled to machine code. The pair is sent to the CPU/FPGA and, while the algorithm runs, the FPGA ensures the proof is not violated, instruction by instruction.

(I'm ignoring machine architecture issues such as pipelining, out-of-order, branch prediction, and other machine-level things to ponder. I'm looking at the RISC-V Verilog details by various people to understand better but it is still a "misty fog" for me.)

The result is proven code "down to the metal".

Tim
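The chain-of-trust idea in the Milawa quote can be illustrated with a toy sketch in Python. Here "proofs" are trivial arithmetic claims, and certification just means the candidate checker agrees with the trusted one on a test universe; real Milawa-style bootstrapping proves this agreement rather than sampling it. Everything here is invented for the example.

```python
# Toy "proofs": a proof is a (terms, claimed_sum) pair, valid iff the sum matches.
def checker_a(proof):
    """The small, trusted checker A (simple enough to audit by hand)."""
    terms, claimed = proof
    return sum(terms) == claimed

def checker_b(proof):
    """A 'more powerful' checker B; it must accept the same formulas as A."""
    terms, claimed = proof
    total = 0
    for t in terms:          # imagine this being faster or more general
        total += t
    return total == claimed

def certify(trusted, candidate, universe):
    """Use the trusted checker to certify the candidate, Milawa-style:
    the candidate is trustworthy if it agrees with the trusted checker
    on every proof in the universe."""
    return all(trusted(p) == candidate(p) for p in universe)

universe = [([1, 2, 3], 6), ([1, 2], 5), ([], 0)]
print(certify(checker_a, checker_b, universe))   # since we trust A, we may now trust B
```

Once B is certified, the same `certify` step can be repeated with B as the trusted checker to admit a checker C, and so on down the chain.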





On Sat, Nov 13, 2021 at 5:28 PM T= im Daly <axiomcas= @gmail.com> wrote:
Full support for general, first-class dependent= types requires
some changes to the Axiom design. That implies som= e language
design questions.

Given that m= athematics is such a general subject with a lot of
"local" notatio= n and ideas (witness logical judgment notation)
careful thought is= needed to design a language that is able to
handle a wide range.<= /div>

Normally language design is a two-level process. Th= e language
designer creates a language and then an implementation.= Various
design choices affect the final language.
<= br>
There is "The Metaobject Protocol" (MOP)
which encourages a three-level proces= s. The language designer
works at a Metalevel to design a fam= ily of languages, then the
language specializations, then the impl= ementation. A MOP design
allows the language user to optimize the l= anguage to their problem.

A simple paper on the sub= ject is "Metaobject Protocols"

Tim
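Python's metaclasses give a small, illustrative taste of the metaobject idea: the metalevel intercepts class creation itself, so the "language" of class definition can be reshaped before any class exists. This sketch (all names invented) is not CLOS's MOP, but the three-level shape is the same.

```python
class Metalevel(type):
    """A tiny metaobject protocol: intercept class creation itself,
    here to make every class record its own defined operations."""
    def __new__(mcls, name, bases, namespace):
        namespace["operations"] = sorted(
            k for k, v in namespace.items() if callable(v))
        return super().__new__(mcls, name, bases, namespace)

class Ring(metaclass=Metalevel):
    def plus(self, a, b): return a + b
    def times(self, a, b): return a * b

# The metalevel shaped the base level: classes now describe themselves.
print(Ring.operations)
```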


On Mon, Oct 25, 2021 at 7:42 PM Tim Daly <axiomcas@gmail.com> wrote:

On Thu, Oct 21, 2021 at 9:50 AM Tim Daly <axiomcas@gmail.com> wrote:

So the current struggle involves the categories in Axiom.

The categories, and the domains constructed using categories, are dependent types. When are dependent types "equal"? Well, hummmm, that depends on the arguments to the constructor.

But in order to decide(?) equality we have to evaluate the arguments (which themselves can be dependent types). Indeed, we may, and in general we must, evaluate the arguments at compile time (well, "construction time", as there isn't really a compiler / interpreter separation anymore.)

That raises the question of what "equality" means. This is not simply a "set equality" relation. It falls into the infinite-groupoid of homotopy type theory. In general it appears that deciding category / domain equivalence might force us to climb the type hierarchy.

Beyond that, there is the question of "which proof" applies to the resulting object. Proofs depend on their assumptions, which might be different for different constructions. As yet I have no clue how to "index" proofs based on their assumptions, nor how to connect these assumptions to the groupoid structure.

My brain hurts.

Tim


On Mon, Oct 18, 2021 at 2:00 AM Tim Daly <axiomcas@gmail.com> wrote:

"Birthing Computational Mathematics"

The Axiom SANE project is difficult at a very fundamental level. The title "SANE" was chosen due to the various words found in a thesaurus... "rational", "coherent", "judicious" and "sound".

These are very high level, amorphous ideas. But so is the design of SANE. Breaking away from tradition in computer algebra, type theory, and proof assistants is very difficult. Ideas tend to fall into standard jargon, which limits both the frame of thinking (e.g. dependent types) and the content (e.g. notation).

Questioning both frame and content is very difficult. It is hard to even recognize when they are accepted "by default" rather than "by choice". What does the idea "power tools" mean in a primitive, hand labor culture?

Christopher Alexander [0] addresses this problem in a lot of his writing. Specifically, in his book "Notes on the Synthesis of Form", in his chapter 5 "The Selfconscious Process", he addresses this problem directly. This is a "must read" book.

Unlike building design and construction, however, there are almost no constraints to use as guides. Alexander quotes Plato's Phaedrus:

  "First, the taking in of scattered particulars under
   one Idea, so that everyone understands what is being
   talked about ... Second, the separation of the Idea
   into parts, by dividing it at the joints, as nature
   directs, not breaking any limb in half as a bad
   carver might."

Lisp, which has been called "clay for the mind", can build virtually anything that can be thought. The "joints" are also "of one's choosing", so one is both carver and "nature".

Clearly the problem is no longer "the tools". *I* am the problem constraining the solution. Birthing this "new thing" is slow, difficult, and uncertain at best.

Tim

[0] Alexander, Christopher "Notes on the Synthesis of Form", Harvard University Press, 1964. ISBN 0-674-62751-2


On Sun, Oct 10, 2021 at 4:40 PM Tim Daly <axiomcas@gmail.com> wrote:

Re: writing a paper... I'm not connected to Academia so anything I'd write would never make it into print.

"Language level parsing" is still a long way off. The talk
by Guy Steele [2] highlights some of the problems we
currently face using mathematical metanotation.

For example, a professor I know at CCNY (City College
of New York) didn't understand Platzer's "funny
fraction notation" (proof judgements) despite being
an expert in Platzer's differential equations area.

Notation matters and is not widely common.

I spoke to Professor Black (in LTI) about using natural
language in the limited task of a human-robot cooperation
in changing a car tire.  I looked at the current machine
learning efforts. They are nowhere near anything but
toy systems, taking too long to train and are too fragile.

Instead I ended up using a combination of AIML [3]
(Artificial Intelligence Markup Language), the ALICE
Chatbot [4], Forgy's OPS5 rule based program [5],
and Fahlman's SCONE [6] knowledge base. It was
much less fragile in my limited domain problem.

I have no idea how to extend any system to deal with
even undergraduate mathematics parsing.

Nor do I have any idea how I would embed LEAN
knowledge into a SCONE database, although I
think the combination would be useful and interesting.

I do believe that, in the limited area of computational
mathematics, we are capable of building robust, proven
systems that are quite general and extensible. As you
might have guessed I've given it a lot of thought over
the years :-)

A mathematical language seems to need >6 components

1) We need some sort of a specification language, possibly
somewhat 'propositional' that introduces the assumptions
you mentioned (ref. your discussion of numbers being
abstract and ref. your discussion of relevant choice of
assumptions related to a problem).

This is starting to show up in the hardware area (e.g.
Lamport's TLC[0])

Of course, specifications relate to proving programs
and, as you recall, I got a cold reception from the
LEAN community about using LEAN for program proofs.

2) We need "scaffolding". That is, we need a theory
that can be reduced to some implementable form
that provides concept-level structure.

Axiom uses group theory for this. Axiom's "category"
structure has "Category" things like Ring. Claiming
to be a Ring brings in a lot of "Signatures" of functions
you have to implement to properly be a Ring.

Scaffolding provides a firm mathematical basis for
design. It provides a link between the concept of a
Ring and the expectations you can assume when
you claim your "Domain" "is a Ring". Category
theory might provide similar structural scaffolding
(eventually... I'm still working on that thought garden)
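The "claiming to be a Ring brings in Signatures" idea can be sketched with abstract base classes in illustrative Python. The signatures shown are invented for the example and are a tiny subset of Axiom's actual Ring category; the point is only that claiming the category obliges the domain to implement the slots.

```python
from abc import ABC, abstractmethod

class Ring(ABC):
    """Scaffolding 'Category': claiming to be a Ring obliges a Domain
    to implement these signatures."""
    @abstractmethod
    def zero(self): ...
    @abstractmethod
    def one(self): ...
    @abstractmethod
    def add(self, a, b): ...
    @abstractmethod
    def mul(self, a, b): ...

class IntegerDomain(Ring):
    """A 'Domain' that claims to be a Ring, so it must fill every slot;
    leaving one out makes the class uninstantiable."""
    def zero(self): return 0
    def one(self): return 1
    def add(self, a, b): return a + b
    def mul(self, a, b): return a * b

Z = IntegerDomain()                         # instantiable: all signatures exist
print(Z.mul(Z.add(Z.one(), Z.one()), 3))    # (1 + 1) * 3
```

A caller can now rely on the expectations the claim brings in: anything accepted as a Ring is known to provide `zero`, `one`, `add`, and `mul`.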

LEAN ought to have a textbook(s?) that structures
the world around some form of mathematics. It isn't
sufficient to say "undergraduate math" is the goal.
There needs to be some coherent organization so
people can bring ideas like Group Theory to the
organization. Which brings me to ...

3) We need "spreading". That is, we need to take
the various definitions and theorems in LEAN and
place them in their proper place in the scaffold.

For example, the Ring category needs the definitions
and theorems for a Ring included in the code for the
Ring category. Similarly, the Commutative category
needs the definitions and theorems that underlie
"commutative" included in the code.

That way, when you claim to be a "Commutative Ring"
you get both sets of definitions and theorems. That is,
the inheritance mechanism will collect up all of the
definitions and theorems and make them available
for proofs.

I am looking at LEAN's definitions and theorems with
an eye to "spreading" them into the group scaffold of
Axiom.

4) We need "carriers" (Axiom calls them representations,
aka "REP"). REPs allow data structures to be defined
independent of the implementation.

For example, Axiom can construct Polynomials that
have their coefficients in various forms of representation.
You can define "dense" (all coefficients in a list),
"sparse" (only non-zero coefficients), "recursive", etc.

A "dense polynomial" and a "sparse polynomial" work
exactly the same way as far as the user is concerned.
They both implement the same set of functions. There
is only a difference of representation for efficiency and
this only affects the implementation of the functions,
not their use.
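The dense/sparse idea can be sketched as follows (illustrative Python,
not Axiom's REP machinery; the class names are hypothetical):

```python
# Sketch: two representations of the same polynomial behind one
# interface; only the carrier differs, not the user-visible functions.

class DensePoly:
    """All coefficients in a list: coeffs[i] is the x^i coefficient."""
    def __init__(self, coeffs):
        self.coeffs = list(coeffs)
    def degree(self):
        return len(self.coeffs) - 1
    def evaluate(self, x):
        acc = 0
        for c in reversed(self.coeffs):   # Horner's rule
            acc = acc * x + c
        return acc

class SparsePoly:
    """Only non-zero coefficients, keyed by exponent."""
    def __init__(self, terms):
        self.terms = dict(terms)
    def degree(self):
        return max(self.terms)
    def evaluate(self, x):
        return sum(c * x**e for e, c in self.terms.items())

# The same polynomial 3x^2 + 1 in both representations:
d = DensePoly([1, 0, 3])
s = SparsePoly({0: 1, 2: 3})
print(d.evaluate(2), s.evaluate(2))   # 13 13
```

To the user both answer `degree()` and `evaluate()` identically; only the
implementations differ for efficiency.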

Axiom "got this wrong" because it didn't sufficiently
separate the REP from the "Domain". I plan to fix this.

LEAN ought to have a "data structures" subtree that
has all of the definitions and axioms for all of the
existing data structures (e.g. Red-Black trees). This
would be a good undergraduate project.

5) We need "Domains" (in Axiom speak). That is, we
need a box that holds all of the functions that implement
a "Domain". For example, a "Polynomial Domain" would
hold all of the functions for manipulating polynomials
(e.g. polynomial multiplication). The "Domain" box
is a dependent type that:

  A) has an argument list of "Categories" that this "Domain"
      box inherits. Thus, the "Integer Domain" inherits
      the definitions and axioms from "Commutative".

     Functions in the "Domain" box can now assume
     and use the properties of being commutative. Proofs
     of functions in this domain can use the definitions
     and proofs about being commutative.

  B) contains an argument that specifies the "REP"
      (aka, the carrier). That way you get all of the
      functions associated with the data structure
      available for use in the implementation.

      Functions in the Domain box can use all of
      the definitions and axioms about the representation
      (e.g. NonNegativeIntegers are always positive)

  C) contains local "spread" definitions and axioms
       that can be used in function proofs.

      For example, a "Square Matrix" domain would
      have local axioms that state that the matrix is
      always square. Thus, functions in that box could
      use these additional definitions and axioms in
      function proofs.

  D) contains local state. A "Square Matrix" domain
       would be constructed as a dependent type that
       specifies the size of the square (e.g. a 2x2
       matrix would have '2' as a dependent parameter).

  E) contains implementations of inherited functions.

       A "Category" could have a signature for a GCD
       function and the "Category" could have a default
       implementation. However, the "Domain" could
       have a locally more efficient implementation which
       overrides the inherited implementation.

      Axiom has about 20 GCD implementations that
      differ locally from the default in the category. They
      use properties known locally to be more efficient.

  F) contains local function signatures.

      A "Domain" gives the user more and more unique
      functions. The signatures have associated
      "pre- and post- conditions" that can be used
      as assumptions in the function proofs.

      Some of the user-available functions are only
      visible if the dependent type would allow them
      to exist. For example, a general Matrix domain
      would have fewer user functions than a Square
      Matrix domain.

      In addition, local "helper" functions need their
      own signatures that are not user visible.

  G) the function implementation for each signature.

       This is obviously where all the magic happens.
  H) the proof of each function.

       This is where I'm using LEAN.

       Every function has a proof. That proof can use
       all of the definitions and axioms inherited from
       the "Category", "Representation", the "Domain
       Local", and the signature pre- and post-
       conditions.

   I) literature links. Algorithms must contain a link
      to at least one literature reference. Of course,
      since everything I do is a Literate Program
      this is obviously required. Knuth said so :-)
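Points A) through F) can be loosely sketched as a dependent-type
constructor. This is illustrative Python only; `square_matrix_domain`
and its runtime checks are hypothetical stand-ins for the real Domain
machinery and its proofs:

```python
# Sketch: a "Domain" box built as a dependent type. The size n is the
# dependent parameter; the squareness check stands in for the local
# axiom that function proofs may assume. (Illustrative names only.)

def square_matrix_domain(n):
    class SquareMatrix:
        size = n                          # dependent parameter (point D)

        def __init__(self, rows):
            # local axiom: the matrix is always n x n (point C)
            assert len(rows) == n and all(len(r) == n for r in rows)
            self.rows = rows

        def trace(self):
            # only exposed because squareness is guaranteed (point F)
            return sum(self.rows[i][i] for i in range(n))
    return SquareMatrix

M2 = square_matrix_domain(2)              # a 2x2 matrix domain
m = M2([[1, 2], [3, 4]])
print(m.trace())                          # 5
```

A function proof inside this box may assume `size == n` and squareness
without re-establishing them, which is the point of local definitions
and axioms.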


LEAN ought to have "books" or "pamphlets" that
bring together all of this information for a domain
such as Square Matrices. That way a user can
find all of the related ideas, available functions,
and their corresponding proofs in one place.

6) User level presentation.

    This is where the systems can differ significantly.
    Axiom and LEAN both have GCD but they use
    that for different purposes.

    I'm trying to connect LEAN's GCD and Axiom's GCD
    so there is a "computational mathematics" idea that
    allows the user to connect proofs and implementations.
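As a small illustration of the intended bridge, the same gcd can serve
both roles in a proof assistant. A sketch assuming Lean 4's core
`Nat.gcd`:

```lean
-- Computation and proof of the same gcd (Lean 4 sketch).
#eval Nat.gcd 12 8                       -- computes 4
example : Nat.gcd 12 8 = 4 := by decide  -- checks the computation
```

An Axiom-side GCD carrying a proof like this is the sort of
"computational mathematics" connection intended.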

7) Trust

Unlike everything else, computational mathematics
can have proven code that gives various guarantees.

I have been working on this aspect for a while.
I refer to it as trust "down to the metal". The idea is
that a proof of the GCD function and the implementation
of the GCD function get packaged into the ELF format.
(proof carrying code). When the GCD algorithm executes
on the CPU, the GCD proof is run through the LEAN
proof checker on an FPGA in parallel.

(I just recently got a PYNQ Xilinx board [1] with a CPU
and FPGA together. I'm trying to implement the LEAN
proof checker on the FPGA).

We are on the cusp of a revolution in computational
mathematics. But the two pillars (proof and computer
algebra) need to get to know each other.

Tim



[0] Lamport, Leslie "Chapter on TLA+"
in "Software Specification Methods"
https://www.springer.com/gp/book/9781852333539
(I no longer have CMU library access or I'd send you
the book PDF)

[1] https://www.tul.com.tw/productspynq-z2.html

[2] https://www.youtube.com/watch?v=dCuZkaaou0Q

[3] "ARTIFICIAL INTELLIGENCE MARKUP LANGUAGE"
https://arxiv.org/pdf/1307.3091.pdf

[4] ALICE Chatbot
http://www.scielo.org.mx/pdf/cys/v19n4/1405-5546-cys-19-04-00625.pdf

[5] OPS5 User Manual
https://kilthub.cmu.edu/articles/journal_contribution/OPS5_user_s_manual/6608090/1

[6] Scott Fahlman "SCONE"
http://www.cs.cmu.edu/~sef/scone/

On 9/27/21, Tim Daly <axiomcas@gmail.com> wrote:
> I have tried to maintain a list of names of people who have
> helped Axiom, going all the way back to the pre-Scratchpad
> days. The names are listed at the beginning of each book.
> I also maintain a bibliography of publications I've read or
> that have had an indirect influence on Axiom.
>
> Credit is "the coin of the realm". It is easy to share and wrong
> to ignore. It is especially damaging to those in Academia who
> are affected by credit and citations in publications.
>
> Apparently I'm not the only person who feels that way. The ACM
> Turing award seems to have ignored a lot of work:
>
> Scientific Integrity, the 2021 Turing Lecture, and the 2018 Turing
> Award for Deep Learning
> https://people.idsia.ch/~juergen/scientific-integrity-turing-award-deep-learning.html
>
> I worked on an AI problem at IBM Research called Ketazolam
> (https://en.wikipedia.org/wiki/Ketazolam). The idea was to recognize
> and associate 3D chemical drawings with their drug counterparts.
> I used Rumelhart and McClelland's books. These books contained
> quite a few ideas that seem to be "new and innovative" among the
> machine learning crowd... but the books are from 1987. I don't believe
> I've seen these books mentioned in any recent bibliography.
> https://mitpress.mit.edu/books/parallel-distributed-processing-volume-1
>
>
>
>
> On 9/27/21, Tim Daly <axiomcas@gmail.com> wrote:
>> Greg Wilson asked "How Reliable is Scientific Software?"
>> https://neverworkintheory.org/2021/09/25/how-reliable-is-scientific-software.html
>>
>> which is a really interesting read. For example:
>>
>> [Hatton1994], is now a quarter of a century old, but its conclusions
>> are still fresh. The authors fed the same data into nine commercial
>> geophysical software packages and compared the results; they found
>> that, "numerical disagreement grows at around the rate of 1% in
>> average absolute difference per 4000 lines of implemented code, and,
>> even worse, the nature of the disagreement is nonrandom" (i.e., the
>> authors of different packages make similar mistakes).
>>
>>
>> On 9/26/21, Tim Daly <axiomcas@gmail.com> wrote:
>>> I should note that the latest board I've just unboxed
>>> (a PYNQ-Z2) is a Zynq Z-7020 chip from Xilinx (AMD).
>>>
>>> What makes it interesting is that it contains 2 hard
>>> core processors and an FPGA, connected by 9 paths
>>> for communication. The processors can be run
>>> independently so there is the possibility of a parallel
>>> version of some Axiom algorithms (assuming I had
>>> the time, which I don't).
>>>
>>> Previously either the hard (physical) processor was
>>> separate from the FPGA with minimal communication
>>> or the soft core processor had to be created in the FPGA
>>> and was much slower.
>>>
>>> Now the two have been combined in a single chip.
>>> That means that my effort to run a proof checker on
>>> the FPGA and the algorithm on the CPU just got to
>>> the point where coordination is much easier.
>>>
>>> Now all I have to do is figure out how to program this
>>> beast.
>>>
>>> There is no such thing as a simple job.
>>>
>>> Tim
>>>
>>>
>>> On 9/26/21, Tim Daly <axiomcas@gmail.com> wrote:
>>>> I'm familiar with most of the traditional approaches
>>>> like Theorema. The bibliography contains most of the
>>>> more interesting sources. [0]
>>>>
>>>> There is a difference between traditional approaches to
>>>> connecting computer algebra and proofs and my approach.
>>>>
>>>> Proving an algorithm, like the GCD, in Axiom is hard.
>>>> There are many GCDs (e.g. NNI vs POLY) and there
>>>> are theorems and proofs passed at runtime in the
>>>> arguments of the newly constructed domains. This
>>>> involves a lot of dependent type theory and issues of
>>>> compile time / runtime argument evaluation. The issues
>>>> that arise are difficult and still being debated in the type
>>>> theory community.
>>>>
>>>> I am putting the definitions, theorems, and proofs (DTP)
>>>> directly into the category/domain hierarchy. Each category
>>>> will have the DTP specific to it. That way a commutative
>>>> domain will inherit a commutative theorem and a
>>>> non-commutative domain will not.
>>>>
>>>> Each domain will have additional DTPs associated with
>>>> the domain (e.g. NNI vs Integer) as well as any DTPs
>>>> it inherits from the category hierarchy. Functions in the
>>>> domain will have associated DTPs.
>>>>
>>>> A function to be proven will then inherit all of the relevant
>>>> DTPs. The proof will be attached to the function and
>>>> both will be sent to the hardware (proof-carrying code).
>>>>
>>>> The proof checker, running on a field programmable
>>>> gate array (FPGA), will be checked at runtime in
>>>> parallel with the algorithm running on the CPU
>>>> (aka "trust down to the metal"). (Note that Intel
>>>> and AMD have built CPU/FPGA combined chips,
>>>> currently only available in the cloud.)
>>>>
>>>>
>>>>
>>>> I am (slowly) making progress on the research.
>>>>
>>>> I have the hardware and nearly have the proof
>>>> checker from LEAN running on my FPGA.
>>>>
>>>> I'm in the process of spreading the DTPs from
>>>> LEAN across the category/domain hierarchy.
>>>>
>>>> The current Axiom build extracts all of the functions
>>>> but does not yet have the DTPs.
>>>>
>>>> I have to restructure the system, including the compiler
>>>> and interpreter to parse and inherit the DTPs. I
>>>> have some of that code but only some of the code
>>>> has been pushed to the repository (volume 15) but
>>>> that is rather trivial, out of date, and incomplete.
>>>>
>>>> I'm clearly not smart enough to prove the Risch
>>>> algorithm and its associated machinery but the needed
>>>> definitions and theorems will be available to someone
>>>> who wants to try.
>>>>
>>>> [0] https://github.com/daly/PDFS/blob/master/bookvolbib.pdf
>>>>
>>>>
>>>> On 8/19/21, Tim Daly <axiomcas@gmail.com> wrote:
>>>>> ==========================================
>>>>>
>>>>> REVIEW (Axiom on WSL2 Windows)
>>>>>
>>>>>
>>>>> So the steps to run Axiom from a Windows desktop
>>>>>
>>>>> 1 Windows) install XMing on Windows for X11 server
>>>>>
>>>>> http://www.straightrunning.com/XmingNote= s/
>>>>>
>>>>> 2 WSL2) Install Axiom in WSL2
>>>>>
>>>>> sudo apt install axiom
>>>>>
>>>>> 3 WSL2) modify /usr/bin/axiom to fix the bug:
>>>>> (someone changed the axiom startup script.
>>>>> It won't work on WSL2. I don't know who or
>>>>> how to get it fixed).
>>>>>
>>>>> sudo emacs /usr/bin/axiom
>>>>>
>>>>> (split the line into 3 and add quote marks)
>>>>>
>>>>> export SPADDEFAULT=/usr/local/axiom/mnt/linux
>>>>> export AXIOM=/usr/lib/axiom-20170501
>>>>> export "PATH=/usr/lib/axiom-20170501/bin:$PATH"
>>>>>
>>>>> 4 WSL2) create a .axiom.input file to include startup cmds:
>>>>>
>>>>> emacs .axiom.input
>>>>>
>>>>> )cd "/mnt/c/yourpath"
>>>>> )sys pwd
>>>>>
>>>>> 5 WSL2) create a "myaxiom" command that sets the
>>>>>     DISPLAY variable and starts axiom
>>>>>
>>>>> emacs myaxiom
>>>>>
>>>>> #! /bin/bash
>>>>> export DISPLAY=:0.0
>>>>> axiom
>>>>>
>>>>> 6 WSL2) put it in the /usr/bin directory
>>>>>
>>>>> chmod +x myaxiom
>>>>> sudo cp myaxiom /usr/bin/myaxiom
>>>>>
>>>>> 7 WINDOWS) start the X11 server
>>>>>
>>>>> (XMing XLaunch Icon on your desktop)
>>>>>
>>>>> 8 WINDOWS) run myaxiom from PowerShell
>>>>> (this should start axiom with graphics available)
>>>>>
>>>>> wsl myaxiom
>>>>>
>>>>> 9 WINDOWS) make a PowerShell desktop
>>>>>
>>>>> https://superuser.com/questions/886951/run-powershell-script-when-you-open-powershell
>>>>>
>>>>> Tim
>>>>>
>>>>> On 8/13/21, Tim Daly <axiomcas@gmail.com> wrote:
>>>>>> A great deal of thought is directed toward making the SANE version
>>>>>> of Axiom as flexible as possible, decoupling mechanism from theory.
>>>>>>
>>>>>> An interesting publication by Brian Cantwell Smith [0], "Reflection
>>>>>> and Semantics in LISP" seems to contain interesting ideas related
>>>>>> to our goal. Of particular interest is the ability to reason about
>>>>>> and perform self-referential manipulations. In a dependently-typed
>>>>>> system it seems interesting to be able to "adapt" code to handle
>>>>>> run-time computed arguments to dependent functions. The abstract:
>>>>>>
>>>>>>    "We show how a computational system can be constructed to
>>>>>> "reason", effectively and consequentially, about its own
>>>>>> inferential processes. The analysis proceeds in two parts.
>>>>>> First, we consider the general question of computational
>>>>>> semantics, rejecting traditional approaches, and arguing that
>>>>>> the declarative and procedural aspects of computational symbols
>>>>>> (what they stand for, and what behaviour they engender) should
>>>>>> be analysed independently, in order that they may be coherently
>>>>>> related. Second, we investigate self-referential behavior in
>>>>>> computational processes, and show how to embed an effective
>>>>>> procedural model of a computational calculus within that
>>>>>> calculus (a model not unlike a meta-circular interpreter, but
>>>>>> connected to the fundamental operations of the machine in such
>>>>>> a way as to provide, at any point in a computation, fully
>>>>>> articulated descriptions of the state of that computation, for
>>>>>> inspection and possible modification). In terms of the theories
>>>>>> that result from these investigations, we present a general
>>>>>> architecture for procedurally reflective processes, able to
>>>>>> shift smoothly between dealing with a given subject domain, and
>>>>>> dealing with their own reasoning processes over that domain.
>>>>>>
>>>>>>    An instance of the general solution is worked out in the
>>>>>> context of an applicative language. Specifically, we present
>>>>>> three successive dialects of LISP: 1-LISP, a distillation of
>>>>>> current practice, for comparison purposes; 2-LISP, a dialect
>>>>>> constructed in terms of our rationalised semantics, in which
>>>>>> the concept of evaluation is rejected in favour of independent
>>>>>> notions of simplification and reference, and in which the
>>>>>> respective categories of notation, structure, semantics, and
>>>>>> behaviour are strictly aligned; and 3-LISP, an extension of
>>>>>> 2-LISP endowed with reflective powers."
>>>>>>
>>>>>> Axiom SANE builds dependent types on the fly. The ability to
>>>>>> access both the reflection of the tower of algebra and the
>>>>>> reflection of the tower of proofs at the time of construction
>>>>>> makes the construction of a new domain or specific algorithm
>>>>>> easier and more general.
>>>>>>
>>>>>> This is of particular interest because one of the efforts is to
>>>>>> build "all the way down to the metal". If each layer is
>>>>>> constructed on top of previous proven layers and the new layer
>>>>>> can "reach below" to lower layers then the tower of layers can
>>>>>> be built without duplication.
>>>>>>
>>>>>> Tim
>>>>>>
>>>>>> [0] Smith, Brian Cantwell "Reflection and Semantics in LISP"
>>>>>> POPL '84: Proceedings of the 11th ACM SIGACT-SIGPLAN Symposium
>>>>>> on Principles of Programming Languages, January 1984,
>>>>>> pages 23-35. https://doi.org/10.1145/800017.800513
>>>>>>
>>>>>> On 6/29/21, Tim Daly <axiomcas@gmail.com> wrote:
>>>>>>> Having spent time playing with hardware it is perfectly clear
>>>>>>> that future computational mathematics efforts need to adapt
>>>>>>> to using parallel processing.
>>>>>>>
>>>>>>> I've spent a fair bit of time thinking about structuring Axiom
>>>>>>> to be parallel. Most past efforts have tried to focus on
>>>>>>> making a particular algorithm parallel, such as a matrix
>>>>>>> multiply.
>>>>>>>
>>>>>>> But I think that it might be more effective to make each
>>>>>>> domain run in parallel. A computation crosses multiple domains
>>>>>>> so a particular computation could involve multiple parallel
>>>>>>> copies.
>>>>>>>
>>>>>>> For example, computing the Cylindrical Algebraic Decomposition
>>>>>>> could recursively decompose the plane. Indeed, any
>>>>>>> tree-recursive algorithm could be run in parallel "in the
>>>>>>> large" by creating new running copies of the domain for each
>>>>>>> sub-problem.
>>>>>>>
>>>>>>> So the question becomes, how does one manage this?
>>>>>>>
>>>>>>> A similar problem occurs in robotics where one could have
>>>>>>> multiple wheels, arms, propellers, etc. that need to act
>>>>>>> independently but in coordination.
>>>>>>>
>>>>>>> The robot solution uses ROS2. The three ideas are ROSCORE,
>>>>>>> TOPICS with publish/subscribe, and SERVICES with
>>>>>>> request/response. These are communication paths defined
>>>>>>> between processes.
>>>>>>>
>>>>>>> ROS2 has a "roscore" which is basically a phonebook of
>>>>>>> "topics". Any process can create or look up the current active
>>>>>>> topics. e.g.:
>>>>>>>
>>>>>>>    rosnode list
>>>>>>>
>>>>>>> TOPICS:
>>>>>>>
>>>>>>> Any process can PUBLISH a topic (which is basically a typed
>>>>>>> data structure), e.g. the topic /hw with the String data
>>>>>>> "Hello World". e.g.:
>>>>>>>
>>>>>>>    rostopic pub /hw std_msgs/String "Hello, World"
>>>>>>>
>>>>>>> Any process can SUBSCRIBE to a topic, such as /hw, and get a
>>>>>>> copy of the data. e.g.:
>>>>>>>
>>>>>>>    rostopic echo /hw   ==> "Hello, World"
>>>>>>>
>>>>>>> Publishers talk, subscribers listen.
>>>>>>>
>>>>>>>
>>>>>>> SERVICES:
>>>>>>>
>>>>>>> Any process can make a REQUEST of a SERVICE and get a
>>>>>>> RESPONSE. This is basically a remote function call.
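The roscore/topic/service pattern described above can be mimicked
in-process. A toy Python sketch, not ROS code; all names are
illustrative:

```python
# Toy sketch of the ROS2-style pattern: a "roscore"-like registry of
# topics, publish/subscribe for data, and a service for
# request/response. (Illustrative only, not ROS code.)
import math

topics = {}      # topic name -> list of subscriber callbacks
services = {}    # service name -> handler function

def subscribe(topic, callback):
    topics.setdefault(topic, []).append(callback)

def publish(topic, message):
    for cb in topics.get(topic, []):     # publishers talk, subscribers listen
        cb(message)

def advertise(name, handler):
    services[name] = handler

def request(name, *args):
    return services[name](*args)         # basically a remote function call

received = []
subscribe("/hw", received.append)
publish("/hw", "Hello, World")

advertise("gcd", lambda a, b: math.gcd(a, b))
print(received, request("gcd", 12, 8))   # ['Hello, World'] 4
```

A domain-per-process version would put the registry on the network, but
the shape of the protocol is the same.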
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> Axiom in parallel?
>>>>>>>
>>>>>>> So domains could run, each in its own process. It could
>>>>>>> provide services, one for each function. Any other process
>>>>>>> could request a computation and get the result as a response.
>>>>>>> Domains could request services from other domains, either
>>>>>>> waiting for responses or continuing while the response is
>>>>>>> being computed.
>>>>>>>
>>>>>>> The output could be sent anywhere, to a terminal, to a
>>>>>>> browser, to a network, or to another process using the
>>>>>>> publish/subscribe protocol, potentially all at the same time
>>>>>>> since there can be many subscribers to a topic.
>>>>>>>
>>>>>>> Available domains could be dynamically added by announcing
>>>>>>> themselves as new "topics" and could be dynamically looked up
>>>>>>> at runtime.
>>>>>>>
>>>>>>> This structure allows function-level / domain-level
>>>>>>> parallelism. It is very effective in the robot world and I
>>>>>>> think it might be a good structuring mechanism to allow
>>>>>>> computational mathematics to take advantage of multiple
>>>>>>> processors in a disciplined fashion.
>>>>>>>
>>>>>>> Axiom has a thousand domains and each could run on its own
>>>>>>> core.
>>>>>>>
>>>>>>> In addition, notice that each domain is independent of the
>>>>>>> others. So if we want to use BLAS Fortran code, it could just
>>>>>>> be another service node. In fact, any "foreign function" could
>>>>>>> transparently cooperate in a distributed Axiom.
>>>>>>>
>>>>>>> Another key feature is that proofs can be "by node".
>>>>>>>
>>>>>>> Tim
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On 6/5/21, Tim Daly <axiomcas@gmail.com> wrote:
>>>>>>>> Axiom is based on first-class dependent types. Deciding when
>>>>>>>> two types are equivalent may involve computation. See
>>>>>>>> Christiansen, David Thrane "Checking Dependent Types with
>>>>>>>> Normalization by Evaluation" (2019)
>>>>>>>>
>>>>>>>> This puts an interesting constraint on building types. The
>>>>>>>> constructed type has to export a function to decide if a
>>>>>>>> given type is "equivalent" to itself.
>>>>>>>>
>>>>>>>> The notion of "equivalence" might involve category ideas
>>>>>>>> of natural transformation and univalence. Sigh.
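The constraint can be sketched as a type constructor that exports its
own equivalence test; deciding equivalence may itself require
computation. Illustrative Python, with hypothetical names:

```python
# Sketch: a constructed type carries its own "equivalent?" function.
# (Names are illustrative; real dependent types need far more care.)

def vector_type(length):
    return {"ctor": "Vector", "len": length,
            "equiv": lambda other: (other.get("ctor") == "Vector"
                                    and other.get("len") == length)}

v1 = vector_type(2 + 2)       # the length may be a computed value
v2 = vector_type(4)
print(v1["equiv"](v2))        # True: deciding equality took computation
```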
>>>>>>>>
>>>>>>>> That's an interesting design point.
>>>>>>>>
>>>>>>>> Tim
>>>>>>>>
>>>>>>>>
>>>>>>>> On 5/5/21, Tim Daly <axiomcas@gmail.com> wrote:
>>>>>>>>> It is interesting that programmer's eyes and expectations
>>>>>>>>> adapt to the tools they use. For instance, I use emacs and
>>>>>>>>> expect to work directly in files and multiple buffers. When
>>>>>>>>> I try to use one of the many IDE tools I find they tend to
>>>>>>>>> "get in the way". I already know or can quickly find
>>>>>>>>> whatever they try to tell me. If you use an IDE you probably
>>>>>>>>> find emacs "too sparse" for programming.
>>>>>>>>>
>>>>>>>>> Recently I've been working in a sparse programming
>>>>>>>>> environment. I'm exploring the question of running a proof
>>>>>>>>> checker in an FPGA. The FPGA development tools are painful
>>>>>>>>> at best and not intuitive since you SEEM to be programming
>>>>>>>>> but you're actually describing hardware gates, connections,
>>>>>>>>> and timing. This is an environment where everything happens
>>>>>>>>> all-at-once and all-the-time (like the circuits in your
>>>>>>>>> computer). It is the "assembly language of circuits".
>>>>>>>>> Naturally, my eyes have adapted to this rather raw level.
>>>>>>>>>
>>>>>>>>> That said, I'm normally doing literate programming all the
>>>>>>>>> time. My typical file is a document which is a mixture of
>>>>>>>>> latex and lisp. It is something of a shock to return to that
>>>>>>>>> world. It is clear why people who program in Python find
>>>>>>>>> lisp to be a "sea of parens". Yet as a lisp programmer, I
>>>>>>>>> don't even see the parens, just code.
>>>>>>>>>
>>>>>>>>> It takes a few minutes in a literate document to adapt
>>>>>>>>> vision to see the latex / lisp combination as natural. The
>>>>>>>>> latex markup, like the lisp parens, eventually just
>>>>>>>>> disappears. What remains is just lisp and natural language
>>>>>>>>> text.
>>>>>>>>>
>>>>>>>>> This seems painful at first but eyes quickly adapt. The
>>>>>>>>> upside is that there is always a "finished" document that
>>>>>>>>> describes the state of the code. The overhead of writing a
>>>>>>>>> paragraph to describe a new function or change a paragraph
>>>>>>>>> to describe the changed function is very small.
>>>>>>>>>
>>>>>>>>> Using a Makefile I latex the document to generate a current
>>>>>>>>> PDF and then I extract, load, and execute the code. This
>>>>>>>>> loop catches errors in both the latex and the source code.
>>>>>>>>> Keeping an open file in my pdf viewer shows all of the
>>>>>>>>> changes in the document after every run of make. That way I
>>>>>>>>> can edit the book as easily as the code.
>>>>>>>>>
>>>>>>>>> Ultimately I find that writing the book while writing the
>>>>>>>>> code is more productive. I don't have to remember why I
>>>>>>>>> wrote something since the explanation is already there.
>>>>>>>>>
>>>>>>>>> We all have our own way of programming and our own tools.
>>>>>>>>> But I find literate programming to be a real advance over
>>>>>>>>> IDE style programming and "raw code" programming.
>>>>>>>>>
>>>>>>>>> Tim
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On 2/27/21, Tim Daly <axiomcas@gmail.com> wrote:
>>>>>>>>>> The systems I use have the interesting property of
>>>>>>>>>> "Living within the compiler".
>>>>>>>>>>
>>>>>>>>>> Lisp, Forth, Emacs, and other systems that present
>>>>>>>>>> themselves through the Read-Eval-Print-Loop (REPL) allow
>>>>>>>>>> the ability to deeply interact with the system, shaping it
>>>>>>>>>> to your need.
>>>>>>>>>>
>>>>>>>>>> My current thread of study is software architecture. See
>>>>>>>>>> https://www.youtube.com/watch?v=W2hagw1VhhI&feature=youtu.be
>>>>>>>>>> and https://www.georgefairbanks.com/videos/
>>>>>>>>>>
>>>>>>>>>> My current thinking on SANE involves the ability to
>>>>>>>>>> dynamically define categories, representations, and
>>>>>>>>>> functions along with "composition functions" that permit
>>>>>>>>>> choosing a combination at the time of use.
>>>>>>>>>>
>>>>>>>>>> You might want a domain for handling polynomials. There are
>>>>>>>>>> a lot of choices, depending on your use case. You might
>>>>>>>>>> want different representations. For example, you might want
>>>>>>>>>> dense, sparse, recursive, or "machine compatible fixnums"
>>>>>>>>>> (e.g. to interface with C code). If these don't exist it
>>>>>>>>>> ought to be possible to create them. Such "lego-like"
>>>>>>>>>> building blocks require careful thought about creating
>>>>>>>>>> "fully factored" objects.
>>>>>>>>>>
>>>>>>>>>> Given that goal, the traditional barrier of "compiler" vs
>>>>>>>>>> "interpreter" does not seem useful. It is better to "live
>>>>>>>>>> within the compiler" which gives the ability to define new
>>>>>>>>>> things "on the fly".
>>>>>>>>>>
>>>>>>>>>> Of course, the SANE compiler is going to want an associated
>>>>>>>>>> proof of the functions you create along with the other
>>>>>>>>>> parts such as its category hierarchy and representation
>>>>>>>>>> properties.
>>>>>>>>>>
>>>>>>>>>> There is no such thing as a simple job. :-)
>>>>>>>>>>
>>>>>>>>>> Tim
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On 2/18/21, Tim Daly <axiomcas@gmail.com> wrote:
>>>>>>>>>>> The Axiom SANE compiler / interpreter has a few design
>>>>>>>>>>> points.
>>>>>>>>>>>
>>>>>>>>>>> 1) It needs to mix interpreted and compiled code in the
>>>>>>>>>>> same function. SANE allows dynamic construction of code
>>>>>>>>>>> as well as dynamic type construction at runtime. Both of
>>>>>>>>>>> these can occur in a runtime object. So there is
>>>>>>>>>>> potentially a mixture of interpreted and compiled code.
>>>>>>>>>>>
>>>>>>>>>>> 2) It needs to perform type resolution at compile time
>>>>>>>>>>> without overhead where possible. Since this is not always
>>>>>>>>>>> possible there needs to be a "prefix thunk" that will
>>>>>>>>>>> perform the resolution. Trivially, for example, if we
>>>>>>>>>>> have a + function we need to type-resolve the arguments.
>>>>>>>>>>>
>>>>>>>>>>> However, if we can prove at compile time that the types
>>>>>>>>>>> are both bounded-NNI and the result is bounded-NNI (i.e.
>>>>>>>>>>> fixnum in lisp) then we can inline a call to + at
>>>>>>>>>>> runtime. If not, we might have + applied to NNI and
>>>>>>>>>>> POLY(FLOAT), which requires a thunk to resolve types. The
>>>>>>>>>>> thunk could even "specialize and compile" the code before
>>>>>>>>>>> executing it.
>>>>>>>>>>>
>>>>>>>>>>> It turns out that the Forth implementation of
>>>>>>>>>>> "threaded-interpreted" languages model provides an
>>>>>>>>>>> efficient and effective way to do this.[0] Type
>>>>>>>>>>> resolution can be "inserted" in intermediate thunks. The
>>>>>>>>>>> model also supports dynamic overloading and tail
>>>>>>>>>>> recursion.
>>>>>>>>>>>
>>>>>>>>>>> Combining high-level CLOS code with low-level threading
>>>>>>>>>>> gives an easy to understand and robust design.
>>>>>>>>>>>
>>>>>>>>>>> Tim
>>>>>>>>>>>
>>>>>>>>>>> [0] Loeliger, R.G. "Threaded Interpretive Languages" (1981)
>>>>>>>>>>> ISBN 0-07-038360-X
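The "prefix thunk" idea can be sketched as follows. Illustrative Python,
not the SANE design; `add_ints` and `add_general` are hypothetical
stand-ins for the inlined and fully dispatched paths:

```python
# Sketch of a "prefix thunk": the first call resolves argument types
# and installs a specialized implementation; later calls skip the
# resolution overhead entirely.

def add_ints(a, b):
    return a + b          # stands in for the inlined fixnum path

def add_general(a, b):
    return a + b          # stands in for fully dispatched addition

def make_plus():
    def resolve(a, b):
        nonlocal impl     # specialize: replace the thunk itself
        impl = (add_ints if isinstance(a, int) and isinstance(b, int)
                else add_general)
        return impl(a, b)
    impl = resolve
    def plus(a, b):
        return impl(a, b)
    return plus

plus = make_plus()
print(plus(2, 3))         # first call resolves types and specializes: 5
print(plus(4, 5))         # later calls go straight to add_ints: 9
```

The "specialize and compile" variant would generate code in `resolve`
instead of merely choosing a function.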
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On 2/5/21, Tim Daly <axiomcas@gmail.com> w= rote:
>>>>>>>>>>>> I've worked hard to make Ax= iom depend on almost no other
>>>>>>>>>>>> tools so that it would not g= et caught by "code rot" of
>>>>>>>>>>>> libraries.
>>>>>>>>>>>>
>>>>>>>>>>>> However, I'm also trying to= make the new SANE version much
>>>>>>>>>>>> easier to understand and de= bug.To that end I've been
>>>>>>>>>>>> experimenting
>>>>>>>>>>>> with some ideas.
>>>>>>>>>>>>
>>>>>>>>>>>> It should be possible to vi= ew source code, of course. But the
>>>>>>>>>>>> source
>>>>>>>>>>>> code is not the only, nor p= ossibly the best, representation of
>>>>>>>>>>>> the
>>>>>>>>>>>> ideas.
>>>>>>>>>>>> In particular, source code g= ets compiled into data structures.
>>>>>>>>>>>> In
>>>>>>>>>>>> Axiom
>>>>>>>>>>>> these data structures reall= y are a graph of related structures.
>>>>>>>>>>>>
>>>>>>>>>>>> For example, looking at the= gcd function from NNI, there is the
>>>>>>>>>>>> representation of the gcd f= unction itself. But there is also a
>>>>>>>>>>>> structure
>>>>>>>>>>>> that is the REP (and, in th= e new system, is separate from the
>>>>>>>>>>>> domain).
>>>>>>>>>>>>
>>>>>>>>>>>> Further, there are associat= ed specification and proof
>>>>>>>>>>>> structures.
>>>>>>>>>>>> Even
>>>>>>>>>>>> further, the domain inherit= s the category structures, and from
>>>>>>>>>>>> those
>>>>>>>>>>>> it
>>>>>>>>>>>> inherits logical axioms and= definitions through the proof
>>>>>>>>>>>> structure.
>>>>>>>>>>>>
>>>>>>>>>>>> Clearly the gcd function is= a node in a much larger graph
>>>>>>>>>>>> structure.
>>>>>>>>>>>>
>>>>>>>>>>>> When trying to decide why c= ode won't compile it would be useful
>>>>>>>>>>>> to
>>>>>>>>>>>> be able to see and walk the= se structures. I've thought about
>>>>>>>>>>>> using
>>>>>>>>>>>> the
>>>>>>>>>>>> browser but browsers are to= o weak. Either everything has to be
>>>>>>>>>>>> "in
>>>>>>>>>>>> a
>>>>>>>>>>>> single tab to show the grap= h" or "the nodes of the graph are in
>>>>>>>>>>>> different
>>>>>>>>>>>> tabs". Plus, constructing d= ynamic graphs that change as the
>>>>>>>>>>>> software
>>>>>>>>>>>> changes (e.g. by loading a n= ew spad file or creating a new
>>>>>>>>>>>> function)
>>>>>>>>>>>> represents the huge problem= of keeping the browser "in sync
>>>>>>>>>>>> with
>>>>>>>>>>>> the
>>>>>>>>>>>> Axiom workspace". So someth= ing more dynamic and embedded is
>>>>>>>>>>>> needed.
>>>>>>>>>>>>
>>>>>>>>>>>> Axiom source gets compiled into CLOS data structures. Each of
>>>>>>>>>>>> these new SANE structures has an associated surface
>>>>>>>>>>>> representation, so they can be presented in user-friendly form.
>>>>>>>>>>>>
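[Editor's sketch, not part of the original mail: one minimal way to read "each CLOS structure has an associated surface representation" is a generic function specialized per structure class. The class and function names below are hypothetical, chosen only to illustrate the pattern; they are not the actual SANE definitions.]

```lisp
;; Hypothetical sketch: a compiled workspace entry as a CLOS instance
;; that knows how to render itself in user-friendly surface form.
(defclass sane-function ()
  ((name      :initarg :name      :reader fn-name)
   (signature :initarg :signature :reader fn-signature)))

(defgeneric surface-representation (node)
  (:documentation
   "Render a workspace node in user-friendly surface form."))

(defmethod surface-representation ((node sane-function))
  (format nil "~a : ~a" (fn-name node) (fn-signature node)))

;; Example:
(surface-representation
 (make-instance 'sane-function
                :name "gcd"
                :signature "(NNI, NNI) -> NNI"))
;; => "gcd : (NNI, NNI) -> NNI"
```

The benefit of a generic function here is that each new structure class (domain, category, proof, ...) adds its own method, so a single display tool can show any node of the graph.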
>>>>>>>>>>>> Also, since Axiom is literate software, it should be possible
>>>>>>>>>>>> to look at the code in its literate form with the surrounding
>>>>>>>>>>>> explanation.
>>>>>>>>>>>>
>>>>>>>>>>>> Essentially we'd like to have the ability to "deep dive" into
>>>>>>>>>>>> the Axiom workspace, not only for debugging, but also for
>>>>>>>>>>>> understanding what functions are used, where they come from,
>>>>>>>>>>>> what they inherit, and how they are used in a computation.
>>>>>>>>>>>>
>>>>>>>>>>>> To that end I'm looking at using McClim, a lisp windowing
>>>>>>>>>>>> system. Since the McClim windows would be part of the lisp
>>>>>>>>>>>> image, they have access to display (and modify) the Axiom
>>>>>>>>>>>> workspace at all times.
>>>>>>>>>>>>
>>>>>>>>>>>> The only hesitation is that McClim uses quicklisp and drags in
>>>>>>>>>>>> a lot of other subsystems. It's all lisp, of course.
>>>>>>>>>>>>
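[Editor's sketch, not part of the original mail: for readers unfamiliar with McCLIM, the "windows inside the lisp image" idea looks roughly like the fragment below. This is a minimal, untested skeleton; `workspace-viewer` is an invented frame name, and only the standard McCLIM entry points (`define-application-frame`, `make-application-frame`, `run-frame-top-level`) are assumed.]

```lisp
;; McCLIM is fetched via Quicklisp, which is the dependency concern
;; mentioned above.
(ql:quickload "mcclim")

;; A bare application frame with one application pane.  Because the
;; frame runs inside the same lisp image as the workspace, its display
;; code can read (and modify) live workspace objects directly.
(clim:define-application-frame workspace-viewer ()
  ()
  (:panes (display :application))
  (:layouts (default display)))

;; Start the frame's event loop in the running image:
;; (clim:run-frame-top-level
;;  (clim:make-application-frame 'workspace-viewer))
```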
>>>>>>>>>>>> These ideas aren't new. They were available on Symbolics
>>>>>>>>>>>> machines, a truly productive platform and one I sorely miss.
>>>>>>>>>>>>
>>>>>>>>>>>> Tim
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> On 1/19/21, Tim Daly <axiomcas@gmail.com> wrote:
>>>>>>>>>>>>> Also of interest is the talk
>>>>>>>>>>>>> "The Unreasonable Effectiveness of Dynamic Typing for
>>>>>>>>>>>>> Practical Programs"
>>>>>>>>>>>>> https://vimeo.com/74354480
>>>>>>>>>>>>> which questions whether static typing really has any benefit.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Tim
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> On 1/19/21, Tim Daly <axiomcas@gmail.com> wrote:
>>>>>>>>>>>>>> Peter Naur wrote an article of interest:
>>>>>>>>>>>>>> http://pages.cs.wisc.edu/~remzi/Naur.pdf
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> In particular, it mirrors my notion that Axiom needs
>>>>>>>>>>>>>> to embrace literate programming so that the "theory
>>>>>>>>>>>>>> of the problem" is presented as well as the "theory
>>>>>>>>>>>>>> of the solution". I quote the introduction:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> This article is, to my mind, the most accurate account
>>>>>>>>>>>>>> of what goes on in designing and coding a program.
>>>>>>>>>>>>>> I refer to it regularly when discussing how much
>>>>>>>>>>>>>> documentation to create, how to pass along tacit
>>>>>>>>>>>>>> knowledge, and the value of XP's metaphor-setting
>>>>>>>>>>>>>> exercise. It also provides a way to examine a
>>>>>>>>>>>>>> methodology's economic structure.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> In the article, which follows, note that the quality of the
>>>>>>>>>>>>>> designing programmer's work is related to the quality of
>>>>>>>>>>>>>> the match between his theory of the problem and his theory
>>>>>>>>>>>>>> of the solution. Note that the quality of a later
>>>>>>>>>>>>>> programmer's work is related to the match between his
>>>>>>>>>>>>>> theories and the previous programmer's theories.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Using Naur's ideas, the designer's job is not to pass along
>>>>>>>>>>>>>> "the design" but to pass along "the theories" driving the
>>>>>>>>>>>>>> design. The latter goal is more useful and more appropriate.
>>>>>>>>>>>>>> It also highlights that knowledge of the theory is tacit in
>>>>>>>>>>>>>> the owning, and so passing along the theory requires passing
>>>>>>>>>>>>>> along both explicit and tacit knowledge.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Tim
>>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>