Re: [Axiom-developer] [EXTERNAL] Re: Axiom's Sane redesign musings

From: Tim Daly
Subject: Re: [Axiom-developer] [EXTERNAL] Re: Axiom's Sane redesign musings
Date: Sat, 29 Jun 2019 21:17:04 -0400

One major Sane design decision is the use of CLOS,
the Common Lisp Object System.

First, since each CLOS object is a type, it is possible to
create strong types everywhere. This helps with the ultimate
need to typecheck both the compiler and the generated code.
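As a small sketch of what "each CLOS object is a type" means in
practice (INTEGER-DOMAIN is an illustrative name, not an actual Sane
class):

```lisp
;; Sketch only: every DEFCLASS introduces a new type that TYPEP and
;; CHECK-TYPE recognize, so strong typing falls out of CLOS itself.
(defclass integer-domain ()
  ((value :initarg :value :reader domain-value :type integer)))

(defun strong-succ (x)
  (check-type x integer-domain)   ; runtime enforcement of the class type
  (make-instance 'integer-domain :value (1+ (domain-value x))))

;; (typep (make-instance 'integer-domain :value 0) 'integer-domain) => T
;; (strong-succ 7) signals a TYPE-ERROR
```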

Second, CLOS is an integral part of common lisp. One of
the Sane design goals is to make it possible to use Axiom's
domains in ordinary lisp programs. Since Axiom is nothing
more than a domain-specific language on common lisp, it
makes sense to construct it so users can freely intermix
polynomials with non-algebraic code.
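A hypothetical sketch of that intended style (POLYNOMIAL, COEFFS, and
MULTIPLY are assumed names, not the actual Sane API):

```lisp
;; Hypothetical: a polynomial domain usable from plain lisp.
;; Coefficients are stored low degree first; lists assumed nonempty.
(defclass polynomial ()
  ((coeffs :initarg :coeffs :reader coeffs)))

(defgeneric multiply (p q))

(defmethod multiply ((p polynomial) (q polynomial))
  ;; naive convolution of the two coefficient lists
  (let* ((a (coeffs p)) (b (coeffs q))
         (n (+ (length a) (length b) -1))
         (c (make-list n :initial-element 0)))
    (loop for ai in a for i from 0 do
      (loop for bj in b for j from 0 do
        (incf (nth (+ i j) c) (* ai bj))))
    (make-instance 'polynomial :coeffs c)))

;; ...called directly from ordinary lisp code:
;; (x + 1) * (x + 1) => x^2 + 2x + 1
(coeffs (multiply (make-instance 'polynomial :coeffs '(1 1))
                  (make-instance 'polynomial :coeffs '(1 1))))
```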

Third, CLOS is designed for large-program development,
hiding most of the implementation details and exposing
a well-defined API. This will make future maintenance and
documentation of Axiom easier, contributing to its
intended long life.
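One way CLOS and the package system support that hiding (the package
and function names here are illustrative, not the Sane API):

```lisp
;; Sketch: only the exported symbols form the API; slot names and
;; helpers stay internal to the package.
(defpackage :sane-ring
  (:use :common-lisp)
  (:export #:ring-element #:add #:val-of))

(in-package :sane-ring)

(defclass ring-element ()
  ((val :initarg :val)))               ; slot VAL is not exported

(defgeneric add (a b)
  (:documentation "Well-defined API operation on ring elements."))

(defmethod add ((a ring-element) (b ring-element))
  (make-instance 'ring-element
                 :val (+ (slot-value a 'val) (slot-value b 'val))))

(defgeneric val-of (a))
(defmethod val-of ((a ring-element)) (slot-value a 'val))
```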

So for a traditional Axiom user nothing will seem to have
changed. But future users will find it easy to compute
an integral in the middle of a regular program.


On 6/27/19, Tim Daly <address@hidden> wrote:
> Another thought....
> There has been a "step change" in computer science in the last few years.
> Guy Steele did a survey of the use of logic notation in conference papers.
> More than 50% of the papers in some conferences use logic notation
> (from one of the many logics).
> CMU teaches its CS courses based on, and requiring the use of,
> logic and the associated notation. My college mathematics covered
> truth tables; the graduate course covered Karnaugh maps.
> Reading current papers, I have found several with multiple
> pages containing nothing but "judgements": pages of Greek notation.
> If you think Axiom's learning curve is steep, you should look at
> Homotopy Type Theory (HoTT).
> I taught a compiler course at Vassar in the previous century but
> the Dragon book didn't cover CIC in any detail and I would not
> have understood it if it did.
> The original Axiom software predates most of the published logic
> papers, at least as they applied to software. Haskell Curry wrote
> from the logic side in 1934 and William Howard published in 1969
> but I only heard of the Curry-Howard correspondence in 1998.
> Writing a compiler these days requires the use of this approach.
> In all, that's a good thing as it makes it clear how to handle types
> and how to construct software that is marginally more correct.
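The Curry-Howard reading mentioned above fits in a few lines of Lean
(a sketch for illustration, not Sane code): a proof of an implication
is literally a function on evidence.

```lean
-- Curry-Howard in miniature: the proof of "p implies p"
-- is the identity function on evidence for p.
theorem self_implies (p : Prop) : p → p :=
  fun h => h

-- A proof of "p and q implies q and p" is just a pair swap.
theorem and_swap (p q : Prop) : p ∧ q → q ∧ p :=
  fun ⟨hp, hq⟩ => ⟨hq, hp⟩
```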
> The new Sane compiler is building on these logic foundations,
> based on the Calculus of Inductive Constructions and dependent
> type theory. The compiler itself is strongly typed, as is the
> language it supports.
> Since type checking with dependent types is not decidable, there will
> always be heuristics at runtime to try to disambiguate types; only now
> we will have to write the code in Greek :-)
> Tim
> On 6/26/19, Tim Daly <address@hidden> wrote:
>> Yes, but even more than that.
>> The goal is to make Axiom a "trusted system" for the whole of
>> computational mathematics.
>> To do this, consider this (not entirely random) collection of systems
>> and facts:
>> 1) Trusted systems are built using the de Bruijn principle. See, for
>> example,
>> The basic idea is to have a trusted kernel that is composed of
>> a small number of logic operations known to be correct. All other
>> operations get "compiled" into these trusted operations. This is
>> how the proof systems operate, by choosing a logic such as CIC
>> and then constructing a CIC kernel.
>> 2) Axiom is really nothing more than a "Domain Specific Language"
>> on top of common lisp. For Axiom to be trusted, the lisp must be
>> trusted, which the whole of common lisp is not.
>> 3) We would like a common lisp that is proven "down to the
>> metal". There is a lisp (not common lisp) kernel called Milawa
>> that has this property. It starts with a proven kernel and builds
>> up layers (currently 11) towards ACL2, a common lisp theorem
>> prover.
>> 4) There is a theorem prover called Matita which appears to be
>> of interest as another "intermediate step". See
>> 5) There is also a newly forming Lean prover and I am trying
>> to follow along and get up to speed on it: See:
>> 6) There is the mistaken belief that common lisp is a typeless
>> language. However, CLOS has defclass, and every defclass
>> constructs a new type. So by careful design it is possible to
>> construct a "fully typed" domain specific language and compiler
>> in common lisp.
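A tiny sketch of that point (NAT-WRAP is an illustrative name):
generic-function dispatch means only values of the declared class are
accepted, so the DSL is effectively typed.

```lisp
;; Sketch: DEFCLASS makes a type; DEFMETHOD dispatches on it, so
;; values outside the type are rejected with NO-APPLICABLE-METHOD.
(defclass nat-wrap ()
  ((n :initarg :n :reader n-of :type (integer 0))))

(defgeneric succ (x)
  (:documentation "Successor, defined only on NAT-WRAP."))

(defmethod succ ((x nat-wrap))
  (make-instance 'nat-wrap :n (1+ (n-of x))))

;; (succ (make-instance 'nat-wrap :n 0)) => a NAT-WRAP holding 1
;; (succ 3) signals NO-APPLICABLE-METHOD
```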
>> So what does this mean for a Sane Axiom system?
>> The game is to build the Sane compiler using CLOS so it is
>> strongly typed. The output of the compiler generates lisp code
>> that conforms to some (possibly higher, Axiom-specific) layer
>> of the Milawa lisp. So compiler output would be in a provably
>> type-safe subset of lisp.
>> In addition, the Sane compiler is being constructed so that
>> the compiler itself can be proven correct. Everything in the
>> compiler is strongly typed as is its output.
>> Ultimately the goal is a proven Sane compiler that accepts
>> a provable Sane language for computer algebra which generates
>> proven computer algebra code running on a lisp that is
>> proven "down to the metal".
>> The end result is a trusted system for computational mathematics.
>> Like the blind philosophers, I can grab any part of this
>> elephantine system and describe it in great detail. My struggle
>> is to "envision the whole", make it sensible, and then construct
>> it in a provable way.
>> I've been at this for about 7 years now. I took 10 classes at
>> CMU, read several hundred papers (see the Axiom bibliography
>> volume,
>> and spent a few years "coming up to speed on the
>> proof theory" side of computational mathematics. Now I
>> finally have (poor) command of computer algebra, proof
>> theory, and provable programming. Having all the parts,
>> the question is "Can I construct the elephant?". I don't know.
>> But I am trying.
>> Tim
>> On 6/26/19, Martin Baker <address@hidden> wrote:
>>> Tim,
>>> When I read this I am picking up a slightly different flavor than what I
>>> got from your first message in the thread.
>>> Your first message seemed to me to be about "merging computer algebra and
>>> proof technology" but this message seems to be about running the two in
>>> parallel.
>>> I really like the idea of merging the two because Axiom has a steep
>>> learning curve and proof assistants have a steep learning curve. If they
>>> were merged the learning curve would still be steep but at least there
>>> would only be one.
>>> This seems to be more about:
>>> * specifying axioms (and other identities?) in the category.
>>> * decorating the domain with some denotational semantics stuff.
>>> then checking it, at compile time, by running a separate proof assistant
>>> program.
>>> Is this what you are suggesting?
>>> Martin
>>> On 26/06/2019 01:16, Tim Daly wrote:
>>>> Martin,
>>>> My current "line of attack" on this research is to try to prove the
>>>> GCD algorithm in NonNegativeInteger.
>>>> While this seems trivial in proof systems, it is expensive to
>>>> compute from the inductive definition. While it seems
>>>> trivial in computer algebra, the implementation code lacks
>>>> a proof.
>>>> There are several steps I'm taking. I'm creating a new compiler
>>>> that handles Spad code with several new additions. The new
>>>> language (and the effort in general) is called "Sane".
>>>> One step is to make sure the new compiler generates code
>>>> that runs in Axiom. This is challenging as there is almost no
>>>> documentation about Axiom internals so it all has to be
>>>> reverse-engineered.
>>>> Next is the addition of "axioms" to the categories, such as
>>>> adding axioms for equality to BasicType, where equality is
>>>> defined to include reflexive and transitive properties.
>>>> (Equality, by the way, is probably THE most controversial
>>>> topic in logic, cf. the univalence axiom in HoTT). These
>>>> axioms decorate the category signatures and are inherited.
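As a sketch of what such equality axioms look like on the proof side,
the reflexive and transitive laws are one-liners in Lean (illustrative
only, not Sane syntax):

```lean
-- Reflexivity: every element is equal to itself.
example (a : Nat) : a = a := Eq.refl a

-- Transitivity: chains of equalities compose.
example (a b c : Nat) (h1 : a = b) (h2 : b = c) : a = c :=
  Eq.trans h1 h2
```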
>>>> Next is the addition of axioms to domains, also decorating
>>>> the signatures with axioms.
>>>> Next is the logical specification of properties of the data
>>>> type implementation of the domain, called the REP in
>>>> Axiom and the "carrier" in logic. For example, think of a
>>>> binary tree REP and what properties you can guarantee.
>>>> Next is adding a specification for the operations that
>>>> implement the signatures. These are specific to each
>>>> function that a domain implements.
>>>> Next is decorating code with pre- and post-conditions
>>>> as well as loop invariants and storage invariants.
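On the running GCD example, the decoration style might look like this
in plain lisp, with ASSERT standing in for the formal conditions (a
sketch, not Sane syntax; GCD* avoids shadowing the built-in GCD):

```lisp
(defun gcd* (a b)
  ;; preconditions: both inputs are natural numbers
  (assert (and (integerp a) (>= a 0)))
  (assert (and (integerp b) (>= b 0)))
  (let ((a0 a) (b0 b))
    (loop until (zerop b) do
      ;; loop invariant: gcd(a,b) = gcd(a0,b0)
      (psetf a b
             b (mod a b)))
    ;; postcondition: the result divides both original inputs
    ;; (MAX guards the degenerate gcd(0,0) = 0 case)
    (assert (and (zerop (mod a0 (max a 1)))
                 (zerop (mod b0 (max a 1)))))
    a))
```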
>>>> Next the collection of all of the above is cast into a form
>>>> used by a proof system (currently Lean) that implements
>>>> dependent types (Calculus of Inductive Constructions).
>>>> Next a proof is constructed. The resulting proof is attached
>>>> to the Axiom code. Proof checking is wildly cheaper than
>>>> proof construction and the new Sane compiler would
>>>> perform proof checking at compile time for each function.
>>>> So if there is a breaking change somewhere in the tower
>>>> the proof would fail.
>>>> Challenges along the way, besides reverse-engineering
>>>> the Spad compiler, include adding languages for stating
>>>> axioms, for stating REP properties, for stating function
>>>> specifications, for stating program properties, and for
>>>> stating proof certificates. The pieces all exist somewhere
>>>> but they are not necessarily compatible, nor well defined.
>>>> Is this all possible to do? Well, of course, as this is all
>>>> "just mathematics". Do *I* know how to do this? Well,
>>>> of course not, which is what makes this a research effort.
>>>> Ultimately I'm trying to build an instance of merging proof
>>>> and computer algebra at a very deep, proven level. Think
>>>> of it as a PhD thesis project without the degree incentive :-)
>>>> Tim
>>>> On 6/25/19, Martin Baker <address@hidden> wrote:
>>>>> On 6/25/19, William Sit <address@hidden> wrote:
>>>>> > The expression x = x + 0, whether treated as a type or an equation,
>>>>> > can only make sense when x, =, + and 0 are clearly typed and defined.
>>>>> > It makes little sense to me that this, as an equation, can be "proved"
>>>>> > to be valid universally (that is, without the definition of, say, +).
>>>>> If x is a natural number defined like this in Coq:
>>>>> Inductive nat : Set := O : nat | S : nat -> nat.
>>>>> then x = x + 0 is not an axiom but is derivable.
>>>>> Of course this all depends on the structures and definitions, I didn't
>>>>> mean to imply that it is valid universally.
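For the record, the same derivation is immediate in Lean as well,
since + on Nat recurses on its second argument, making x + 0 reduce to
x definitionally (a sketch in Lean rather than Coq):

```lean
-- x + 0 unfolds to x by the defining equation of Nat.add,
-- so reflexivity closes the goal.
example (x : Nat) : x = x + 0 := rfl
```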
>>>>> On 25/06/2019 19:28, Tim Daly wrote:
>>>>>> The "elegant way" that Martin is questioning is the problem
>>>>>> of combining a certain kind of logic operation (refl aka
>>>>>> "reflection" where both sides are equal) with the notion of
>>>>>> a mathematical unit.
>>>>> I think that refl (short for "reflexivity" of the = relation) is the
>>>>> usual abbreviation for the only constructor of an equality type in
>>>>> Martin-Löf type theory.
>>>>> I get the impression that this theory is very powerful in proof
>>>>> assistants and I am questioning whether you are proposing to build
>>>>> this into Axiom and, if so, how?
>>>>> Martin
