I've been examining formal proving in linear logic because I am tired of having my programs crash without a good way to plan the cleanup.

I have an idea in progress about building a programming language around the proof presentation of programs. E.g. you'd encode addition with: (n & n (+); n). The system would print it as a proof, with the semicolon replaced by a horizontal line.
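The printing step can be sketched as a tiny renderer. This is only an illustration under my own assumptions: a rule is written as "(premise (name); conclusion)", which is not a fixed syntax of the language.

```python
# A minimal sketch of the described printing: a rule written as
# "(premise (name); conclusion)" is rendered with the semicolon
# replaced by a horizontal line. The encoding and this renderer
# are illustrations, not the language's actual syntax.

def render_rule(rule: str) -> str:
    """Render "(premise (name); conclusion)" as a proof figure."""
    head, conclusion = rule.strip("()").split(";")
    premise, name = head.rsplit("(", 1)
    premise, name = premise.strip(), name.strip(") ")
    conclusion = conclusion.strip()
    width = max(len(premise), len(conclusion))
    bar = "-" * width + " (" + name + ")"
    return "\n".join([premise.center(width), bar, conclusion.center(width)])

print(render_rule("(n & n (+); n)"))
# n & n
# ----- (+)
#   n
```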

There seem to be a few benefits to doing it this way. It helps you focus on what you prove with the program. You get more proof irrelevance, and the programming environment can therefore help more in writing the programs.

I did a little experiment with proving and modelling the Hanoi puzzle on paper. It ended up describing movement like this:

forall a, b, c:disc. forall i, j:peg,
  (top(a, i), top(c, j), above(a, b), !(a<c))
    -o (top(b, i), top(a, j), above(a, c))

Next you'd compose this rule with itself to get more proofs, and end up with an induction scheme to move any stack anywhere. It also holds for pegs with no items, since the empty peg is not above anything else.
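The movement rule can be tried out as multiset rewriting over a set of facts. This is a sketch under my own assumptions: discs are numbered so that a < c means "a is smaller than c", and the linear-logic state is modelled as a plain Python set of tuples.

```python
# A sketch of the movement rule as multiset rewriting. The fact
# names (top, above) follow the rule in the text; the state
# representation and this function are my own illustration.

def move(state, a, b, c, i, j):
    """Apply: (top(a,i), top(c,j), above(a,b), !(a<c))
              -o (top(b,i), top(a,j), above(a,c))."""
    needed = {("top", a, i), ("top", c, j), ("above", a, b)}
    if not needed <= state or not a < c:
        return None  # the rule does not apply to this state
    return (state - needed) | {("top", b, i), ("top", a, j), ("above", a, c)}

# Pegs 0 and 1; disc 1 sits above disc 2 on peg 0, disc 3 is on peg 1.
state = {("top", 1, 0), ("above", 1, 2), ("top", 3, 1)}
state = move(state, a=1, b=2, c=3, i=0, j=1)
print(sorted(state))
# [('above', 1, 3), ('top', 1, 1), ('top', 2, 0)]
```

Composing the rule with itself then amounts to chaining such rewrites, which is the induction scheme mentioned above.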

Induction would also be used to describe the stacks of discs in the puzzle. It'd be:

exists a:disc. stack(a) =
  (exists b:disc. above(a, b), !(a<b), stack(b)) | !(forall b:disc. b<a)
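The recursion can be read off as an executable check. This is a sketch under my own reading: "a < b" means disc a is smaller than disc b, "above" maps each disc to the disc directly under it, and the second disjunct is read as "no disc is larger than a", i.e. a is the bottom disc.

```python
# A sketch of the recursive stack predicate. The representation
# (a dict for "above", a set of all discs) is my own illustration.

def stack(a, above, discs):
    """stack(a) = (exists b. above(a,b) and a<b and stack(b))
                  or no disc is larger than a (a is the bottom disc)."""
    if all(b <= a for b in discs):
        return True  # a is the largest disc, so it can sit at the bottom
    b = above.get(a)
    return b is not None and a < b and stack(b, above, discs)

discs = {1, 2, 3}
above = {1: 2, 2: 3}           # disc 1 on 2 on 3: a legal stack
print(stack(1, above, discs))  # True
```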

The type notation would be sort of like rewriting/space-saving rules. I haven't decided yet how types are represented. I think this recursion scheme is not good, but I haven't figured out a better one yet.

Some of these ideas remind me of the Axiom CAS. The types seem to pin down the meaning there as well, and operations seem to be conveniently defined on types.

I also have ideas for a logic programming environment that interleaves with this and uses the same system of logic. It could be used to write programs that only run if they succeed in satisfying the goal.
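The "only run if the goal is satisfied" idea can be sketched as transactional proof search: rules are applied to a working copy of the state, and the effects are committed only when the goal is reached. The rule format and function below are my own illustration, not the planned environment.

```python
# A sketch of "programs that only run if they satisfy the goal":
# rules are (consumed-facts, produced-facts) pairs applied to a
# copy of the state, committed only on success.

def run_if_provable(state, rules, goal):
    """Try to reach `goal` by applying rules; return the new state
    on success, or the untouched original state on failure."""
    work = set(state)
    changed = True
    while changed:
        changed = False
        for consumed, produced in rules:
            if consumed <= work:
                work = (work - consumed) | produced
                changed = True
        if goal in work:
            return work       # commit: the goal was satisfied
    return set(state)         # roll back: nothing happened

rules = [({"coin"}, {"coffee"})]
print(run_if_provable({"coin"}, rules, "coffee"))    # {'coffee'}
print(run_if_provable({"button"}, rules, "coffee"))  # {'button'}
```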

It takes a bit of effort to build a programming environment to test these ideas. I've been working on this for a while now.

-- Henri Tuhola

On Tue, 11 Jun 2019 at 15:06, Tim Daly <axiomcas@gmail.com> wrote:

You are probably not old enough to remember 'core' memory. It used to be the case that you could turn off a computer and then, when you turned it on again, the memory image was exactly remembered (in "magnetic core") and your program would continue running exactly as though nothing happened. Of course, a BIG memory machine had 8,192 bytes.

NVRAM (non-volatile RAM) is coming, in near-terabyte sizes. Unlike DRAM (dynamic RAM), it remembers everything even though it has no power. That changes everything we know about computing.

https://www.youtube.com/watch?v=kBXbump35dg

https://www.youtube.com/watch?v=X8nMsabFD2w

For example, data structures no longer need a "disk representation" since they never leave memory. On the other hand, neither does a mangled data structure or a memory leak.

Lisp is well suited for this environment. Lisp with a form of persistent data structures (Okasaki) would be even more interesting.

This has interesting implications for the new "Sane" Axiom compiler / interpreter. For example, an "input" file actually resides in interpreter memory at all times.

If nothing else, these videos are an interesting glimpse of near-future computing issues.

Tim

_______________________________________________
Axiom-developer mailing list
Axiom-developer@nongnu.org
https://lists.nongnu.org/mailman/listinfo/axiom-developer