Subject: Re: [Axiom-developer] Proving Axiom Correct
Date: Fri, 6 Apr 2018 05:23:08 +0000
Thanks again for taking the time to explain your efforts and to further my understanding on the issue of proving "down to the metal". By following all the leads you gave, I had a quick course.
Unfortunately, despite the tremendous efforts in the computing industry to assure us of correctness ("proven" by formalism), at least from what you wrote (which I understand was not meant to be comprehensive), those efforts are not only piecewise, they also concentrate on quite limited aspects.
My comments are in regular font; italicized paragraphs are quoted passages; isolated italics and highlights are mine. Itemized headings are from your email.
1. BLAS/LAPACK: they use a lot of coding tricks to avoid overflow/underflow/significance loss/etc.
Coding tricks are adverse to proofs by formal logic, or at the least such code makes it practically impossible to assure correctness. Most of the time, these tricks are patches for bugs revealed after implementation (discovered more often through real-life usage than through program proving).
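To make the kind of trick concrete, here is a sketch in the spirit of the scaling used inside BLAS norm routines such as DNRM2 (this is my own illustrative version, not the actual library code): computing sqrt(x*x + y*y) naively overflows for large inputs, so the code rescales by the larger magnitude first. The rescaling is numerically essential but obscures the simple mathematical specification, which is exactly what makes such code hard to prove.

```python
import math

def hypot_scaled(x, y):
    """Compute sqrt(x*x + y*y) without intermediate overflow or
    underflow, by factoring out the larger magnitude. Illustrative
    sketch of a BLAS-style scaling trick, not production code."""
    x, y = abs(x), abs(y)
    if x < y:
        x, y = y, x          # ensure x >= y
    if x == 0.0:
        return 0.0
    r = y / x                # r <= 1, so r*r cannot overflow
    return x * math.sqrt(1.0 + r * r)
```

With x = 3e200 and y = 4e200 the naive formula produces infinity (x*x already overflows), while the scaled version returns approximately 5e200; proving the scaled version correct, however, requires reasoning about floating-point rounding, not just real arithmetic.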
2. Field Programmable Gate Array (FPGA)
These are great at the gate level and, of course, theoretically they are the basic blocks for building Turing machines (in practice, finite state machines, or FSMs). Mealy and Moore state machines are just two ways to look at FSMs; I read www-inst.eecs.berkeley.edu/~cs150/fa05/Lectures/07-SeqLogicIIIx2.pdf and there are nice examples illustrating the steps to construct FSMs (a bit of a nostalgic trip revisiting the Karnaugh maps I learned in the 1970s). I assume these constructions can all be automated and proven correct once the set of specifications for the FSM's task is given, but the final correctness still depends on a proven set of specifications! As far as I can discern, specifications are written manually, since they are task dependent.
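As a concrete sketch of the Moore style (state names and the detector specification here are my own, not taken from the Berkeley notes): the output depends only on the current state, and for a machine this small, correctness against its specification really can be checked by exhaustive simulation.

```python
# A minimal Moore machine as a transition table. It detects the input
# sequence 1,0,1 (with overlap): the output is 1 exactly when the last
# three inputs were 1,0,1. State names S0..S3 are arbitrary labels.
TRANSITIONS = {
    ("S0", 0): "S0", ("S0", 1): "S1",
    ("S1", 0): "S2", ("S1", 1): "S1",
    ("S2", 0): "S0", ("S2", 1): "S3",
    ("S3", 0): "S2", ("S3", 1): "S1",
}
OUTPUT = {"S0": 0, "S1": 0, "S2": 0, "S3": 1}   # Moore: output = f(state)

def run_moore(bits, state="S0"):
    """Feed a bit sequence through the machine; return the output trace."""
    out = []
    for b in bits:
        state = TRANSITIONS[(state, b)]
        out.append(OUTPUT[state])
    return out
```

Feeding it 1,0,1,0,1 yields the trace 0,0,1,0,1: the pattern is recognized at positions three and five, the second detection overlapping the first. But note that the transition table is itself a manually written specification — proving the machine "correct" only relates it to that table.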
As an example, before proving that a compiler is correctly implemented, one needs to specify the language and the compiling algorithm (which, of course, can be and has been done, as with YACC). Despite all the verification and our trust in the proof algorithms and implementations, there remains a small probability that something is still amiss in the specifications, for example an unanticipated but grammatically correct input being diagnosed as an error. We have all seen compiler error messages that do not pinpoint where the error originated.
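A toy illustration of the gap (this grammar and parser are my own invention, not YACC): the grammar is the specification, the parser is the implementation, and a proof can only relate one to the other. A valid input misdiagnosed as an error could come from either side, and the error message need not say where.

```python
# Grammar (the specification):  expr := NUM ('+' NUM)*
# The parser below is a hand-written implementation of that grammar.
import re

def parse_expr(text):
    """Accept the input if it matches the grammar; raise SyntaxError
    with a token position otherwise."""
    tokens = re.findall(r"\d+|\+|\S", text)
    pos = 0

    def expect_num():
        nonlocal pos
        if pos < len(tokens) and tokens[pos].isdigit():
            pos += 1
        else:
            raise SyntaxError(f"expected number at token {pos}")

    expect_num()
    while pos < len(tokens) and tokens[pos] == "+":
        pos += 1
        expect_num()
    if pos != len(tokens):
        raise SyntaxError(f"unexpected token at {pos}: {tokens[pos]!r}")
    return True
```

Here "1+2+3" is accepted and "1++2" is rejected at token position 2 — but if the language was *meant* to allow unary plus, the bug is in the grammar, and no proof of the parser against that grammar would catch it.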
3. ProvenVisor on ARMs. http://www.provenrun.com/products/provenvisor/nxp/
I read that, and my understanding is that these proven microkernels are concerned with security (against both external and internal threats) in multicore architectures (which are prevalent in all CPU designs nowadays) with multiple coexisting OSes. Even under such a general yet limited aspect of "proven correctness" (by formalism no less), there is no guarantee (paragraph under "Formally Proven Security"):
In order to achieve the highest level of security, ProvenVisor uses a microkernel architecture implemented using formally proven code to get as close as possible to zero-defects, to guarantee the expected security properties and to ease the path to any required certifications. This architecture and the formal proofs insure the sustainability of the maintenance process of systems based on ProvenVisor. ...
This may be legalese, but the highlighted phrases clearly show that the goal is not "proven and complete specifications" for security. Even the formally proven code does not guarantee zero defects against expected violations. It is only a "best effort" (which is still commendable). The scope is also limited:
Prove & Run’s formal software development toolchain. This means that it is mathematically proven that virtual machines (VMs) hosted by ProvenVisor will always retain their integrity (no other VM can tamper with their internal data) and confidentiality (no other VM can read their internal data). A misbehaving or malicious OS has no way to modify another OS or to spy on another OS.
A malicious program need not run in a hosted OS or VM if it gains access to the microkernel, say via external hardware (and external software) that can modify it. After all, there has to be such equipment to test whether the microkernel is working and to apply patches if need be.
And a major "professional service" offered is:
Revamping existing architectures for security with ad-hoc solutions: Secure Boot, secure Over-the-Air firmware update, firewalling, intrusion detection/protection solutions, authentication, secure storage, etc…
Does "ad-hoc solutions" mean patches?
4. The issue of Trust: If you can't trust the hardware gates to compute a valid AND/OR/NOT then all is lost.
Actually, I not only trust, but also believe in, the correctness (or proofs thereof) of gate-level logic or a microkernel, but that is a far, far cry from, say, my trust in the correctness of an implementation of the Risch algorithm or Kovacic's algorithm. The complexity of coding high-level symbolic algebraic methods, when traced down to the metal, as you say, is beyond current proof technology (and there is insufficient interest in the hardware industry to fund that level of research). Note that the industry is still satisfied with "ad-hoc solutions" (which might mean patches, and we all know how error-prone those are---so much so that people re-invent the wheel over and over in search of a better wheel).
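The gate-level trust base, at least, is exhaustively checkable: a combinational circuit over a handful of inputs can be verified against its arithmetic specification by enumerating every input. A small sketch (simulated in software, of course, not on real gates; the decomposition is the standard full-adder construction):

```python
# Build a full adder from AND/OR/XOR primitives and verify it
# exhaustively against integer addition. With only three inputs,
# enumeration of all 8 cases constitutes a complete proof of the
# circuit's behavior -- the kind of base-case trust discussed above.
AND = lambda a, b: a & b
OR  = lambda a, b: a | b
XOR = lambda a, b: a ^ b

def full_adder(a, b, cin):
    s1 = XOR(a, b)
    total = XOR(s1, cin)
    carry = OR(AND(a, b), AND(s1, cin))
    return total, carry

# Exhaustive check: specification is 2*carry + sum == a + b + cin.
for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            s, c = full_adder(a, b, cin)
            assert 2 * c + s == a + b + cin
```

The contrast with the Risch algorithm is the point: here the state space is 8 cases; there, the "state space" is the unbounded set of elementary-function expressions, and no enumeration is possible.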
Can proof technology ever catch up?
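In the meantime, short of a proof, an individual result from such an algorithm can at least be validated after the fact: differentiate the claimed antiderivative and compare it with the integrand. A minimal numerical sketch (the example integral and helper are my own, hand-picked for illustration):

```python
import math

def numeric_derivative(f, x, h=1e-6):
    """Central-difference derivative: an after-the-fact check of a
    single result, emphatically not a proof of the algorithm."""
    return (f(x + h) - f(x - h)) / (2 * h)

# Claimed result (computed by hand here):
#   integral of x*e^x dx  =  (x - 1)*e^x
integrand = lambda x: x * math.exp(x)
antiderivative = lambda x: (x - 1) * math.exp(x)

# Spot-check the claim at a few sample points.
for x in (0.0, 0.5, 1.0, 2.0):
    assert abs(numeric_derivative(antiderivative, x) - integrand(x)) < 1e-4
```

Such testing-based validation is exactly the "real-life usage" route to discovering bugs mentioned earlier — useful, but it checks one answer at a time, never the implementation itself.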
I know I can't catch up. Still ignorant and biased.
Department of Mathematics
The City College of The City University of New York
New York, NY 10031
From: Tim Daly <address@hidden>
Sent: Thursday, April 5, 2018 2:59 AM
To: William Sit
Cc: axiom-dev; Tim Daly
Subject: Re: [Axiom-developer] Proving Axiom Correct
William,

I understand the issue of proving "down to the metal".
Axiom's Volume 10.5 has my implementation of the BLAS / LAPACK
libraries in Common Lisp. Those libraries have a lot of coding tricks
to avoid overflow/underflow/significance loss/etc.
Two years ago I got Gustafson's "End of Error" book. His new floating
This is not yet published work.
The game is to implement the instructions at the hardware gate level
has a verified hypervisor running on the ARM core
So I've looked at the issue all the way down to the gate-level hardware
which is boolean logic and Turing machine level proofs.
It all eventually comes down to trust but I'm not sure what else I can do
to ensure that the proofs are trustworthy. If you can't trust the hardware
gates to compute a valid AND/OR/NOT then all is lost.
On Wed, Apr 4, 2018 at 6:03 PM, William Sit <address@hidden> wrote: