
Re: CVE-2017-14482 - Red Hat Customer Portal


From: Mario Castelán Castro
Subject: Re: CVE-2017-14482 - Red Hat Customer Portal
Date: Fri, 29 Sep 2017 15:21:59 -0500
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Thunderbird/52.3.0

I am sorry for the delayed reply. I (incorrectly) assumed that nobody
would still be interested in this conversation and thus I did not check
this thread until today.

On 26/09/17 18:31, Óscar Fuentes wrote:
> Well, as a humble software developer who is happy whenever I get things
> done "well enough", let me ask a few simple questions. You can simply
> respond with "yes" or "no", if you wish:
> 
> * Is there a proof that demonstrates that every program admits a
>   verifiable formal specification, or a method for knowing in advance
>   which programs admit such specification?

If interpreted literally, the answer is trivial: Yes, for any program
you can formulate formal specifications. For example, the specification
that the program always terminates is meaningful for any program.

It seems that what you really mean is “can *any* informal specification
be converted into a formal specification?”.

In short: No, because an informal specification may be too vague. But IF
you can unambiguously describe the allowed behaviors of a program, THEN
you can formalize it.

To be a bit more specific: IF you can write an “evaluator program” that
takes an execution trace (a description of how the program that is
supposed to meet the informal specification behaved in a concrete run)
and says either "good" or "bad" depending on whether *that* trace is
acceptable according to your specification, THEN you can formalize the
specification implicitly defined by your program in any proof assistant
(because you can, at the very least, simply describe your program
within the formal logic).

The converse is not true; you can express *more* specifications in
formal logic than the ones for which an evaluator program can be written.

When wondering whether X property can be formalized, ask yourself
whether you can write a program that evaluates this property.
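
To make this concrete, here is a minimal sketch of such an evaluator in
OCaml (the trace format and the property being checked are invented
purely for illustration): a trace is a list of observed events, and the
evaluator decides whether one concrete run was acceptable. A decision
procedure of this kind is exactly what can then be transcribed into the
logic of a proof assistant.

    (* Toy "evaluator program": the trace format and the property are
       invented for illustration only.  The informal requirement "the
       program never replies with more than it was asked for" becomes a
       function from traces to a good/bad verdict. *)
    type event = { request : int; reply : int }
    type trace = event list
    type verdict = Good | Bad

    let evaluate (t : trace) : verdict =
      if List.for_all (fun e -> e.reply <= e.request) t then Good else Bad

    let () =
      let run = [ { request = 5; reply = 3 }; { request = 2; reply = 2 } ] in
      match evaluate run with
      | Good -> print_endline "good"
      | Bad  -> print_endline "bad"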

Now think of this specification (literally *this* string): “THE PROGRAM
DOES WHAT I WANT IT TO DO”. This is too vague (both for a human and for
formal logic) and cannot be formalized.

> * Is there a proof that demonstrates that the verification will take a
>   maximum amount of computing resources? (let's say polynomial on some
>   metric of the program's code).

The question can be interpreted in at least 2 ways:

(1): If you mean the computer time to verify an *already done* proof:

It is worth noting that when computer-verified proofs are involved, a
human usually writes a *proof sketch* that can be seen as a guide for
the computer to generate the actual proof and check that it is valid.
The actual proofs (in the underlying logic) involve a lot of repetitive
work that can be automated. By writing only a proof sketch instead of
the full proof this work can be left to the computer.

In practice, proofs of the aforementioned type are fast to verify; the
time to verify is roughly proportional to the total length of all
*proofs* (not the number of theorems). It does not grow exponentially
(or something like that) because the human who writes the proof must do
the “reasoning” that requires “insight”, “creativity”, etc.

For example, in HOL4 <https://hol-theorem-prover.org/> and HOL Light
<http://www.cl.cam.ac.uk/~jrh13/hol-light/> the user rarely writes
actual proofs; instead he writes programs that *generate* the proofs.
For each step in the actual proof a function call is made to a module
called “Thm” (theorem) requesting that the step be performed. If and
only if all steps are valid, then an object of type “thm” (theorem type)
is generated. Thus, if there are “bugs” the program can fail to produce
a theorem object, but it will not produce a “false” theorem.
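
The crucial point is that the “thm” type is abstract: the only way for
user code to obtain a value of that type is by calling the primitive
inference rules. The following is a stripped-down illustration of this
LCF-style design in OCaml; it is a toy kernel for implicational
formulas that I made up for this message, not the actual HOL4 or HOL
Light interface.

    (* Illustrative LCF-style mini-kernel (not the real HOL4/HOL Light API).
       The constructor of `thm` is hidden by the signature, so client code
       can only build theorems through the inference rules below. *)
    module type KERNEL = sig
      type formula = Var of string | Imp of formula * formula
      type thm                                 (* abstract: cannot be forged *)
      val concl : thm -> formula
      val axiom_k : formula -> formula -> thm  (* |- p -> (q -> p) *)
      val modus_ponens : thm -> thm -> thm     (* |- p -> q and |- p give |- q *)
    end

    module Kernel : KERNEL = struct
      type formula = Var of string | Imp of formula * formula
      type thm = Thm of formula
      let concl (Thm f) = f
      let axiom_k p q = Thm (Imp (p, Imp (q, p)))
      let modus_ponens (Thm pq) (Thm p) =
        match pq with
        | Imp (p', q) when p' = p -> Thm q
        | _ -> failwith "modus_ponens: rule does not apply"
    end

If a proof-generating program makes an invalid step, it gets an
exception (or does not typecheck) instead of a theorem; this is the
property described above: a bug can prevent a theorem object from being
produced, but cannot produce a false one.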

On the other hand, in Metamath
<http://us.metamath.org/mpeuni/mmset.html> the user must write the whole
proof, not just a proof sketch.

(2): If you mean the time that it would take a computer to verify that a
real-life program meets a specification *without the human giving any
advice*, then this would be impractical for real-life software. That is
why a human must write a proof/proof sketch. If you are interested in
addressing this question from a *purely theoretical* view, then search
the mathematical *literature* (Wikipedia does not count) for Rice's theorem.
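
For reference, one common formulation of that theorem is the following
(writing $\varphi_e$ for the partial computable function with index $e$,
and $S$ for a non-trivial semantic property, i.e. one that holds for some
programs but not for all):

    \[
      \emptyset \subsetneq S \subsetneq \{\text{partial computable functions}\}
      \;\Longrightarrow\;
      \{\, e \mid \varphi_e \in S \,\} \text{ is undecidable.}
    \]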

> * Are there reliable methods for estimating the amount of man/hours
>   (give or take one order of magnitude) required for creating a
>   verifiable formal specification from some informal specification
>   language used in the industry?

I do not know of such a method. My guess is that a person who is
knowledgeable in *both* formal verification and the area of the program
to be verified (e.g.: avionics, kernels, compilers, etc.) can give an
educated estimate. You may want to look at existing projects that
formalize informal specifications.

> * Are there studies that, based on past practice, demonstrate that the
>   formal approach is more economic than the traditional one, either
>   along the development phase or along the complete life cycle of the
>   software?

When reliability is the top priority, formal verification is the only
option. Hence, no comparison of cost can be done.

As for programs where reliability is not a priority, I have seen some
studies comparing cost, but I do not recall the full reference. A search
in your preferred academic search engine should give some relevant results.

But like I said already, for me a program that lacks a proof of
correctness is unfinished, just like a program that lacks documentation
is unfinished.

> Categorizing software as "cheap" or "reliable" misses the point
> entirely. Either the software is adequate or not. And the adequateness
> criteria varies wildly, but two things are always present: if your
> software is more expensive than the value it provides, it is inadequate;
> if your software takes too much time to develop, it is inadequate.

One can speak of “the money the user is willing to pay for the
software”, but speaking of “the *value* of the software” is so vague
that it is meaningless.

Imagine a car-manufacturing company. If the cars use
“drive-by-wire” (i.e.: the driver only controls the input to a
computer, not the actual mechanical controls of the car; those are
controlled by the computer), then what is the *value* of proving correct
the software of the computerized control system? An owner of the car
would probably say that the company must pay whatever it costs, because
his life is at risk. The stockholders will disagree.

> Many years ago I fell in love with mathematical logic and devoted many
> many (that's two manys) hours to its study. In the end I decided that
> was impractical and sterile and forgot almost everything.

Of course. Anybody studying mathematics (by which I mean deductive
sciences in general) without an interest in learning it *for its own
sake*, regardless of any practical application, is doomed to failure.

> Since then,
> automatic theorem provers got much better to the point of being usable
> for certain types of problems.

Note that proof assistants are different from automated theorem provers.

Proof assistants verify your proof and tell you what you need to prove
as you write the proof sketch (mentioned above). With automated theorem
provers, you give them a conjecture and axioms and they either provide a
proof, terminate without proof, or fail to terminate. In practice there
is limited (so far) application of automated theorem proving within proof
assistants, but the situation is improving. For example, HOL4 has several
automated provers built in; several are special-purpose, for example
arithmetic decision procedures. MESON and METIS are general-purpose
provers, but they can only solve simple goals. The tool holyhammer, which
is part of HOL4, can interface with external provers.
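
To make the contrast concrete, here is a small sketch in Lean (chosen
only because it is compact; the same distinction exists in HOL4): in the
first example the human supplies every step and the assistant merely
checks them; in the second, a built-in decision procedure finds the
proof by itself.

    -- Proof-assistant style: the human writes the steps, the machine checks them.
    example (p q : Prop) (hp : p) (hq : q) : p ∧ q := by
      constructor    -- split the conjunction into two goals
      · exact hp     -- discharge the first goal from hypothesis hp
      · exact hq     -- discharge the second goal from hypothesis hq

    -- Automation style: a decision procedure produces the proof on its own,
    -- roughly analogous to the arithmetic procedures mentioned above.
    example : 2 ^ 10 = 1024 := by decide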

> Right now, I'm looking forward to
> implementing dependent types on my programming language. I have Coq and
> Idris […]

Note that those are not automated theorem provers. Coq and Idris are
proof verifiers/proof assistants and programming languages (not all
proof verifiers are programming languages; these two happen to be both).
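
As a tiny illustration of what dependent types provide (a sketch in
Lean, which is close in spirit to Coq and Idris; the definitions are my
own, not taken from any library): the length of a vector is part of its
type, so applying `head` to an empty vector is rejected by the type
checker instead of failing at run time.

    -- A length-indexed vector: the length n is part of the type.
    inductive Vec (α : Type) : Nat → Type where
      | nil  : Vec α 0
      | cons : {n : Nat} → α → Vec α n → Vec α (n + 1)

    -- `head` accepts only vectors whose type says they are non-empty,
    -- so no run-time emptiness check (and no possible failure) remains.
    def Vec.head {α : Type} {n : Nat} : Vec α (n + 1) → α
      | .cons x _ => x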

> With age, I turned skeptic about new things. If formal methods is the
> silver bullet, the proponents are those who must provide evidence […]

I mentioned to you some verified software. The problem is not that the
people advancing formal methods are failing, but that typical
programmers are way too stupid, unmotivated, lazy and overall mediocre
(just like the population in general).

Such is the mediocrity that typical C programs are full of undefined
behavior (e.g.: type punning and invalid pointer arithmetic). Do
you think that the same programmers who write this code (that obviously
do not even bother studying the details of their programming language)
will bother learning serious mathematics? Obviously not.

Writing provably correct software is much harder than writing buggy
software. Nearly all programmers write buggy software and count bugs as
a “normal thing” just like rain.

Programming is easy; doing it right is hard, very hard (and note that
testing is not “doing it right”).

-- 
Do not eat animals; respect them as you respect people.
https://duckduckgo.com/?q=how+to+(become+OR+eat)+vegan
