l4-hurd

Re: confinement with endogenous verification ??


From: Jonathan S. Shapiro
Subject: Re: confinement with endogenous verification ??
Date: Wed, 26 Oct 2005 20:27:51 -0400

On Thu, 2005-10-27 at 00:35 +0100, Brian Brunswick wrote:
> I've been lurking for this whole discussion, and would like to thank
> all the participants for some interesting reading.
> 
> 
> One question I would like to ask is about the motivation for this
> so-called "endogenous" verification, in EROS and others, and
> potentially future-hurd.
> 
> I understand this to mean that it's possible within the system to
> calculate (using a transitive completion of the capability relation)
> that something cannot reach a particular other capability. This
> involves categorising the capabilities into "secure" and "insecure"
> ones in some way, and only following the "insecure" ones.

Close, but not quite. It is possible to compute whether a newly
instantiated subsystem will be confined at the time of instantiation,
and it is possible for the requester (the program requesting that the
subsystem be instantiated) to learn whether this will be so.
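To make the shape of that computation concrete, here is a minimal
sketch in C. The types and the classification of "trivially safe"
capabilities are inventions for illustration (this is not the EROS
source), but the structure is the essential one: a constructor's yield
is judged confined only if every capability it will hold is either
trivially safe or names another constructor whose own yield is
confined, checked transitively.

    #include <stdbool.h>
    #include <stddef.h>

    typedef struct Constructor Constructor;

    typedef enum { CAP_SAFE, CAP_CONSTRUCTOR, CAP_OTHER } CapKind;

    typedef struct {
        CapKind kind;
        Constructor *target;    /* set only for CAP_CONSTRUCTOR */
    } Capability;

    struct Constructor {
        Capability *caps;       /* capabilities its yield will hold */
        size_t ncaps;
        bool visiting;          /* cycle guard during the traversal */
    };

    /* Decide, at instantiation time, whether the yield is confined. */
    static bool is_confined(Constructor *c)
    {
        if (c->visiting)
            return true;        /* a cycle adds no new authority */
        c->visiting = true;
        bool ok = true;
        for (size_t i = 0; ok && i < c->ncaps; i++) {
            switch (c->caps[i].kind) {
            case CAP_SAFE:        /* e.g. weak/read-only, space bank */
                break;
            case CAP_CONSTRUCTOR: /* recurse: its yield must be confined */
                ok = is_confined(c->caps[i].target);
                break;
            default:              /* anything else could leak authority */
                ok = false;
                break;
            }
        }
        c->visiting = false;
        return ok;
    }

The point worth noticing is that the whole computation happens once, at
instantiation time; nothing needs to be monitored while the subsystem
runs.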

Separately, there is an authentication mechanism. In practice, this is
used to authenticate storage sources.

> So my question is: how much is this used in EROS etc, how useful is
> it, and how reliable - does it actually prove what it purports to, or
> are there problems with the classification?

Let me answer these out of order.

1. How much is this used in EROS?

The space bank authentication is used EVERY TIME we create a process. It
is utterly impossible to build a program that makes any guarantees about
behavior at all if it does not know that it has exclusive access to its
own storage. In the discussion, I described the protection of encryption
keys as an example, but the issue is more general: if you can touch my
data in ways that are beyond my control, what possible guarantee of
execution behavior can I make?
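As a sketch of what that discipline looks like at process-creation time
(the names below are hypothetical stand-ins, not the real EROS
interfaces): the requester authenticates the storage source first, and
refuses to instantiate anything on storage it cannot verify as
exclusively held.

    #include <stdbool.h>

    typedef unsigned long cap_t;          /* assumed capability handle */
    #define CAP_NULL ((cap_t)0)

    /* Assumed primitives, standing in for the real interfaces: whether
     * 'bank' genuinely descends from the official space bank with no
     * one else holding a path to its storage, and a request to 'ctor'
     * to build its yield on 'bank'. */
    extern bool bank_is_genuine(cap_t verifier, cap_t bank);
    extern cap_t constructor_request(cap_t ctor, cap_t bank);

    cap_t create_process(cap_t verifier, cap_t bank, cap_t ctor)
    {
        /* If anyone else can touch this storage, no behavioral
         * guarantee the new program makes can hold. Refuse. */
        if (!bank_is_genuine(verifier, bank))
            return CAP_NULL;
        return constructor_request(ctor, bank);
    }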

The confinement check is not used as much. The shell does this check all
the time, but most other programs do not bother except in special
circumstances, such as execution of untrusted code. This is true partly
because the test is transitive, so the check performed by the shell is
sufficient to cover an entire program.
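The transitivity is what makes a single check at the top affordable.
Something like the following (again with invented names, not the real
interface) is all a shell needs to do before running untrusted code:

    #include <stdbool.h>

    typedef unsigned long cap_t;          /* assumed capability handle */
    #define CAP_NULL ((cap_t)0)

    extern bool constructor_is_confined(cap_t ctor);       /* assumed */
    extern cap_t constructor_request(cap_t ctor, cap_t bank);

    cap_t run_untrusted(cap_t ctor, cap_t bank)
    {
        if (!constructor_is_confined(ctor))
            return CAP_NULL;  /* it could leak; refuse to run it */
        /* One query covered ctor and, transitively, everything it
         * will itself instantiate. */
        return constructor_request(ctor, bank);
    }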

While the *check* is not used widely, nearly all programs in EROS are in
fact confined. I say this because it is important to your question about
how useful it is.

2. How useful is it?

This is hard to measure quantitatively.

As an architect, I would say that the most important value of
confinement is that it imposes a discipline on system design. Once
confinement becomes a design consideration, programmers begin to ask how
to design their programs in ways that permit them to be confined. This
has several consequences:

  Fault isolation is naturally achieved, which improves testability
    and recoverability.
  Application factoring becomes more natural. This supports better
    design practices, which tend to make applications more robust.
  Both users and programmers get to stop worrying, because security
    becomes the default.

So I would say that the most important impact is on system structure.
On the other hand, I would also say that, in practice, real developers
cannot achieve this kind of system structure unless the conditions are
checked. This appears to be supported by the experiences of other
groups, most notably the AS/400 history, the Sigma-7 history, and the
5-ESS switching system history.

In terms of less abstract benefits, we know that it is almost
impossible to create a back door or to store a virus for later use in
such a system, and this makes a compromise extremely difficult to
spread.

What would it be worth to you to be able to run an email agent that ran
code provided by some arbitrary, hostile third party, and **never have
to worry** about what it did to your system?

I can tell you that for my father it would mean a great deal, and he is
far more typical of real users than any of us are.

3. Does it actually achieve what it purports to do?

Yes, it does. We have a formal proof that the algorithm is correct, and
a fairly serious model-checking effort on the real code confirming that
the implementation satisfies the formal model.
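For concreteness, the property in question can be stated roughly as
follows (my paraphrase, not the formulation used in the proof): writing
reach(y) for the transitive closure of capabilities reachable from a
newly instantiated yield y, auth for the capabilities the requester
explicitly authorized, and safe for the trivially harmless ones,

    confined(y)  <=>  reach(y) ⊆ safe ∪ auth

That is, nothing the subsystem can ever come to hold conveys authority
beyond what was granted at instantiation.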

Is it possible that there are bugs? Certainly. But this is *much* better
than the situation in many other systems. In POSIX, it is *not* possible
that there are bugs, because there is no comprehensive definition of
correct behavior. POSIX does not have bugs. Only irritations.

Without a "gold standard", POSIX cannot ever be correct. EROS can.

> We might think of it as akin to the so-called "introspection" that is
> possible in the run-time systems of certain languages (Java), and used
> to support dynamic loading and configuration of system components.

Perhaps, except that the constructor check is performed statically, and
introspection is generally very problematic if security is a design
goal.


shap




