Re: design goals vs mechanisms (was: Re: Let's do some coding :-)


From: Jonathan S. Shapiro
Subject: Re: design goals vs mechanisms (was: Re: Let's do some coding :-)
Date: Wed, 26 Oct 2005 18:49:32 -0400

On Wed, 2005-10-26 at 23:28 +0200, Marcus Brinkmann wrote:
> Persistence "solves" this by first creating a checkpoint of the system
> in its initial state manually, then arguing about its validity and
> correctness manually, based on the static disk image, and then
> "starting" the system by simply running it.
> 
> In other words: Such a system is never booted, not even the first time
> it starts up.  The initial capabilities exist at all the right places
> with the right authorities at the moment you write the disk image out.

It should, however, be noted that this only reduces the problem to a
previously unsolved problem: determining that the initial system state
is secure. Fortunately, the initial system state is small enough to be
analyzable by hand.


I would also add that sustainable security is a strong motivation for
atomic system calls. The story that a persistent system remains secure
amounts to the following induction (sketched more concretely after the
list):

  1. At step 0, the system was in an initially secure configuration.
     This must be established by inspection or verification.

  2. In all subsequent steps, the system made a transition from a
     secure state to a new secure state. This must be established
     by determining the correctness of both the kernel architecture
     and its implementation.

  3. The only states that are ever written down are consistent
     states, because the persistence mechanism is transactional.
     Therefore, the secure state induction continues across shutdown
     and restart.
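
To make the shape of that argument concrete, here is a minimal sketch of
the induction in Lean. The names `State`, `secure`, and `step` are
placeholders I am introducing purely for illustration; they are not taken
from the EROS formal model.

  -- Hypothetical abstract model: State, secure, and step are placeholders.
  variable {State : Type}

  -- Zero or more transitions from an initial state s0.
  inductive Reachable (step : State → State → Prop) (s0 : State) : State → Prop
    | refl : Reachable step s0 s0
    | tail {s s'} : Reachable step s0 s → step s s' → Reachable step s0 s'

  -- Steps 1 and 2 give security of every reachable state; step 3 says the
  -- only states ever written to disk are such states, so the conclusion
  -- carries across shutdown and restart.
  theorem secure_induction
      (secure : State → Prop) (step : State → State → Prop)
      (s0 : State)
      (h0 : secure s0)                                      -- step 1
      (hstep : ∀ s s', secure s → step s s' → secure s') :  -- step 2
      ∀ s, Reachable step s0 s → secure s := by
    intro s hr
    induction hr with
    | refl => exact h0
    | tail _ hs ih => exact hstep _ _ ih hs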

If system calls are not atomic, it becomes nearly impossible (in
practice) to determine whether part (2) is satisfied. We actually have a
fairly good formal model for the EROS kernel architecture. Determining
whether non-atomic implementations actually implement that model is
intractable for a system of this complexity with existing tools. I
expect that this will not change in the 21st century.
The difficulty of general purpose code verification is *that* hard.

For comparison, note that the POSIX standard doesn't even *attempt* to
define what it means to be a secure system state in a POSIX system.
Pragmatically, the POSIX semantics are too complicated for such a
definition to have much hope of ever being correct, and there is
exactly ZERO chance that a given implementation could be effectively
checked against it. This may be why I resist so hard the idea of POSIX
as a viable future foundation.

I don't mean to suggest that POSIX is "bad" in any absolute sense. UNIX
was an *incredible* system in its day, and it remains pretty good now.
But the impracticality of defining what security might mean in a POSIX
system is a consequence of the system's design goals and principles. The
computational world has changed, and I believe that we need something
more to sustain us in the future.

shap
