Re: Restricted storage


From: Marcus Brinkmann
Subject: Re: Restricted storage
Date: Thu, 01 Jun 2006 02:51:03 +0200
User-agent: Wanderlust/2.14.0 (Africa) SEMI/1.14.6 (Maruoka) FLIM/1.14.7 (Sanjō) APEL/10.6 Emacs/21.4 (i486-pc-linux-gnu) MULE/5.0 (SAKAKI)

At Wed, 31 May 2006 20:02:55 -0400,
Eric Northup <address@hidden> wrote:
> 
> On Wed, 2006-05-31 at 18:49, Bas Wijnen wrote:
> [...]
> > No, it's nonsense.  The program storing the encryption keys doesn't know if
> > the storage is opaque.  It doesn't care either.  It's the user who cares.  And
> > it's the user who chooses to use opaque storage (or not).  The user can trust
> > that the program runs on opaque storage, not because the programmer guarantees
> > this (by putting a check in the program), but simply by providing opaque
> > storage to the program.  (Intentional side-effect is that storage which is
> > given to some other user cannot be checked for opaqueness.  This can be
> > "fixed", but I'd rather not do that if possible.)
> [...]
> 
> Which Object(s) in the system represent the user and her choices?

The answer is not at all different from the one in EROS/Coyotos and in
systems with "trusted computing" components, with or without remote
attestation.
The difference is not which objects represent the user and their
choices[1], but where your confidence comes from that these objects
actually represent the users and their choices faithfully.  Formal
verification doesn't help in principle: you still have to place your
confidence in the modellers and provers, and in the implementation of
the hardware.

"Trusted computing" components are not components that you _can_
trust, but which you _have to_ trust in order to have confidence.
They are the components that can break your security policy.

To illuminate these issues, just consider a very simple, desirable
property that you may want to prove: "Only the kernel code can run in
ring 0."  To prove this, not only do you have to model the hardware
correctly and state and prove the relevant properties of the software
executing on it, you also have to make sure that the model is in fact
correctly implemented by the physical hardware.
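
For illustration only, here is a toy sketch of what that property looks
like once you commit to a formal model.  It is written in Lean; the
names (Region, MachineState, step) are made up for this example, and
the hardware transition relation is left abstract, because defining it
_is_ the hardware model:

  -- Toy machine model, purely illustrative: a state is just the
  -- current privilege ring and the region the program counter is in.
  inductive Region where
    | kernelText   -- addresses inside the kernel image
    | userText     -- everything else

  structure MachineState where
    ring : Nat      -- current privilege level, 0 = most privileged
    pcIn : Region   -- where the program counter currently points

  -- The hardware transition relation.  Writing it down is exactly the
  -- "model the hardware correctly" step; it stays abstract here.
  variable (step : MachineState → MachineState → Prop)

  -- The desirable property: whenever we are in ring 0, we are
  -- executing kernel code.
  def OnlyKernelInRing0 (s : MachineState) : Prop :=
    s.ring = 0 → s.pcIn = Region.kernelText

  -- What has to be proved: the property is preserved by every step of
  -- the modelled hardware, i.e. it is an invariant.
  def Preserved : Prop :=
    ∀ (s s' : MachineState),
      OnlyKernelInRing0 s → step s s' → OnlyKernelInRing0 s'

Note that nothing in this says anything about the silicon: even with a
proof of Preserved in hand, the confidence that the physical processor
actually behaves like step is exactly the part you still have to take
on trust.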

The fallacy here is that some people seem to think that the "trusted
computing" component manufacturer and the operating system
implementors are somehow magically more "trustworthy" for the user
than local administrators and other agents.  Whether or not this is the
case, however, depends on a number of circumstances, and there is no
universally correct answer.  I have elaborated on this extensively in
my mail about "Ownership and Contracts".

Thanks,
Marcus




