Re: Challenge: Find potential use cases for non-trivial confinement

From: Jonathan S. Shapiro
Subject: Re: Challenge: Find potential use cases for non-trivial confinement
Date: Mon, 01 May 2006 21:21:02 -0400

On Tue, 2006-05-02 at 02:39 +0200, Marcus Brinkmann wrote:

> (1) A program wants to monitor all communication going in and out of a
>     child program.  Thus, it needs to insert forwarding proxy
>     capabilities for all capabilities that the program has, and for
>     all capabilities that go through these channels.
>     This is what the rpctrace program does in the Hurd on Mach.
>     If the child has extra authority, I cannot capture all the
>     information flow that actually happens.

For programs that are important enough and sensitive enough to justify
the use of unconfined capabilities, this is a bug, not a feature. There
are some things in a system that you really shouldn't be able to trace
in normal operation.

In all other cases, it is not necessary to monitor communications that
do not cross a confinement boundary in practice.
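As a rough sketch of the proxying pattern described in (1) — all names here (`Capability`, `Proxy`, the log format) are invented for illustration and are not the Hurd/Mach API:

```python
# Toy model of an rpctrace-style forwarding proxy. A confined child
# receives only proxied capabilities, so every invocation -- and every
# capability that passes through a proxied channel -- is captured.

class Capability:
    """Stand-in for a kernel capability: a named, invocable object."""
    def __init__(self, name, handler):
        self.name = name
        self.handler = handler

    def invoke(self, *args):
        return self.handler(*args)

class Proxy(Capability):
    """Wraps a capability, logging every invocation, and recursively
    wraps any capability returned through it -- the 'capabilities
    that go through these channels' in the quoted text."""
    def __init__(self, target, log):
        super().__init__("proxy:" + target.name, None)
        self.target, self.log = target, log

    def invoke(self, *args):
        self.log.append((self.target.name, args))
        result = self.target.invoke(*args)
        if isinstance(result, Capability):
            result = Proxy(result, self.log)  # wrap transitively
        return result

log = []
fs = Capability("fs", lambda path: Capability("file:" + path, lambda: "data"))
child_fs = Proxy(fs, log)           # the child sees only this
f = child_fs.invoke("/etc/motd")    # logged
f.invoke()                          # logged via the recursive wrap
```

If the child holds any unproxied capability, the log is incomplete — which is exactly the escape in the quoted point.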

> (2) A program wants to run a child program in its own complete copy of
>     the operating system.  We call this a sub-hurd.
>     If the child has extra authority, it escapes the sub-hurd.

Yes, but see my comments above concerning which types of programs will
actually trigger this in practice.

Socially, designers are *very* reluctant to build unconfined
constructors in a system that permits confinement to be tested, because
most programs simply refuse to run them altogether.
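The confinement test that drives that social pressure can be sketched roughly as follows. The names (`Constructor`, `TRIVIAL`, `is_confined`) are invented; the real EROS/Coyotos test is a kernel-assisted check on the constructor's yield, not this toy recursion:

```python
# Illustrative model of a constructor confinement test: a yield is
# confined iff every capability it would hold is either trivially
# safe or comes from a constructor that itself passes the test.

# Capabilities a program may hold without leaking information out of
# the confinement boundary (hypothetical set for this sketch).
TRIVIAL = {"space_bank", "scheduler", "number_cap"}

class Constructor:
    def __init__(self, held_caps, sub_constructors=()):
        self.held_caps = set(held_caps)
        self.subs = list(sub_constructors)

    def is_confined(self):
        """True iff the yield holds only trivial capabilities and
        every nested constructor is itself confined."""
        return self.held_caps <= TRIVIAL and all(
            c.is_confined() for c in self.subs)

confined = Constructor({"space_bank", "scheduler"})
leaky = Constructor({"space_bank", "network"})
nested = Constructor({"scheduler"}, [leaky])   # leaks transitively
```

A client that checks `is_confined()` before instantiating is the "refuse to run them" behavior described above.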

> (3) A program wants to replace a selected system service with an
>     alternative implementation for the child program.  For example, a
>     program may want to run a child program with its own space bank
>     implementation to change the way resource accounting works
>     (throttling, soft limits, etc.).
>     If the child has extra authority, it may refuse to run the parent's
>     implementation.

This is a fundamental security violation. If you are not in a position
to replace the program entirely, you quite properly do not have
sufficient authority to violate its invariants.
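For concreteness, the interposition in point (3) might look like the sketch below. The `SpaceBank` interface here is hypothetical, not the EROS/Coyotos one; it only illustrates why the policy is unenforceable if the child can reach the real bank directly:

```python
# A parent interposes its own space bank so the child's allocations
# pass through the parent's accounting policy (here: a soft limit).

class SpaceBank:
    """Primitive allocator standing in for the real space bank."""
    def __init__(self, pages):
        self.pages = pages

    def allocate(self, n):
        if n > self.pages:
            raise MemoryError("space bank exhausted")
        self.pages -= n
        return n

class SoftLimitBank:
    """Parent-provided wrapper: allocations above soft_limit still
    succeed but are recorded, so the parent can throttle or warn."""
    def __init__(self, backing, soft_limit):
        self.backing = backing
        self.soft_limit = soft_limit
        self.used = 0
        self.overruns = 0

    def allocate(self, n):
        got = self.backing.allocate(n)
        self.used += got
        if self.used > self.soft_limit:
            self.overruns += 1
        return got

bank = SoftLimitBank(SpaceBank(100), soft_limit=10)
bank.allocate(8)
bank.allocate(8)   # crosses the soft limit: recorded, not refused
```

The child must receive only the wrapper; a child with extra authority can bypass it, which is the violation described above.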

> (4) Fakeroot in the Hurd on Mach used a proxy filesystem server to
>     fake root access for file manipulation.  A similar pattern can be
>     used to fake write access to read-only parts of the filesystem.
>     In a similar spirit, fakeauth used a proxy authentication server
>     to fake the root user id 0.
>     If the child has extra authority, it may escape the
>     parent-provided filesystem (if it is not confined), or refuse to
>     use it.

It seems to me that this is the third time you have used this example.
Once was enough.

> (5) Any type of debugging and reverse engineering.

Absolutely false. The correct statement is: any type of *nonconsensual*
debugging or reverse engineering. You don't get to stick probes up my
ass without my permission either. In both cases, you are fundamentally
choosing to ignore the importance of consent.

> The system hierarchy is flat,
> because through their constructor, all processes have direct access to
> at least some system-capabilities that can not be virtualized.

Please name one.

In practice, there are perhaps three, none of which can be usefully
virtualized in any case:

   The universal invalid capability (which performs no operations, and
   so doesn't *need* to be virtualized).

   Discrim, which identifies *categories* of capabilities. Please go
   look carefully at the categories -- they differentiate things that
   cannot be virtualized for other reasons (in particular, memory
   objects cannot be perfectly virtualized for several reasons).
   It is in fact possible to perform every operation supported by
   discrim through other means.

   In EROS: Number capabilities. This does not apply in Coyotos.

   I think there was one other, equally low level and uninteresting.

> Note that virtualization of trusted computing hardware is an unsolved
> problem in academia.  Some people at one of the multinationals
> probably have figured out how to do it, but they do not publish their
> results to my knowledge.

If you include covert timing issues, no, *they* haven't figured that out
either, but then, they haven't solved it on the *base* system in the
first place. Everything else is actually fairly straightforward to
virtualize, and separation in hardware is now done. Look at the work on
the Rockwell Collins AAMP-7 processor and its associated OS.

> The system
> should allow debugging by default, and the user should not
> involuntarily give up this right.  I believe it should be hard to give
> up these rights,...

I believe you mean to say: the system should establish complete
disclosure as the default, and it should be goddamn close to impossible
for any normal user to do anything about it.

I can picture the marketing slogan now:

        Hurd: Non-consensual Coed Naked *Everything*

Well, it will certainly be popular with 13-year-olds until they figure
out that they can't get any (ahem) photographic content without the DRM
stuff enabled.

