Re: The Hurd: what is it?


From: Marcus Brinkmann
Subject: Re: The Hurd: what is it?
Date: Thu, 10 Nov 2005 00:01:43 +0100
User-agent: Wanderlust/2.14.0 (Africa) SEMI/1.14.6 (Maruoka) FLIM/1.14.7 (Sanjō) APEL/10.6 Emacs/21.4 (i386-pc-linux-gnu) MULE/5.0 (SAKAKI)

At Wed, 09 Nov 2005 13:45:01 -0800,
Thomas Bushnell BSG wrote:
> Marcus Brinkmann <marcus.brinkmann@ruhr-uni-bochum.de> writes:
> > This is why in FUSE, users don't see the user filesystems of other
> > users.  I am afraid that given the seriousness of the problem, this is
> > the only sane option.  Only with a broader semantic framework can you
> > re-enable sharing on a case by case basis.
> 
> This is of course the exact wrong answer, for just the same reason
> that you didn't like the Hurd's absolute separation as the way to get
> chroot jails.  The point is to have safe sharing, not isolation.

Yes, I agree :)

> The problem is that people are used to filesystem APIs being of a
> certain kind of trustability.  

This is true, thank you for the observation.  I think it hits the nail
on the head.  In fact, what we have here is a "liability inversion".
Before, the kernel was liable to provide safe and stable filesystems.
Now that liability rests with user-space filesystems, which may come
from an untrusted source.

There are, to my knowledge, only three ways to share safely:

Either I trust the source of the service.  Somebody who I love dearly
gives me a capability for a love-letter.  I will simply use it, and
not worry further.

Or I can verify that the implementation of the service is known to be
good.  For example, somebody gives me a capability.  If I can _verify_
that this is a capability to the root filesystem, I am ok, because I
trust my root filesystem.

The third case is the hardest.  If I can't trust the source, and can't
trust the implementation, the only thing I can do is to be careful,
and try to contain the damage that can follow from using the service.
I have to be extremely careful in using the results of the operations.
They may not be correct, and they may be unreliable over time (i.e., I
might get the right answer 100 times, and then the wrong answer).

This case is the hardest, because it does require special application
support in using the capability.  There are no general rules for coping
with potentially bad results.  But we _do_ know how to contain the
damage that such a service can do to us in every other way.
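
To make that concrete, here is a rough sketch in C of the kind of
defensive wrapper I have in mind.  All the names (cap_t,
untrusted_read, the timeout) are made up for illustration; the point
is only that every answer from the untrusted service is bounds-checked
before use, and that a good answer now implies nothing about the next
one.

#include <stdbool.h>
#include <stddef.h>

typedef struct cap cap_t;       /* opaque capability handle (made up) */

/* Hypothetical RPC to the untrusted service: it may stall, return
   garbage, or answer differently on every call.  The timeout bounds
   how long the server can block us. */
extern int untrusted_read (cap_t *cap, char *buf, size_t len,
                           int timeout_ms);

/* Call the service, but treat every result as hostile: check the
   claimed length before touching the buffer, and force termination
   so a lying server can not trick us into reading out of bounds
   later on. */
bool
read_checked (cap_t *cap, char *buf, size_t len)
{
  if (len == 0)
    return false;

  int got = untrusted_read (cap, buf, len - 1, 1000 /* ms */);
  if (got < 0 || (size_t) got > len - 1)
    return false;          /* bogus answer: contain the damage */

  buf[got] = '\0';
  return true;
}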

What follows now is a description of a mechanism in EROS for
extremely safe sharing.  I will put the disadvantage up front: this
mechanism is so secure that it even allows implementation of DRM
techniques, execution of code without revealing the content of the
text and data sections, and more.  However, it also allows implementing
systems in which privacy can be enforced, and with TC (TCPA) even
privacy protection against the administrator.  So much for the social
aspects.

The mechanism is from EROS and works like this: If I want to give you
access to my service, I create a "constructor".  A constructor is
itself a service, created by using the meta-constructor (a constructor
constructor service which is part of the TCB).  A constructor starts
in an unsealed initial state.  In this state, it allows a couple of
operations: You can put in binary content (text/data), and you can
insert capabilities (initial ports, if you want).  Also, you can seal
the constructor.  After it is sealed, its state can not be changed any
longer.  Then the constructor provides a new operation: Instantiation.
Common initial capabilities would include resource reserves (space
bank, scheduler).  There _can_ be additional arbitrary capabilities.
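
To illustrate the life cycle, here is a little sketch in C.  The
names and types are mine, not the real EROS interface; what matters
is the state machine: insert while unsealed, then seal, and only a
sealed constructor can instantiate.

#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

typedef struct cap cap_t;               /* opaque capability (made up) */

#define MAX_CAPS 16

struct constructor
{
  bool sealed;
  const void *image;                    /* binary content (text/data) */
  size_t image_len;
  cap_t *initial_caps[MAX_CAPS];        /* e.g. space bank, scheduler */
  size_t n_caps;
};

/* Filling in content is only allowed while unsealed. */
void
ctor_insert_cap (struct constructor *c, cap_t *cap)
{
  assert (!c->sealed && c->n_caps < MAX_CAPS);
  c->initial_caps[c->n_caps++] = cap;
}

/* Sealing is irreversible; afterwards the contents are frozen and
   the constructor gains the instantiate operation. */
void
ctor_seal (struct constructor *c)
{
  c->sealed = true;
}

/* Spawn-like: create a fresh task from the frozen image, endowed
   with exactly the frozen initial capabilities.  (This part is the
   TCB's job, so it is only declared here.) */
extern cap_t *ctor_instantiate (struct constructor *c);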

Now, I have to give you some more information: The resource reserves
are special capabilities that are implemented by the kernel.  They are
defined in a way that allows you to enforce them to be read-only,
transitively, by setting a capability "permission flag".
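
As a sketch (the names are invented), deriving such a weakened
capability might look like this; the kernel guarantees that anything
fetched through a read-only capability carries the read-only bit too,
which is what makes the attribute transitive:

#include <stdint.h>

typedef struct cap cap_t;

#define CAP_PERM_READONLY (1u << 0)     /* hypothetical flag bit */

/* Hypothetical kernel call: derive a weakened copy of CAP.  Any
   capability later obtained through the weakened copy inherits the
   read-only bit, so the restriction can not be laundered away. */
extern cap_t *cap_restrict (cap_t *cap, uint32_t flags);

cap_t *
make_readonly (cap_t *cap)
{
  return cap_restrict (cap, CAP_PERM_READONLY);
}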

The constructor can look at all the initial capabilities, and check
whether all of them are known trusted kernel-implemented objects in
read-only mode.  If this is the case, then we say that the program is
"confined", because it can not leak information to the outside world.
It simply doesn't have any capability to do so.
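
The check itself is simple.  A sketch, again with invented names for
the kernel queries:

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

typedef struct cap cap_t;

#define CAP_PERM_READONLY (1u << 0)     /* hypothetical flag bit */

/* Hypothetical kernel queries. */
extern bool     cap_is_kernel_object (cap_t *cap);
extern uint32_t cap_permissions (cap_t *cap);

/* A program is confined iff none of its initial capabilities can
   carry information out: each one must name a trusted
   kernel-implemented object, held in transitive read-only mode. */
bool
ctor_is_confined (cap_t *caps[], size_t n)
{
  for (size_t i = 0; i < n; i++)
    if (!cap_is_kernel_object (caps[i])
        || !(cap_permissions (caps[i]) & CAP_PERM_READONLY))
      return false;
  return true;
}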

I can give a constructor to a "client" task.  The client task can
check if the capability is indeed a trustworthy constructor created by
the meta-constructor by asking the meta-constructor about it (the
meta-constructor uses a kernel-operation called "branding" to identify
capabilities created by it).  If it is indeed a trustworthy
constructor, I _know_ its implementation, and thus it is safe to use
it.  I do not know the implementation of the program the constructor
contains, though.  I can now make two calls to the constructor: I can
ask it if the program contained is _confined_ in its initial
capability set.  The answer is yes or no.  I need to ask only if I
worry about information and capability leakage.

The second operation is the actual operation that instantiates a copy
of the program (i.e., it's a spawn-like operation).
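
Putting the branding check and the two calls together, the client
side might look like this (all the RPC names are made up):

#include <stdbool.h>
#include <stddef.h>

typedef struct cap cap_t;

/* Hypothetical RPCs, following the description above. */
extern bool   metactor_verify_brand (cap_t *metactor, cap_t *maybe_ctor);
extern bool   ctor_query_confined (cap_t *ctor);
extern cap_t *ctor_instantiate_rpc (cap_t *ctor, cap_t *space_bank,
                                    cap_t *scheduler);

/* Use a constructor handed to us by an untrusted party.  We trust
   the constructor's _implementation_ (the brand proves it came from
   the meta-constructor), not the program it contains. */
cap_t *
use_service (cap_t *metactor, cap_t *ctor,
             cap_t *space_bank, cap_t *scheduler)
{
  if (!metactor_verify_brand (metactor, ctor))
    return NULL;        /* not a real constructor: refuse */

  if (!ctor_query_confined (ctor))
    return NULL;        /* could leak our data: refuse */

  /* Safe to run: it can only do what we endow it to do. */
  return ctor_instantiate_rpc (ctor, space_bank, scheduler);
}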

If the program is confined, I still do not know if it behaves
correctly, or if the result of operations can be trusted.  But I do
know that it can only do what I allow it to do.  There is nothing
going on "behind my back" that can hurt me.

On the other hand, the service provider knows that I can not inspect
his service, and can not get my hands on the initial capabilities in
the program, which may contain something like private keys for
encryption.

Note that the implementation requires that I can identify capabilities
I created, even if I get a copy in some other way later on.  It also
requires that there is a known set of kernel objects that enforce a
transitive read-only attribute.  This is not much, but it is not
nothing either.  The rest is simple user-space stuff.

Note also that there is an analogy to the constructor mechanism in the
Hurd in how filesystems implement the suid functionality.
Constructors are a generalization.

> > Mach doesn't know on whose behalf the server does the allocation.
> 
> I had thought that the solution here was to have a "resource
> allocation" handle, and users would provide it to servers, who would
> then use it to get resources.
> 
> There are problems here with respect to untrusted servers, of course!

Right.  And with untrusted clients, as the clients must retain the
ability to revoke the resources they give to the server.

It turns out that writing multi-user servers is really hard to do
safely.  So, I think that more and more services will be private.  For
example, there is no real reason to have a globally shared pipe
server.  Sharing becomes more explicit, and thus easier to manage.
For example, if you want to share a pipe between two users (think
"echo hallo | su cat"), then I think it is feasible that you create a
special pipe server just for that one pipe.
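
A sketch of what I mean, with made-up calls: the pipe server is
instantiated just for this one pipe, paid for out of the creator's
own resource reserve, and each party gets one end.

typedef struct cap cap_t;

/* Hypothetical calls: spawn a fresh pipe server serving exactly one
   pipe, and fetch its two ends.  There is no globally shared pipe
   server that both users would have to trust. */
extern cap_t *spawn_pipe_server (cap_t *space_bank);
extern cap_t *pipe_server_get_end (cap_t *server, int end);

void
make_private_pipe (cap_t *space_bank, cap_t **rd, cap_t **wr)
{
  cap_t *server = spawn_pipe_server (space_bank);
  *rd = pipe_server_get_end (server, 0);   /* read end  */
  *wr = pipe_server_get_end (server, 1);   /* write end */
}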

The big exception always seems to be the filesystem, but in a
persistent system, the filesystem becomes secondary.  In a persistent
system, the main shared service is the page allocator (space bank in
EROS), and that is a system service which had better be written very
carefully :)

Thanks,
Marcus