
Re: setuid vs. EROS constructor

From: Jonathan S. Shapiro
Subject: Re: setuid vs. EROS constructor
Date: Thu, 13 Oct 2005 12:22:46 -0400

On Wed, 2005-10-12 at 19:43 -0700, Jun Inoue wrote:
> On Wed, 12 Oct 2005 15:38:10 -0400
> "Jonathan S. Shapiro" <address@hidden> wrote:
> > I found Bas's note stunning, because I did not expect anyone to connect
> > the dots about setuid vs. confinement so quickly. It is a point that
> > usually requires several explanations. Indeed, setuid is not required at
> > all in a capability system. The only thing that Bas missed is that if
> > you have persistence you do not need a constructor server.
> Do you mean we can run a "constructor process" indefinitely as a
> translator?  For example if I compile a program foo, which needs my
> capability bar, and make it accessible to everyone as /pub/jun/foo...


It is very hard to answer your question, because we are making very
different assumptions. Because EROS is persistent, the options in this
space are very different. We don't really have anything that is directly
comparable to the transition from passive to active links.

Here are the two options that exist in EROS:

1. I start a process, set it up however I want. Now that I
   have done this, I take my endpoint capability, and I
   simply bind it into a directory:

        dir->insert("foo", endpoint-cap);

   If you have read access to this directory, you can now
   fetch the cap.

2. I create a constructor that knows how to instantiate the
   "foo" program. I insert the constructor capability into
   the directory:

        dir->insert("foocap", endpoint-to-foo-constructor);

We do NOT have anything that would automatically instantiate a new copy
of foo when you open foocap. The problem is that the protocol for
starting a program requires providing a source of storage and a
schedule.

If we added such a function, we certainly would NOT do it in the file
system, because the file system does not (and should not) receive my
source of storage and/or my schedule. The closest we might come is to
add an advisory "active" bit to the directory entry so that my C library
might be told to transparently instantiate the "foo" program.

In general, however, this is EXTREMELY dangerous. What we are creating
here is a convention where you set a bit (the active bit) that my
library code will obey without consulting me. In effect, this means that
you get to take over my execution. There are certainly times that this
is an appropriate thing to do, but I don't think it is something that
should EVER be done transparently!

I am not sure whether this response makes things clearer, so I would
like to pause to get your reaction.

> But this scheme doesn't seem to leave room for others to verify
> if /pub/jun/foo would leak additional capabilities given to it.  How is
> the verification done on EROS?  Is it exact or conservative; ie can it
> have false positives?

My scheme, where it is done in the library, *would* allow this control.
We could say: "Auto-instantiate foo only if the constructor certifies
that the instance will be confined." If you really want transparent
instantiation, this is about the outer limit that can be done safely.

What it tests is whether the initial program image contains any
capability (excluding those that came from the instantiating requestor)
that would permit write authority. The test handles the transitive case,
so really, it tests whether any operation on those capabilities would
allow the new program to ever *obtain* write authority.

The confinement test is precise but conservative, because it is a static
test. It is possible to write programs that hold leaky capabilities but
do not use them. The constructor test will reject these programs,
because in general we don't have the technology to check this property.

I'll be happy to describe the mechanism, but I think it is better to get
the idea across first.

> >   B. In the places where these applications require access to the
> >      user's resources, make sure that the user has to consent
> >      specifically. Our open/save-as mechanism is an example of this.
> Agreed.  The problem is how to get that consent non-intrusively in an
> extensible manner.  (I think extensibility is crucial here; see below.)
> What do I do if I'm not satisfied with the mechanism provided by the
> mediators I already have?  Do I trust it blindly?  Do I inspect code?
> It seems to me the number of components I must trust grows
> proportionately to the number of incompatible user interfaces.  Or are
> you saying that the number is managably small?

The number is manageably small. It is not driven by user interfaces.
Here is an intuition:

If you are already planning to hand a specific file to the sub-program,
simply open the file and pass the capability. There isn't any
negotiation required here, and you have already restricted access as far
down as you can practically go.

The problem cases come up when capabilities are aggregated in bundles.
For example, your personal directory contains a *lot* of capabilities.
The internet connection server can be imagined as containing an infinite
set of capabilities (one for each future connection; obviously we will
not store all of these. :-)

So in practice, a mediating agent is required when an application has
justified need for some capability that is contained in one of these
aggregates. That is: mediating agents exist to guard aggregates.

There is one other case: a mediating agent must exist to restrict
communication across user-established confinement boundaries. In the
same way that you do not want XMMS scribbling on all of your files,
there is no reason why it should be sending arbitrary cut and paste
buffers to other programs through the window system. In the EROS Window
System, cut and paste still works, but *only* when the user has actually
executed the necessary actions. In X11, programs can do cut&paste
without the user ever seeing the interaction at all.

So yes: the guard agents can be seen as sub-programs that serve the
user.

Concerning replacement:

The guards in the power box exist to protect the user from programs that
have excessive access. Assuming that a compiler is installed, there is
nothing to prevent the user from rewriting them or replacing them.

There *is* a *practical* impediment. In practice, application code
developers quickly come to know that there is a well-defined set of
guards that they need to be prepared to talk to. This has the effect of
freezing the guard interfaces for reasons of compatibility. This is true
in the same way that a user could recompile Linux and change the name of
the "open" system call, but there are probably one or two programs
that would break. :-)

> > Why do I say this about viruses?
> Because they're a perfect example of what happens if the principle of
> least authority is violated.  Yes?

Actually, I think they are the second best example. The best example is
the ex-employee of AC Delco who erased the spark plug design database on
the day he was fired.

