Re: setuid vs. EROS constructor

From: Jonathan S. Shapiro
Subject: Re: setuid vs. EROS constructor
Date: Wed, 12 Oct 2005 15:38:10 -0400

I found Bas's note stunning, because I did not expect anyone to connect
the dots about setuid vs. confinement so quickly. It is a point that
usually requires several explanations. Indeed, setuid is not required at
all in a capability system. The only thing that Bas missed is that if
you have persistence you do not need a constructor server.

However: Bas has not taken the last step. If he digs a bit deeper he
will soon conclude that uids of any sort are a bad idea altogether.

One way to think about a UID is that it names a set of capabilities.
This is the total set of (object, permission) pairs that a given user
account is permitted to manipulate. Such a large set of capabilities is
intrinsically dangerous, simply because it is a lot of authority
gathered in one place. Still, there must exist *some* places where
exactly this authority is gathered. When I log in, I must have access to
my resources. For practical purposes this is the set of capabilities
named by my UID.
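This view can be made concrete with a small sketch (all names and the dictionary-based representation are illustrative, not any real system's API): a UID is just a label for one large set of (object, permission) pairs, and an access check is membership in that set.

```python
# Illustrative model: a UID names the total set of (object, permission)
# pairs the account may exercise. Everything here is hypothetical.

uid_caps = {
    "alice": {
        ("/home/alice", "read"), ("/home/alice", "write"),
        ("/etc/passwd", "read"),
    },
}

def may(uid, obj, perm):
    """Access check against the one big capability set the UID names."""
    return (obj, perm) in uid_caps.get(uid, set())
```

Note that every program running as "alice" checks against the same single set; there is no smaller unit of authority on offer.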

The problem with a UID based system is that this is the *only* set of
capabilities I can have. I cannot subdivide it effectively in order to
run programs with less than my total authority. There do exist a few
mechanisms in UNIX that allow *administrators* to subdivide authority in
very coarse ways, but there exist *none* that allow *users* (or their
agent programs) to do so. All of the attention has been given to
mandatory controls. If we want users to be able to defend themselves we
also need effective *discretionary* controls.

Another way to look at a UID is that it defines an equivalence class of
processes. All processes running under a given UID are equivalent for
purposes of permissions checks. Put differently, it is impossible to
make *distinctions* among these processes for purposes of access
checks. This is ultimately why a virus that infects your mail
agent can infect your files: it runs with all of your authority.
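The equivalence-class point can be shown in a few lines (a hypothetical sketch; the class and ACL names are made up): the check consults only the UID, so a well-behaved editor and an infected mail agent running as the same user get identical answers.

```python
# Sketch: a UID-based access check cannot distinguish processes
# running under the same UID. All names are illustrative.

class Process:
    def __init__(self, name, uid):
        self.name = name   # never consulted by the check below
        self.uid = uid

def check(process, obj, acl):
    # The decision depends only on process.uid, never on which
    # program (or which code) is actually asking.
    return process.uid in acl[obj]

acl = {"~/thesis.txt": {"alice"}}
editor = Process("editor", "alice")
virus = Process("infected-mailer", "alice")
```

Both `check(editor, ...)` and `check(virus, ...)` succeed, which is exactly the problem.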

It is hypothetically possible to design a system in which UIDs are
dynamically created. This would allow me to allocate a new UID and give
it a subset of my authority. The practical impediment to this is that I
would need to run around to every object I can access and selectively
add the new sub-UID to the access list. As a practical matter, this is
simply too cumbersome to do dynamically. It is also difficult to handle
*removal* of UIDs in any convenient way. This mechanism probably *would*
work for authority subsets that can be described in a mostly-static way.
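The cost of that mechanism is easy to see in a sketch (hypothetical names; real ACL storage would differ): granting a sub-UID means touching every object's access list, and revoking it means sweeping them all again.

```python
# Sketch of why dynamically minted sub-UIDs are cumbersome:
# both grant and revocation are walks over per-object ACLs.
# Everything here is illustrative.

acls = {f"obj{i}": {"alice"} for i in range(1000)}

def grant_subuid(acls, subuid, objects):
    for obj in objects:            # one ACL update per object...
        acls[obj].add(subuid)

def revoke_subuid(acls, subuid):
    for members in acls.values():  # ...and a full sweep to remove it
        members.discard(subuid)
```

For a static subset computed once this is tolerable; done dynamically, per program launch, it is not.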

In my opinion, this doesn't go far enough. If you think about it, the
vast majority of my programs do not need access to my home directory --
or to the file system at all. Or to my process list, or to the network.
In practice, the vast majority of programs need access to a small
standard environment (input, output, error, perhaps a window) and a
short list of capabilities for the items that they are actually supposed
to manipulate. This suggests that a better thing to do is give to each
program exactly the authority that it needs when it needs it.
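A minimal sketch of that launch discipline (illustrative only, not the EROS interface): the program receives a small standard environment plus an explicit capability list, and there is no ambient user-wide authority for it to fall back on.

```python
# Sketch: launch a program with exactly the authority it needs --
# a standard environment plus an explicit list of capabilities.
# All names are hypothetical.

def run_program(entry, stdin, stdout, stderr, caps):
    """The program can use only what is passed in here; nothing
    else of the user's is reachable."""
    env = {"stdin": stdin, "stdout": stdout, "stderr": stderr}
    return entry(env, caps)

def word_count(env, caps):
    # This program was handed one document, and can touch only it.
    return len(caps["document"].split())

result = run_program(word_count, None, None, None,
                     caps={"document": "hello capability world"})
```

Compromising `word_count` gains an attacker exactly one document, not the home directory.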

This is probably the most important area in EROS/Coyotos, and the one
to which we have not yet been able to give adequate attention. Here is
our current best model, but I am sure it is flawed.

1. We assume that there is a limited number of processes that serve as
"shells". The job of a shell is to be the user's agent in the
computational system. For practical purposes, the shell *is* the user
within the system.

A shell has complete and unrestricted access to the user's directories
and resources. It is intrinsic to the nature of a shell that a user must
trust their shell completely. 

Note that a shell is not necessarily textual. Nautilus (or whatever you
use) is a shell in the sense that I am using the term.

2. There is a set of applications. With very few exceptions, we want to
treat these applications as "presumed hostile". Most of the time they
will be fine. Sometimes they will have bugs. Rarely they will really be
out to hurt us. Where these applications are concerned we have two
goals:

  A. Restrict these applications to the narrowest set of authorities
     that will let them do their job.

  B. In the places where these applications require access to the
     user's resources, make sure that the user has to consent
     specifically. Our open/save-as mechanism is an example of this.

3. There is a set of mediators. Each user has their own set. The purpose
of a mediator is to allow users to grant specific authority to untrusted
applications, but to do so intentionally.

Let me give an example of a mediator, because this idea sounds like it
should create a horrible overrun of "is this okay?" queries, but it does
not seem to in our limited experience.

You have all seen a conventional file open dialog box. There are three
differences in the EROS version:

  1. The rendering isn't done by the application.
  2. The code runs in a separate, user-supplied process.
  3. The return value is an open file capability, not a string.

Instead of calling a library routine to put up an open dialog box, the
library routine performs an RPC to the user-supplied "open/save-as
agent". This agent runs user-selected code and acts entirely on behalf
of the user. The agent has access to the user files. The word processor
does not. The agent interacts with the user to find out what file to
open, opens it, and passes the descriptor back to the application.
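The mediator pattern above can be sketched as follows (a hypothetical model with made-up names, not the EROS RPC interface): only the agent holds the user's directory authority; the untrusted application supplies nothing but its request, and receives back an already-open file object rather than a path string.

```python
import io

# Sketch of the open/save-as agent: the agent alone sees the user's
# files, lets the user choose, and returns an open file capability.
# All names here are illustrative.

class OpenSaveAgent:
    def __init__(self, user_files):
        self.user_files = user_files        # only the agent holds these

    def open_dialog(self, choose):
        # 'choose' stands in for the user interaction; the untrusted
        # application never sees the file list itself.
        name = choose(sorted(self.user_files))
        # Return an open file object (a capability), never a path.
        return io.StringIO(self.user_files[name])

agent = OpenSaveAgent({"notes.txt": "meeting at noon"})
# The word processor receives only this handle:
handle = agent.open_dialog(choose=lambda names: names[0])
```

The application can read and write through `handle`, but has no way to name, open, or even enumerate any other file.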

Of course, this means that the editor (or whatever) may destroy the
contents of this file. This is always a risk when you run a potentially
hostile program. We have never pretended that EROS removes all risk in
the world (what a *boring* place *that* would be!).

It *does* accomplish a couple of useful controls:

  1. The editor can only screw up the files it is handed by
     the user.
  2. Eventually, the user will notice that three files are
     screwed up, and it will occur to them that the editor
     touched all three.
  3. The propagation of viruses is (for practical purposes)
     halted.
Why do I say this about viruses?

Imagine that your task is to propagate a virus in the system I have
described. The constraint is that you can only modify files that the
user has consented to let you write. Also, you can't just open a
network connection.

Give it a try. You'll do some harm, but you won't get anything *like*
the exponential propagation that current viruses enjoy. And with a
versioning file system, how much damage can you really do? How many
files do you think my grandmother will agree to write before she begins
to wonder what is going on? Reasonably structured applications simply
don't write more than one file when the user says "save". Regrettably I
no longer have any grandmothers to test this. Perhaps Ludovic will lend
me one briefly. :-) 

In the end, I'm really proposing a social attack on virus authors:
viruses that don't spread just aren't very much fun.

This pattern definitely is NOT enough to make a system perfectly safe.
The best that I know how to do in this situation is to make the system
survivable and recoverable.

By the way, notice that the protection provided by the open/save-as
agent is *only* possible because the parent process does *NOT* have
access to the internal state of its children!

