The Perils of Pluggability (was: capability authentication)


From: Jonathan S. Shapiro
Subject: The Perils of Pluggability (was: capability authentication)
Date: Sun, 09 Oct 2005 13:23:29 -0400

On Sun, 2005-10-09 at 10:14 +0200, ness wrote:
> I guess one of the design goals of the Hurd is to NOT depend on the
> implementation of a server. As far as I know, we don't want to ask "is
> the implementation of this server trustworthy?" but "is the source
> where I got this cap trustworthy?". We want to allow the user to
> replace system components, e.g. to run a new task that uses a
> different proc server. So the user says that to its shell and the
> shell gives the right cap to the newly created task. But Marcus
> identified something like your "identify" operation as necessary,
> AFAIK.

Yes. I had some of this discussion with Neal and Marcus at LSM. The
problem with this design goal is that it is unachievable in a robust or
secure system. I understand that it is not a goal for Hurd to be secure
in the sense that Coyotos tries to be, but if you achieve the goal
above, you will manage to be *insecure* in the way that Windows is. We do
not need another Windows in the world.

I would like to propose for consideration a new social objective for
Hurd: Hurd should be a system that my mother (or your mother) can use
without fear. When something goes wrong, or something is compromised,
the damage should be well contained. The user's naive assumptions about
what is safe should be mostly right. I want to propose that this social
goal should be more important than arbitrary and unmotivated
flexibility. I do not suggest that flexibility or pluggability should be
abandoned. I suggest that pluggability in the wrong places is a lot like
poking holes in your condom, and when we do this we want to have other
forms of protection in place.

This suggests an obvious choice of system icon, and I think we should
not use that one for Hurd. :-)

Suppose I hold an IOstream capability. What do I actually hold? I hold a
capability naming some body of code that *alleges* to properly implement
the IOstream interface. Without more information, here are some things
that I do not know:

  1. I do not know that it *actually* does these operations. The
     interface is an approximate statement of expected behavior,
     but it is *very* approximate, and even if the desired
     behavior were fully captured there is no way to check that
     the program behind the interface actually *does* this behavior.

  2. I do not know that information sent on this stream will remain
     private. The implementor of the IOstream interface could very
     well broadcast it to the world.

  3. I do not even know that a call to IOstream_read() will ever
     return.
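
To make these three points concrete, here is a minimal sketch in C of
everything an IOstream-style interface actually tells me. The names are
illustrative placeholders, not the Hurd's real interface definitions:

    #include <stddef.h>

    /* An opaque handle standing in for the capability I hold. */
    typedef struct iostream iostream_t;

    /* Alleged contract: read up to LEN bytes into BUF; return the
       number of bytes read, or -1 on error. */
    int iostream_read (iostream_t *stream, void *buf, size_t len);

    /* Alleged contract: write LEN bytes from BUF. */
    int iostream_write (iostream_t *stream, const void *buf, size_t len);

Nothing in these declarations rules out an implementation of
iostream_write() that also broadcasts BUF to the world (point 2), or an
iostream_read() that never returns (point 3). The declarations are the
whole of what the interface guarantees.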

Pluggability means that you can experiment. Pluggability done wrong means
that I can experiment *on you*. So can the government. So can your
co-worker. So can other attackers. Sometimes a user will wish to
participate in these experiments. I simply think that it should be the
user (or sometimes the administrator) who makes this choice, and not the
system architect.

The current state of the art gives us only three mechanisms for dealing
with this:

  1. Verification: perhaps we can verify some of the properties of
     the implementation. We commonly do this in Java as a check of
     memory safety, but doing this more broadly is well beyond what
     we know how to do for general programs. My lab is working on this,
     but it's not really relevant for Hurd today, so I won't talk
     any more about it.

  2. Trust: we *declare* that we have reason to trust the implementor
     to get the code right, and we elect to *rely on* this declaration.

     Still, there are some properties that we might not trust. For
     example, I might decide that I will rely on your file system
     implementation, but that I will surround it in a confinement
     boundary so that it cannot disclose my files. Even if I think
     that you are a good guy, there may be an error in your program,
     and confining the program doesn't cost anything. (A sketch of this
     pattern appears below.)

  3. Risk: we recognize that we have no reasonable basis for trust,
     and we decide to use something anyway. The key to this is to
     arrive at a system architecture where risk is survivable.

So: trust is not "all or nothing", but it requires care.
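
To illustrate the middle case, here is a rough sketch in C of the
confinement pattern from point 2. The names (constructor_is_confined and
friends) are hypothetical, EROS-constructor-flavored placeholders rather
than real Hurd or EROS calls; the point is only the ordering: establish
the confinement boundary *before* handing over anything you care about.

    #include <stdbool.h>

    typedef struct cap cap_t;   /* an opaque capability */

    /* Hypothetical primitives, for illustration only. */
    extern bool constructor_is_confined (cap_t *fs_constructor);
    extern cap_t *constructor_instantiate (cap_t *fs_constructor);
    extern void fs_attach (cap_t *fs, cap_t *my_files);

    /* Rely on the file system's code, but not on its discretion:
       give it my files only if the system can attest that the new
       instance has no outward channel through which to leak them. */
    int
    mount_untrusted_fs (cap_t *fs_constructor, cap_t *my_files)
    {
      if (!constructor_is_confined (fs_constructor))
        return -1;          /* refuse: it could disclose my data */

      cap_t *fs = constructor_instantiate (fs_constructor);
      fs_attach (fs, my_files);
      return 0;
    }

Even a trusted implementor benefits from this discipline: the boundary
turns a bug in their code from a disclosure into a mere failure.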

The main problem with broadly trusting servers and allowing them to be
pluggable is that most developers are not knowledgeable about security,
robustness, or simple good coding practices. Even if the developer has
good intentions, the attacker can exploit these vulnerabilities. And the
attackers are now paid better than we are (it's off topic, but if
anybody cares, I will explain in a separate note -- ask). ActiveX is a
wonderful example of what happens when pluggability is done
irresponsibly.

So: pluggability is good, and necessary, but there are places where it is
a very bad idea, and the proc server is a good example of where it is
bad. In general, pluggability must not be opaque: if you change a
contract that I rely on, I need to be able to detect this.
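
As a sketch of what "not opaque" could mean in practice, consider the
"identify" operation mentioned in the quote above. The names below are
hypothetical, not an existing Hurd interface; the point is that a newly
created task can ask something it already trusts whether a capability it
was handed is the genuine system server or a substitute:

    #include <stdbool.h>

    typedef struct cap cap_t;   /* an opaque capability */

    /* Hypothetical: a capability trusted from the start (e.g. one
       installed by my parent) can identify other capabilities. */
    extern bool system_identify (cap_t *trusted, cap_t *suspect,
                                 const char *expected_server);

    void
    task_startup (cap_t *trusted, cap_t *proc_from_shell)
    {
      if (system_identify (trusted, proc_from_shell, "proc"))
        {
          /* The genuine proc server: rely on its full contract. */
        }
      else
        {
          /* A substitute.  If the user asked for this experiment,
             proceed, but treat it under mechanism 3 above: risk to
             be survived, not a server to be relied on. */
        }
    }

Whether the task then refuses, proceeds with reduced reliance, or
proceeds happily is a policy question for the user; the architecture
only has to make the substitution detectable.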

WHEN IS PLUGGABILITY USEFUL?

There are many cases during system development where we rely on pluggable
component interfaces. These are a special case, because they occur in an
environment of zero vulnerability. We want this, and I will not discuss
this further.

There are other cases where pluggability is desired, but not useful.
The responsibility of the architect in this situation is to say "no" to
pluggability.

Pluggability is useful when:

  1. There is more than one effective way to do something, or the
     most effective way depends heavily on the application (e.g.
     regular file layout vs. stored video file layout).

     Caveat: open pluggability is justified here only when the risk
     is tolerable and/or a large number of distinct implementations
     is required.

  2. The consequences of failure are manageable. For example, the
     risk of introducing a new audio CODEC is acceptable because the
     worst that happens is you kill the player. The only thing the CODEC
     can do to you (if the system is properly designed) is make the
     player stop working. The user will quickly learn not to use this
     player.

WHEN IS PLUGGABILITY SAFE?

Pluggability should always be evaluated in the context of some set of
security and robustness objectives. Pluggability is safe when these
objectives are met with sufficient assurance. The term "sufficient" is
necessarily dependent on user context, but here are some examples:

+ Pluggability is safe when we can *verify* that a program satisfies
  our constraints. (I include this only for completeness.)

+ Pluggability is safe when we can *externally enforce* our
  constraints.

  Example: it may be okay that the video CODEC fails, as long as
  it does not disclose what it was decoding.

+ Pluggability is safe when we can *recover* from failures at
  acceptable cost.

  Back to the CODEC: in my opinion, killing and restarting the
  music player would be an acceptable form of recovery. To my
  local radio station, it probably isn't a good solution.

The last case is pragmatically important. It explains why a Linux system
can be run safely inside a Xen domain: we can always kill the domain.
This is a special case of a more general feature: hierarchical recovery
domains. We might similarly run l4-hurd inside a Xen domain, and run
L4Linux inside l4-hurd. Hierarchies seem to impose a discipline on
system structure that facilitates recovery.
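
On a conventional POSIX system, the smallest such recovery domain is
just a supervised child process. The following sketch (with a
hypothetical run_codec() standing in for the decoder) shows the shape of
the recovery story for the CODEC example above:

    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    /* Hypothetical decoder entry point, run in its own process so
       that its failure cannot take the player down with it. */
    extern void run_codec (void);

    void
    codec_supervisor (void)
    {
      for (;;)
        {
          pid_t pid = fork ();
          if (pid == 0)
            {
              run_codec ();     /* untrusted: may crash or wedge */
              _exit (0);
            }
          if (pid < 0)
            return;             /* cannot even fork; give up */

          int status;
          waitpid (pid, &status, 0);
          /* The child exited or crashed; loop and restart it.  The
             worst case is the one described above: the music stops
             until the CODEC comes back. */
        }
    }

The same shape scales up: a Xen domain, or an l4-hurd subsystem, is a
bigger box with the same kill-and-restart recovery story.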


As system builders, we often fall into the trap of thinking that
system-level things should be pluggable (e.g. file systems). The user
mostly doesn't care about that at all. The level where *they* want
pluggability is in
the area of application-visible function. Fortunately, this is exactly
the place where risk is manageable and tolerable (and we have
demonstrated how to do it in EROS with high performance).


shap




