l4-hurd

Re: The Perils of Pluggability (was: capability authentication)


From: Bas Wijnen
Subject: Re: The Perils of Pluggability (was: capability authentication)
Date: Mon, 10 Oct 2005 11:08:39 +0200
User-agent: Mutt/1.5.11

Hello,

On Sun, Oct 09, 2005 at 01:23:29PM -0400, Jonathan S. Shapiro wrote:
> On Sun, 2005-10-09 at 10:14 +0200, ness wrote:
> > I guess one of the design goals of the Hurd is to NOT depend on the
> > implementation of a server. As far as I know, we don't want to ask "is
> > the implementation of this server trustable?" but "is the source where
> > I got this cap trustable?". We want to allow the user to replace system
> > components, e.g. to run a new task that uses a different proc server.
> > So the user says that to its shell and the shell gives the right cap to
> > the newly created task. But Marcus identified something like your
> > "identify" operation as necessary, AFAIK.

I don't really see the need for identify.  The current (AFAIK) idea we have
for the Hurd is to trust the creator of the process.  The creator will give
you your initial capabilities, and those are trustable by definition.  For
that reason, the creator of setuid processes is the filesystem, not the user
executing them.
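
To make that rule concrete, here is a toy sketch (plain C, nothing
Hurd-specific; all names are invented): the child's entire trust policy is
"whatever my creator handed me at startup is trusted".

  /* Toy model, not Hurd code: everything the creator handed over at
   * startup is trusted by definition; anything that arrives later is
   * not automatically trusted. */
  #include <stdbool.h>
  #include <stdio.h>

  struct cap {
      const char *interface;    /* e.g. "proc", "io" */
      const char *provider;     /* who implements it */
      bool        from_creator; /* part of the initial hand-off? */
  };

  /* The child's whole trust policy in one line. */
  static bool trusted(const struct cap *c)
  {
      return c->from_creator;
  }

  int main(void)
  {
      /* For a setuid program this table is filled in by the filesystem;
       * for a normal program, by the shell (or whoever created it). */
      struct cap proc = { "proc", "system-proc-server", true };
      struct cap io   = { "io",   "terminal-server",    true };
      struct cap late = { "io",   "some-random-peer",   false };

      printf("proc: %s\n", trusted(&proc) ? "trusted" : "untrusted");
      printf("io:   %s\n", trusted(&io)   ? "trusted" : "untrusted");
      printf("late: %s\n", trusted(&late) ? "trusted" : "untrusted");
      return 0;
  }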

This trust may give processes more rights than an external observer thinks
they should have.  But if that is the case, then their parent process made it
possible (and could have prevented it).  And since the parent process can do
all the malicious things itself if it wants, I don't see why there's any
security problem here.

Ok, I just remembered that there may be a need to check whether a capability
implements a certain interface at a certain server.  If the provider of the
capability claims that it does, and the server is trusted but the provider
isn't, we will usually want to verify that claim before using the
capability, so we don't end up doing things we didn't intend.
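
Something like this hypothetical check, I mean (again a plain C toy;
identify() just stands in for whatever RPC the trusted server would offer):

  /* Before using a capability that an untrusted party handed us, ask
   * the *trusted* server whether it really is one of its objects and
   * implements the interface we expect, instead of taking the
   * provider's word for it. */
  #include <stdbool.h>
  #include <stdio.h>
  #include <string.h>

  struct server { const char *name; };
  struct cap    { const struct server *issuer; const char *interface; };

  /* Hypothetical query to a server we already trust. */
  static bool identify(const struct server *trusted, const struct cap *c,
                       const char *iface)
  {
      return c->issuer == trusted && strcmp(c->interface, iface) == 0;
  }

  int main(void)
  {
      struct server fs    = { "trusted-fs" };
      struct server other = { "untrusted-peer" };

      struct cap genuine = { &fs,    "IOstream" };
      struct cap forged  = { &other, "IOstream" }; /* provider lied */

      printf("genuine: %s\n",
             identify(&fs, &genuine, "IOstream") ? "use" : "reject");
      printf("forged:  %s\n",
             identify(&fs, &forged, "IOstream") ? "use" : "reject");
      return 0;
  }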

> Yes. I had some of this discussion with Neal and Marcus at LSM. The
> problem with this design goal is that it is unachievable in a robust or
> secure system. I understand that it is not a goal for Hurd to be secure
> in the sense that Coyotos tries to be, but if you achieve your goal
> above you will manage to be *insecure* in the way that Windows is. We do
> not need another Windows in the world.

I don't see where security is lost if you trust the creator of your process.
You must in the end get your trusted things from somewhere, and I can't think
of a more appropriate place.  And since you don't have more rights than your
parent (more likely you have less), there's nothing you can do that your
parent couldn't do already.

> I would like to propose for consideration a new social objective for
> Hurd: Hurd should be a system that my mother (or your mother) can use
> without fear. When something goes wrong, or something is compromised,
> the damage should be well contained. The user's naive assumptions about
> what is safe should be mostly right. I want to propose that this social
> goal should be more important than arbitrary and unmotivated
> flexibility.

I agree with that, but I don't see why that would be impossible with our
approach.

> Suppose I hold an IOstream capability. What do I actually hold? I hold a
> capability naming some body of code that *alleges* to properly implement
> the IOstream interface. Without more information, here are some things
> that I do not know:
> 
>   1. I do not know that it *actually* does these operations. The
>      interface is an approximate statement of expected behavior,
>      but it is *very* approximate, and even if the desired
>      behavior were fully captured there is no way to check that
>      the program behind the interface actually *does* this behavior.

There is no way to check this, unless we have access to the code (and even
then it's very hard).  I think it's a Good Thing(tm) that the client cannot
access the code of a server.  In most cases, the server is trusted by the
user (not the process) and the client may not be.  So it's the client which
needs to be looked after.

>   2. I do not know that information sent on this stream will remain
>      private. The implementor of the IOstream interface could very
>      well broadcast it to the world.

In case of an external untrusted server, this is necessarily the case.  I see
no other way.  However, if we put the stuff in a library, we can discard all
our capabilities when using it (possibly forking off our own server for this
purpose).  When the process has no capabilities (except for communication
with the parent), it cannot broadcast anything.  It can only stop working,
which is acceptable (if it isn't, you shouldn't be using untrusted code there
at all).
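
As a rough POSIX analogy (dropping file descriptors instead of
capabilities), the confinement could look like the sketch below;
run_untrusted() is just a placeholder for the untrusted code:

  /* Run the untrusted code in a child that keeps nothing but a pipe
   * back to its parent.  In a capability system we would discard
   * capabilities rather than file descriptors. */
  #include <stdio.h>
  #include <string.h>
  #include <sys/wait.h>
  #include <unistd.h>

  static void run_untrusted(int out_fd)
  {
      /* The only thing the confined code can do is answer its parent. */
      const char *msg = "result";
      if (write(out_fd, msg, strlen(msg)) < 0)
          _exit(1);
  }

  int main(void)
  {
      int p[2];
      if (pipe(p) == -1)
          return 1;

      pid_t pid = fork();
      if (pid == 0) {
          /* Child: drop everything except the write end of the pipe,
           * the analogue of discarding all other capabilities.
           * (256 is enough for a sketch.) */
          for (int fd = 0; fd < 256; fd++)
              if (fd != p[1])
                  close(fd);
          run_untrusted(p[1]);
          _exit(0);
      }

      close(p[1]);
      char buf[32] = { 0 };
      ssize_t n = read(p[0], buf, sizeof buf - 1);
      printf("parent got: %.*s\n", (int)(n > 0 ? n : 0), buf);
      waitpid(pid, NULL, 0);
      return 0;
  }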

>   3. I do not even know that a call to IOstream_read() will be
>      returned.

As I said, this is acceptable.  It can be handled with a timeout (although
that is fragile and may fail under heavy load), or by killing the whole
process (including parent).  The latter may not be acceptable in some cases,
but in those cases I think untrusted code shouldn't be used at all.
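
For the timeout case, a plain POSIX sketch (the 500 ms value is arbitrary,
and as said this is fragile: under heavy load a slow but honest server looks
just like a dead one):

  #include <poll.h>
  #include <stdio.h>
  #include <unistd.h>

  /* Returns bytes read, 0 on timeout, -1 on error. */
  static ssize_t read_with_timeout(int fd, void *buf, size_t len,
                                   int timeout_ms)
  {
      struct pollfd pfd = { .fd = fd, .events = POLLIN };
      int ready = poll(&pfd, 1, timeout_ms);
      if (ready <= 0)
          return ready;        /* 0: timed out, -1: poll() failed */
      return read(fd, buf, len);
  }

  int main(void)
  {
      int p[2];
      if (pipe(p) == -1)       /* nothing will ever be written to p[1] */
          return 1;

      char buf[16];
      ssize_t n = read_with_timeout(p[0], buf, sizeof buf, 500);
      if (n == 0)
          printf("no answer, giving up\n");
      else
          printf("got %zd bytes\n", n);
      return 0;
  }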

> Plugability means that you can experiment. Plugability done wrong means
> that I can experiment *on you*. So can the government. So can your
> co-worker. So can other attackers. Sometimes a user will wish to
> participate in these experiments. I simply think that it should be the
> user (or sometimes the administrator) who makes this choice, and not the
> system architect.

I agree.  I do not see why our approach does it wrong in this respect.

> The current state of the art gives us only three mechanisms for dealing
> with this:
> 
>   1. Verification: perhaps we can verify some of the properties of
>      the implementation. We commonly do this in Java as a check of
>      memory safety, but doing this more broadly is well beyond what
>      we know how to do for general programs. My lab is working on this,
>      but it's not really relevant for Hurd today, so I won't talk
>      any more about it.

This sounds interesting and unachievable to me in most cases. :-)

>   2. Trust: we *declare* that we have reason to trust the implementor
>      to do the code right, and we elect to *rely on* this declaration.
> 
>      Still, there are some properties that we might not trust. For
>      example, I might decide that I will rely on your file system
>      implementation, but that I will surround it in a confinement
>      boundary so that it cannot disclose my files. Even if I think
>      that you are a good guy, there may be an error in your program,
>      and confining the program doesn't cost anything.

Right, this is what we do in the Hurd.  We are told by our parent that we can
trust certain capabilities.  So we do.  If we later get capabilities that we
don't (fully) trust, we may isolate the use of them as I described above.

>   3. Risk: we recognize that we have no reasonable basis for trust,
>      and we decide to use something anyway. The key to this is to
>      arrive at a system architecture where risk is survivable.

This is up to the user, not the process IMO.  If I want to run xmms with a
new plugin, then xmms will have to trust it, because I tell it to.  But of
course it should still isolate the plugin as much as possible.  Even so, the
plugin may fail to do what it should.  However, if xmms is well written, it
will still allow me to switch plugins.  And if it was an appearance plugin,
the output should still be generated (that is, it shouldn't stop playing).

I think this is not what you mean.  It is the user who takes the risk, not
the process.  IMO no process should ever decide that it can take a risk;
only users (and sysadmins) should.  And when a process is told it can take
the risk, the code in question becomes a (partly) trusted partner.
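
To illustrate the xmms example (a toy sketch, all names invented): the
player calls the plugin through a small wrapper and simply disables it on
failure, so playback itself never depends on it.

  #include <stdbool.h>
  #include <stdio.h>

  struct plugin {
      const char *name;
      bool (*render)(const short *samples, int n); /* false = failed */
      bool enabled;
  };

  static void play_chunk(const short *samples, int n, struct plugin *vis)
  {
      /* Audio output comes first and does not go through the plugin. */
      printf("playing %d samples\n", n);

      if (vis->enabled && !vis->render(samples, n)) {
          fprintf(stderr, "plugin %s failed, disabling it\n", vis->name);
          vis->enabled = false; /* the user can switch to another one */
      }
  }

  static bool broken_render(const short *samples, int n)
  {
      (void)samples; (void)n;
      return false;            /* stands in for a misbehaving plugin */
  }

  int main(void)
  {
      struct plugin vis = { "demo-vis", broken_render, true };
      short silence[64] = { 0 };

      for (int i = 0; i < 3; i++)
          play_chunk(silence, 64, &vis); /* playback goes on */
      return 0;
  }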

> So: trust is not "all or nothing", but it requires care.

Of course.

> The main problem with broadly trusting servers and allowing them to be
> pluggable is that most developers are not knowledgeable about security,
> robustness, or simple good coding practices.

While that is of course a problem, I don't think the system design can do
anything about it.  Good programs can make sure bad plugins can't do too
much damage.  The core of the system should consist solely of good programs.
But if programs you trust as a user are badly written, there's no way the
system can still be safe.  Maximum user trust simply means the program is
not restricted in any way (well, no more than that user's login shell
anyway).

> Even if the developer has
> good intentions, the attacker can exploit these vulnerabilities. And the
> attackers are now paid better than we are (it's off topic, but if
> anybody cares, I will explain in a separate note -- ask). ActiveX is a
> wonderful example of what happens when plugability is done
> irresponsibly.
> 
> So: plugability is good, and necessary, but there are places where it is
> a very bad idea, and the proc server is a good example of where it is
> bad.

You mean using a different proc server if we feel like it?  I don't see why
this is a problem: setuid processes will not accept your capability to the
new proc server, because you are not their parent.  Your children can do all
kinds of weird things, but nothing weirder than what you could do yourself
anyway.

Plugging in a different proc server is a useful option for the user
(together with the other system servers, it provides a jail).  If the user
wants to start a process with a new proc server, then she'll have a reason
for that.  I'm never going to let my program tell the user that what she
wants is wrong, because by definition it isn't.  (I may tell her that the
way she's asking for it is wrong, but that's a parsing problem, which has
nothing to do with it.)

What security problem could arise from this?  The user could run trusted
code with the wrong proc server?  That shouldn't happen by accident.  And
since I do want to allow the user to run code with it, I leave it to the
user to decide whether that's a good idea.  After all, running setuid
applications is going to fail (unless the filesystem is using your proc
server, which probably means it was started by the user as well).
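
A toy sketch of why this is harmless (all names invented): a child simply
uses whatever proc server its creator put into its startup table, so the
user's substitute only ever reaches the user's own children, never setuid
programs.

  #include <stdio.h>

  struct cap { const char *provider; };

  struct startup_table {
      struct cap proc;         /* whatever the creator chose */
      struct cap fs;
  };

  static void child_main(const struct startup_table *t)
  {
      /* The child does not ask "is this the real proc server?"; it
       * trusts its creator's choice by definition. */
      printf("child registers with proc server: %s\n", t->proc.provider);
  }

  int main(void)
  {
      /* The user may plug her own servers into an ordinary child,
       * e.g. to build a jail... */
      struct startup_table jailed = {
          .proc = { "user-jail-proc-server" },
          .fs   = { "user-jail-fs" },
      };
      child_main(&jailed);

      /* ...but a setuid program is created by the filesystem, which
       * installs the system servers regardless of what the user holds. */
      struct startup_table setuid_child = {
          .proc = { "system-proc-server" },
          .fs   = { "system-fs" },
      };
      child_main(&setuid_child);
      return 0;
  }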

> In general, pluggability must not be opaque: if you change a
> contract that I rely on, I need to be able to detect this.

You mean if the implementation of the interface changes?  I do not see the
difference between an interface which was defined from the start as "I'll do
A, but after 15 minutes I'll be doing B" and never changes, and one defined
as "I'll do A" whose implementation is changed after 15 minutes into "I'll
do B".  I can understand that it matters for verification, but I'm assuming
here that that's not possible.

> WHEN IS PLUGABILITY USEFUL
> 
> There are other cases where plugability is desired, but not useful. The
> responsibility of the architect in this situation is to say "no" to
> plugability.
> 
> Plugability is useful when:
> 
>   1. There is more than one effective way to do something, or the
>      most effective way depends heavily on the application (e.g.
>      regular file layout vs. stored video file layout).
> 
>      Caveat: open plugability is justified here only when the risk
>      it tolerable and/or a large number of distinct implementations
>      is required.

The risk can usually be minimised by some extra protection.  This cannot be
done if you're not plugging in libraries but sockets to servers (such as
filesystems).  However, they have your data anyway, so they could be doing
things with it.  If you (as a user) don't like that, then you shouldn't have
started that filesystem in the first place, or at least you should have
limited its rights.  For example, it makes sense that most filesystems
should not have a capability to access the network.
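
On Linux one could approximate "no network capability" by starting the
filesystem process in an empty network namespace; this is only an analogy
(and it needs the right privileges to run), but it shows the idea of
withholding an authority rather than trusting the server not to use it:

  #define _GNU_SOURCE
  #include <sched.h>
  #include <stdio.h>
  #include <unistd.h>

  int main(void)
  {
      /* Give up the network before becoming the filesystem server. */
      if (unshare(CLONE_NEWNET) == -1) {
          perror("unshare(CLONE_NEWNET)"); /* usually needs privileges */
          return 1;
      }

      /* From here on, this process (and whatever it execs) only sees an
       * empty, isolated network stack.  "ls /" stands in for the real
       * filesystem server. */
      execlp("ls", "ls", "/", (char *)NULL);
      perror("execlp");
      return 1;
  }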

>   2. The consequences of failure are manageable. For example, the
>      risk of introducing a new audio CODEC is acceptable because the
>      worst that happens is you kill the player.

Even that should not be possible if the player doesn't trust the plugin; see
my xmms example above.

>      The only thing the CODEC can do to you (if the system is properly
>      designed) is make the player stop working.

Or at least the player will not perform the function the plugin is supposed
to provide.  All other functions could still keep working in a well designed
player.

>      The user will quickly learn not to use this player.

Right.

> WHEN IS PLUGABILITY SAFE?
> 
> Plugability should always be evaluated in the context of some set of
> security and robustness objectives. Plugability is safe when these
> objectives are met with sufficient assurance. The term "sufficient" is
> necessarily dependent on user context, but here are some examples:
> 
> + Plugability is safe when we can *verify* that a program satisfies
>   our constraints [I include this only for completeness.]
> 
> + Plugability is safe when we can *externally enforce* our
>   constraints.
> 
>   Example: it may be okay that the video CODEC fails, as long as
>   it does not disclose what it was decoding.
> 
> + Plugability is safe when we can *recover* from failures at
>   acceptable cost.
> 
>   Back to the CODEC: in my opinion, killing and restarting the
>   music player would be an acceptable form of recovery. To my
>   local radio station, it probably isn't a good solution.

I'm not sure if this list is exhaustive, but I at least agree that in all
these cases it is safe.

> The last case is pragmatically important. It explains why a Linux system
> can be run safely inside a Xen domain: we can always kill the domain.

This is only true if it didn't send your private data over the network.  If it
did that, recovering includes removing it from every place where it ended up,
which is usually impossible (even if your name is RIAA).

> As system builders, we often fall into the trap of thinking that systems
> things should be plugable (e.g. file systems). The user mostly doesn't
> care about that at all.

I (as a user) do care.  Pluggable file systems is what makes translators
possible, and it is what makes the Hurd powerful for users and attractive for
developers.

> The level where *they* want plugability is in
> the area of application-visible function. Fortunately, this is exactly
> the place where risk is manageable and tolerable (and we have
> demonstrated how to do it in EROS with high performance).

I want pluggability there as well. :-)  I still believe it should be
possible to have pluggability on all desired levels without compromising
security.  If I understand you correctly, you don't think so.  I'm very
interested to hear why my reasoning would be incorrect.

Thanks,
Bas

-- 
I encourage people to send encrypted e-mail (see http://www.gnupg.org).
If you have problems reading my e-mail, use a better reader.
Please send the central message of e-mails as plain text
   in the message body, not as HTML and definitely not as MS Word.
Please do not use the MS Word format for attachments either.
For more information, see http://129.125.47.90/e-mail.html
