
Security models (was: A niche for the Hurd - next step: reality check)


From: olafBuddenhagen@gmx.net
Subject: Security models (was: A niche for the Hurd - next step: reality check)
Date: Wed, 3 Dec 2008 13:57:12 +0100
User-agent: Mutt/1.5.18 (2008-05-17)

Hi,

On Wed, Nov 26, 2008 at 10:56:10PM +0100, Michal Suchanek wrote:
> 2008/11/25  <olafBuddenhagen@gmx.net>:

> > The situation is really quite simple: A system designed to support
> > use cases like DRM is unquestionably bad from a GNU viewpoint -- not
> > only because it helps DRM specifically, but because the whole
> > concept of a program hiding something from the user is fundamentally
> > against GNU philosophy.
> >
> > You will never see anything like that in the Hurd, end of
> > discussion.
> 
> Unfortunately this is not so easy to discard.

Oh yes it is :-)

This was all discussed in full detail back then. Maybe you don't
remember the discussion well, or didn't follow it closely enough, or
are in plain denial -- but it's all there.

I'm willing to assume it's the first, and thus will give some hints as a
reminder.

> The protection that is provided between different processes or
> different users can also work for DRM processes. Here a process would
> request that part of the memory which was granted to it by the user
> (user shell or another process that executed the DRM process) be no
> longer accessible by any other process - hence any process the user
> might execute.

This is a gross oversimplification.

In EROS/Coyotos, a process is indeed isolated from all other processes,
including the one that launched it. The constructor allows creating a
process with access to resources the invoking process has no access to;
and the space bank allows a process to provide the memory for another,
without gaining access to the actual contents of that memory.
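
To make the contrast concrete, here is a minimal C sketch of the
constructor/space-bank pattern. All names (spacebank_t,
constructor_create) are invented for illustration -- this is not the
actual EROS/Coyotos API, and the isolation that is mere convention here
is enforced by the kernel in the real system:

    /* Toy model of the constructor/space-bank pattern.  Hypothetical
     * names, not the real EROS/Coyotos interfaces.                   */
    #include <stdlib.h>

    typedef struct {
        char  *pages;              /* storage owned by the bank       */
        size_t size;
    } spacebank_t;

    typedef struct process {
        spacebank_t *bank;         /* memory drawn from the bank      */
        /* sealed program image, initial capabilities, ...            */
    } process_t;

    /* The constructor builds the new process from the bank's storage.
     * The invoker pays for that storage, but receives only an opaque
     * invocation handle -- no operation maps the storage for it.     */
    process_t *constructor_create(spacebank_t *bank)
    {
        process_t *child = malloc(sizeof *child);
        child->bank = bank;
        return child;
    }

    int main(void)
    {
        spacebank_t bank = { malloc(4096), 4096 };
        process_t *drm_player = constructor_create(&bank);

        /* The invoker can invoke drm_player, but in the real system
         * there is no way to read bank.pages back -- isolation even
         * from the process that launched it.                         */
        (void)drm_player;
        return 0;
    }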

This complete isolation is the basis of a certain security model -- the
one advocated by Shapiro. But that's not the only possible model.

We believe that a process never exists in its own right, but always for
the sake of the process that launched it -- which we call the parent
process. As it exists solely for the parent's sake, it lives entirely
under the parent's control. The parent process creates the child, and
has full control over all of the child's resources.

This way we get a hierarchical security model. Processes are protected
from other processes, but not from their parents. The user session is an
ancestor of all processes created by the user, and thus the user has
full control over all his processes.
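
Here is a matching toy sketch of the hierarchical model, again with
invented names: every resource a child uses is delegated by its parent,
which keeps a handle on it, so any ancestor -- ultimately the user
session -- can always reach it:

    /* Toy model of the hierarchical security model.  Hypothetical
     * names, not the actual Hurd interfaces.                         */
    #include <stdio.h>
    #include <stdlib.h>

    typedef struct proc {
        struct proc *parent;  /* NULL only for the user session itself */
        char        *memory;  /* delegated by the parent, which keeps  */
        size_t       size;    /* a reference and can inspect it freely */
    } proc_t;

    proc_t *spawn(proc_t *parent, size_t size)
    {
        proc_t *child = malloc(sizeof *child);
        child->parent = parent;
        child->memory = calloc(1, size); /* drawn from parent resources */
        child->size   = size;
        return child;   /* the parent keeps this handle: full control  */
    }

    /* Any ancestor can reach a descendant's state; the child has no
     * operation to revoke or hide from it.                            */
    int ancestor_controls(const proc_t *ancestor, const proc_t *p)
    {
        for (; p != NULL; p = p->parent)
            if (p == ancestor)
                return 1;
        return 0;
    }

    int main(void)
    {
        proc_t session = { NULL, NULL, 0 };     /* the user session     */
        proc_t *app    = spawn(&session, 4096); /* launched by the user */
        proc_t *helper = spawn(app, 4096);      /* launched by the app  */

        printf("user controls helper: %d\n",
               ancestor_controls(&session, helper));   /* prints 1 */
        return 0;
    }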

When a process needs the service of another process which deals with
resources it has no access to itself -- say a powerbox -- it doesn't
launch that process itself. Instead, it invokes the service from a
process launched by another party. This way it has no access to the
resources of that other process -- but the user who launched that other
process does have control over it.
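
Sketched in the same toy style (hypothetical names throughout), the
difference is between spawning the service yourself and merely looking
up a handle to one that somebody else spawned:

    /* Toy sketch: a client uses a service it did NOT launch.  The
     * service lives in another party's hierarchy, so the client
     * cannot inspect it -- but that party retains control over it.   */
    #include <stdio.h>
    #include <string.h>

    typedef struct {
        const char *name;
        const char *owner;    /* who launched it, hence who controls it */
        int (*invoke)(const char *request);
    } service_t;

    static int powerbox_invoke(const char *request)
    {
        /* Runs on the user's resources; the client sees only replies. */
        printf("powerbox handling: %s\n", request);
        return 0;
    }

    /* A registry of services started by other parties.               */
    static service_t registry[] = {
        { "powerbox", "user-session", powerbox_invoke },
    };

    static service_t *lookup_service(const char *name)
    {
        for (size_t i = 0; i < sizeof registry / sizeof *registry; i++)
            if (strcmp(registry[i].name, name) == 0)
                return &registry[i];
        return NULL;
    }

    int main(void)
    {
        /* The client holds an invocation handle only: it never
         * spawned the powerbox, so it is not the powerbox's parent
         * and has no capability to its internals.                    */
        service_t *pbox = lookup_service("powerbox");
        return pbox ? pbox->invoke("open file dialog") : 1;
    }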

In this model, the only way a user can interact with a process he
doesn't have complete control over is by invoking a service provided by
another user: created by the other user, and running on that other
user's resources. There is no mechanism that allows a process launched
by the user to hide something from him; instead, hiding something from
the user requires explicit action by a third party. To make a DRM
service available on a particular machine, the administrator would have
to set up a dedicated user for running this service.

In other words, treachery against the user is only possible with the
cooperation of a third party (the admin) -- but this is unlikely, as the
interests of the admin are usually closer to the interests of the user
than to the interests of Disney... Effective DRM is not technically
impossible, but the respective positions of the stakeholders make it
rather impractical.

We believe that this design allows implementing all desirable use cases,
without enabling the undesirable ones. This is because control is in the
right hands.

> Of course, the extension might not be implemented, or the process
> might not have permission to use it -- but then the process might
> refuse to run.

In our model, a process has no means of refusing to run. We have complete
control over it, and we can make it believe whatever we want it to
believe.
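
As a toy illustration (invented names; in the real system the
interposition happens through the capability system, not through a
struct): if the child's entire view of the world comes through
parent-supplied capabilities, the parent can answer any probe however
it likes:

    /* Toy sketch of interposition.  Hypothetical names.              */
    #include <stdio.h>

    typedef struct {
        /* The child's only window on the world is this parent-
         * supplied query function; it cannot bypass it.              */
        int (*query)(const char *question);
    } environment_t;

    static int honest_answer(const char *q)         { (void)q; return 0; }
    static int whatever_parent_wants(const char *q) { (void)q; return 1; }

    static void child_main(environment_t *env)
    {
        /* The child probes for a "sealed memory" facility before it
         * agrees to run...                                           */
        if (!env->query("memory-sealed?")) {
            printf("child: refusing to run\n");
            return;
        }
        printf("child: running happily\n"); /* ...under full observation */
    }

    int main(void)
    {
        /* The parent supplies the child's entire environment, so the
         * answer is whatever the parent decides it should be.        */
        environment_t env = { whatever_parent_wants };
        (void)honest_answer;
        child_main(&env);   /* the child cannot tell it is being fooled */
        return 0;
    }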

> For the DRM to be effective it must be possible to verify that the
> system that implements the protection is known to really fulfill the
> promise. Hardware cryptographic devices are provided for that, and are
> put on many (most?) current mainboards.

Indeed, this is the real threat: We can't fool the server. If remote
attestation becomes commonplace, Disney will be able to deny our
non-treacherous system access altogether.
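
Very roughly, and as a toy sketch only -- the real TPM protocol uses
public-key signatures over PCR measurements and is far more involved --
the mechanism looks like this: the hardware signs a hash of the software
stack that actually booted, with a key no software (including the
parent) can extract, so a remote verifier learns which stack is really
running:

    /* Toy sketch of remote attestation.  Hypothetical names; the XOR
     * "signature" and the verifier knowing the key stand in for real
     * public-key signatures.                                         */
    #include <stdio.h>

    typedef struct {
        unsigned long measurement;  /* hash of the booted software stack */
        unsigned long signature;    /* over the measurement, by a key    */
    } quote_t;                      /* fused into the hardware           */

    /* Not extractable by any software, including the parent.         */
    static const unsigned long hw_key = 0x5EC2E7UL;

    static quote_t tpm_quote(unsigned long running_stack_hash)
    {
        quote_t q = { running_stack_hash, running_stack_hash ^ hw_key };
        return q;
    }

    /* The remote server admits only the blessed, treacherous stack.  */
    static int server_grants_access(quote_t q, unsigned long blessed)
    {
        return q.measurement == blessed
            && (q.signature ^ hw_key) == q.measurement;
    }

    int main(void)
    {
        unsigned long blessed = 0xD2AUL;   /* the stack Disney trusts   */
        unsigned long ours    = 0xF2EEUL;  /* our non-treacherous stack */

        /* In reality the hardware measures the stack itself, so we
         * cannot simply hand it a flattering hash as we do here.     */
        printf("blessed stack admitted: %d\n",
               server_grants_access(tpm_quote(blessed), blessed)); /* 1 */
        printf("our stack admitted:     %d\n",
               server_grants_access(tpm_quote(ours), blessed));    /* 0 */
        return 0;
    }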

That's why we need to fight the TPM stuff tooth and nail.

-antrik-



