Re: Challenge: Confinement


From: Marcus Brinkmann
Subject: Re: Challenge: Confinement
Date: Tue, 15 Aug 2006 21:17:34 +0200
User-agent: Wanderlust/2.14.0 (Africa) SEMI/1.14.6 (Maruoka) FLIM/1.14.7 (Sanjō) APEL/10.6 Emacs/21.4 (i486-pc-linux-gnu) MULE/5.0 (SAKAKI)

Hi Christian,

thank you for your offer and inquiry.  I want to reply in full at a
later time.  I do not have time at this point to go into the details,
but the discussion is impossible without going into some depth.

I have an opinion on some issues your research raises, but in fact, I
have even more questions than opinions.  So, below, you will find an
awful lot of questions.  I would be interested in a reply to even just
some of them.  All of these questions are genuine: In formulating them, I
have temporarily suspended my educated scepticism of anything "trusted
computing", and just asked the most obvious and natural questions.

In some sense, I feel pretty bad about asking so many questions and
offering so little substance myself.  However, I have considered
replying in detail and found that I do not even know what your
position is on many issues, and what the justifications are for
"hidden assumptions" that are not clearly referenced.  Thus, I feel
that starting with questions is more appropriate than replying to my
possible misconceptions about your work.

I assume you are familiar with
http://lists.gnu.org/archive/html/l4-hurd/2006-05/msg00184.html
http://lists.gnu.org/archive/html/l4-hurd/2006-05/msg00324.html

At Mon, 14 Aug 2006 17:47:34 +0200,
Christian Stüble <address@hidden> wrote:
> General: Since I am not aware of a multi-server system design that fulfills
> today's requirements, our group has to design and implement a lot of services
> from scratch - wasting a lot of time, since our main focus is security.
> Therefore, we would like to collaborate with other projects like hurd and
> coyotos, to share design ideas, use cases and implementations. Unfortunately,
> this seems to be impossible due to conflicting requirements (at least with
> hurd): We are using TC technology and we are even developing DRM-like
> applications (whatever this means).

It is only impossible if the aspects of "trusted computing" that I
find unacceptable are inseparable from the rest of the system
architecture.  However, _if_ they are inseparable, then that, IMO,
points to a defect in the system architecture, because the user in
such an architecture perpetually alienates his rights to a major part
of his computing infrastructure (as explained in the emails referenced
above).

So, the question to you is: Can you clearly separate the aspects of
your system design that require "trusted computing" from those aspects
of your system design that don't?

Examples where this may be a problem for you are: Window management
(does the user have exclusive control over the window content?),
device drivers, debugging, virtualization, etc.

> We do this for the following reasons: On 
> the one hand, it is IMO better to prove that a better solution exists if you
> want to criticise existing technology.

What is your metric for "better solution"?  To make clear what I mean
here, consider the following analogy: Do I have to provide a better
solution to mass extinction when criticizing nuclear weapons?

Another way to formulate this question is: What is the perceived
problem you are trying to solve?

> On the other hand, TC is currently 
> the only technology that is widely available and fulfills (IMO) important 
> security requirements.

What are these security requirements, and which party has requirements
that cannot be met by any other existing technology, like plain old
cryptography, etc.?

> Yes, it could be misused (like nearly any security-related product),

Do you expect it to be misused?  In which form and to what extent?  If
you expect it to be misused, why do you think that the benefits will
outweigh the costs?

> but our main development/research goal is an
> architecture that prevents misuse but allows many relevant use cases.

Can you please give references to existing results in this area?
I am aware of the following:

 Ahmad-Reza Sadeghi, Christian Stüble: Property-based Attestation for
 Computing Platforms: Caring about Properties, not Mechanisms; New
 Security Paradigms Workshop (NSPW), 2004.

Is there more?  The above paper suggests that mechanisms are verified
by a "trusted third party" to fulfill certain properties.  It is
suggested that the third party will be a government agency.  Some of
the questions that immediately arise are not answered: Who will pay
for the service by this third party (and how)?  Why would the industry
give up control over the mechanisms if they have a chance to get total
control with less effort?  Why would the public subject itself to
these substantial restrictions, if they have a chance to get
unencumbered access with less effort?  There is more to this topic,
but I don't want to overdo it in this first quick reply.
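
To make sure we are talking about the same mechanism, here is a tiny
sketch (Python, purely illustrative; every name, the HMAC stand-in for
a real signature, and the certificate format are my own assumptions,
not taken from the paper or from any real TSS/TPM interface) of how I
read "property-based" attestation: the trusted third party certifies a
mapping from a concrete binary measurement to an abstract property,
and the verifier checks that certificate instead of comparing against
a fixed list of known binary hashes.

import hashlib
import hmac

TTP_KEY = b"demo-ttp-key"   # stand-in for the third party's signing key

def ttp_certify(binary, prop):
    # The trusted third party inspects the binary and certifies that it
    # provides the abstract property (e.g. "enforces-fair-use").
    measurement = hashlib.sha256(binary).hexdigest()
    tag = hmac.new(TTP_KEY, (measurement + ":" + prop).encode(),
                   hashlib.sha256).hexdigest()
    return {"measurement": measurement, "property": prop, "tag": tag}

def verifier_accepts(cert, reported_measurement, required_property):
    # The remote verifier trusts the third party, not a list of hashes.
    expected = hmac.new(TTP_KEY, (cert["measurement"] + ":" +
                                  cert["property"]).encode(),
                        hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, cert["tag"])
            and cert["measurement"] == reported_measurement
            and cert["property"] == required_property)

player = b"some media player build"
cert = ttp_certify(player, "enforces-fair-use")
reported = hashlib.sha256(player).hexdigest()   # what the platform reports
print(verifier_accepts(cert, reported, "enforces-fair-use"))   # True

The questions above (who runs and pays for the third party, and why
either side would accept it) are all about the ttp_certify step, not
about the cryptography around it.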

> The 
> same holds for the DRM-like applications: We develop applications that allow 
> the enforcement of security policies in a distributed environment, but which 
> consider user rights and the law (keywords: multilateral security, fair use).

My questions: What other user rights do you consider beside fair use?
How do you express complete fair use guarantees (properties) in a
property-based security infrastructure?  How do you express other
guarantees and rights in a property-based security infrastructure?

> Challenge: I would like to give a more concrete example of an application
> that IMO requires confinement (e.g., based on the security properties
> offered by TC technology): As you may know, in Germany we have strict laws
> regarding user privacy. E.g., a company is in general not allowed to give
> personal information to other institutions. Nevertheless, it is sometimes
> hard to prove that there was a leakage of information, or the company may
> be in another country. Therefore, one of our goals is to develop an
> environment that allows users to create an agent that controls their
> personal information and enforces, e.g., within the environment of a
> company, that the company can only use the personal information once, or
> that it cannot be shared with other companies, etc. But this requires that
> the owner of the platform executing the agent cannot access the internal
> state of the agent. A lot of people would call the agent a DRM
> application...

Can you give more details about use cases?  What specific information
do you think people will micromanage in this way?  Given that the
public today is offering its most intimate data for the asking (for
example, via "payback/discount cards"), what demand do you see for
this technology on the personal side?  Given that there is a huge
monetary benefit from processing user data, what demand do you see to
increase liability and infrastructure cost on the industry side?
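
Just so it is clear what I picture when reading your description, here
is a toy model (Python again, purely illustrative, all names invented
by me) of such a "use-once" agent: the personal data lives only inside
the agent, the agent answers a permitted query at most once, and the
whole construction only holds up if the platform owner can neither
read nor roll back the agent's state, which is exactly the property
under discussion.

class PersonalDataAgent:
    # Toy "use-once" agent; a real design would need sealed storage and
    # attestation to keep the host from reading or resetting this state.
    def __init__(self, personal_data, allowed_uses=1):
        self._data = personal_data        # never handed out directly
        self._uses_left = allowed_uses    # the policy: use once

    def query(self, purpose):
        # Answer at most once, and only for a purpose the data subject
        # agreed to; everything else is refused.
        if purpose != "credit-check":
            return None
        if self._uses_left <= 0:
            return None
        self._uses_left -= 1
        return self._data["credit_score"]   # a derived answer, not the record

agent = PersonalDataAgent({"name": "Alice", "credit_score": 710})
print(agent.query("credit-check"))   # 710
print(agent.query("credit-check"))   # None: the single permitted use is gone
print(agent.query("marketing"))      # None: purpose not covered by consent

My questions about demand are really about whether anybody will bother
to deploy and pay for the machinery that keeps such an object opaque.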

> Another application, currently an (open) master's thesis, is to develop a P2P
> filesharing client that uses DAA to connect to other clients. The motivation
> is to prevent modified clients that allow the platform owner to see the
> connection table (and thus to break the anonymity of clients). But this
> only makes sense if the platform owner cannot access the internal state of
> applications...

Assuming that this technology succeeds, how do you think industry and
governments will react?  What is the incentive for the platform owner
to waive control over the internal state of the application?  How
strong is the demand among legal P2P filesharing users to hide
connection data?
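
For reference, this is how I read the proposed client behaviour (again
a Python sketch; the DAA exchange is only simulated here by a set of
"known good" client measurements, and all identifiers are mine): a
node admits a peer into its connection table only after the peer
presents evidence that it runs an unmodified client, so a client
patched to dump the table is simply never admitted.

import hashlib

# Simulated stand-in for what a DAA-based attestation would convey.
KNOWN_GOOD_CLIENTS = {hashlib.sha256(b"official client 1.0").hexdigest()}

class Node:
    def __init__(self):
        self.connection_table = []   # the data a modified client could leak

    def admit_peer(self, peer_addr, attested_measurement):
        # Admit the peer only if its (simulated) attestation says it is
        # an unmodified client binary.
        if attested_measurement not in KNOWN_GOOD_CLIENTS:
            return False
        self.connection_table.append(peer_addr)
        return True

node = Node()
good = hashlib.sha256(b"official client 1.0").hexdigest()
bad = hashlib.sha256(b"patched client").hexdigest()
print(node.admit_peer("10.0.0.2:6881", good))   # True
print(node.admit_peer("10.0.0.3:6881", bad))    # False

Even then, the honest node's own table only stays hidden if the node's
owner cannot inspect its memory, which brings us back to the incentive
question above.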

That's it for now.  It's not exhaustive, but a reasonable start, I think.

Thanks,
Marcus
