Re: Separate trusted computing designs


From: Marcus Brinkmann
Subject: Re: Separate trusted computing designs
Date: Thu, 17 Aug 2006 09:18:40 +0200 (CEST)

Christian Stüble <address@hidden> wrote:
> Hi Marcus, hi all,

> thanks for the questions. As I said in my last message, I would prefer
> to discuss only OS-related questions on this list...

I think it's a mistake for a practical project to ignore the implications
and connections to real-world issues.  Western intellectual culture suffers
dramatically from too strong a focus on domain expertise and too little
cross-disciplinary thinking.

> To prevent misunderstandings: I don't want to promote TC, nor do I like
> its technical instantiation completely. IMO there are a lot of technical
> and social issues to be corrected; that's the reason why I am working on
> this topic. Nevertheless, a lot of intelligent researchers have worked on
> it, and therefore it makes IMO sense to analyse what can be done with
> this technology. In fact, who else should do this?

If the technology is fundamentally flawed, then the correct answer is
"nobody", and instead it should be rejected outright.  History shows that
intelligence and morality are completely independent properties.

Of course, it makes sense to analyse what can be done with the technology even
if one rejects it.  However, one would come to very different conclusions.  In
this sense, "what can be done" translates for me to "what are the combined
effects of its application on society" rather than "what is the best we can
make out of it".

> You are asking a lot of questions that I cannot answer, because they are
> the well-known "open issues". The challenge is to be able to answer them
> some day...

If they are open issues, where does your research group's confidence come
from that they not only can be solved, but in fact are solved in your
design?  From the EMSCB home page (under "Benefits"):

"This inherent conflict between the interests and security requirements of
end-users (protection of privacy and self-determination) and those of content
and application providers can be solved by a multilateral trustworthy
computing platform that guarantees a balance among interests of all involved
parties."

There is no qualifier in that sentence.  Maybe I will understand you better
if you point out which of the questions I asked belong in the category
"open problem" and which do not.

> A last note: You asked for use cases that may require security
> properties as provided by TC, but that could be of interest for users of
> the Hurd. In fact, these are more or less the use cases I would be
> interested in.

I asked for use cases that have a clear benefit for the public as a whole or
the free software community.

> If there are two comparable open operating systems - one providing these
> features and one that does not, I would select the one that does. I do
> not want to discuss the opinion of the government or the industry. And I
> don't want to discuss whether people are intelligent enough to use
> privacy-protecting features or not. If other people do not want to use
> them, they don't have to. My requirement is that they have the chance to
> decide (explicitly or by defining, or using a predefined, privacy policy
> enforced by the system).

I am always impressed by how easily some fall for the fallacy that the use
of this technology is voluntary for the people.  It is not.  First, use of
the technology will be required to access the content.  And people will
need to access the content to be able to participate in our culture and
society.  All the major cultural distribution channels are completely owned
by the big industry, exactly because this allows these industries to keep a
tight grip on our culture.  There is an option for popular struggle against
this, but it will require a huge effort, and success is by no means
guaranteed.

In the end, this technology, if it succeeds, will be pushed down people's
throats.  Everybody seems to know and admit this except the "intelligent
researchers" (well, and the marketing departments of the big corporations).

Even the publications of the "trusted computing" group admit this quite
explicitly.  The "Best Practices and Principles" document says a lot about
how bad it is to use this technology to coerce people into using it, but
then frankly admits that "preventing potentially coercive and
anticompetitive behavior is outside the scope of TCG", and earlier that
"there are inherent limitations that a specification setting organization
has with respect to enforcement".

In this light, the strong emphasis on "opt-in" and "voluntary use" put
forward by proponents of the technology is little more than a proactive,
blame-shifting defense.  People will end up not only with the damage, but
also with the guilt of having let it happen.

Below (and in the emails I linked to) I explain why I consider "trusted
computing" a security threat.  For active security threats, passive defense
is not enough.  This should be a concept very familiar to you, given your
research background.  For example, we do not consider it sufficient that a
user has the option not to click on the "open attachment" button and
thereby prevent the installation of a virus.  Nor do we consider it
sufficient that the user can refrain from opening a Word document and
executing the macros it contains.  In the same sense, I do not consider it
sufficient for a user to be able to prevent, by inaction, the installation
and use of "trusted computing" technology.  Active security threats require
an active defense mechanism.

> > I assume you are familiar with
> > http://lists.gnu.org/archive/html/l4-hurd/2006-05/msg00184.html
> > http://lists.gnu.org/archive/html/l4-hurd/2006-05/msg00324.html
> Not fully. I read it quickly yesterday evening, but I have to find more
> time to read it more deeply. Sorry if I use other terms for now.

I still plan to publish a revised version as a white paper, but it is
unclear when I will get to it, so don't put off reading the emails while
waiting for that.

> > So, the question to you is: Can you clearly separate the aspects of
> > your system design that require "trusted computing" from those aspects
> > of your system design that don't?
> From a high-level view, definitely yes. The main concept we are using TC
> for is to enable what we call a "trusted channel": secure channels
> between (remote) compartments that allow the involved parties (sender,
> receiver) to get information about the 'properties' of the communication
> partner. A property could be the information whether the user can access
> the state of the process or not. But it could also be a list of hash
> values (e.g., IMA).
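
If I understand the "trusted channel" idea correctly, the receiving side's
decision then boils down to a property check along the lines of the sketch
below.  This is only a rough illustration of how I read your description;
the structures, the names and the property "user-can-inspect-state" are
made up, and nothing here corresponds to a real TPM or IMA interface.

/* Sketch: before payload data flows over a "trusted channel", each
   endpoint presents a list of properties (e.g. measurement hashes),
   and the peer decides whether to proceed.  Hypothetical code.  */

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define HASH_LEN 20             /* SHA-1-sized measurement, as in IMA */
#define MAX_PROPERTIES 8

struct property
{
  const char *name;             /* e.g. "user-can-inspect-state" */
  uint8_t value[HASH_LEN];      /* measurement hash or a simple flag */
};

struct endpoint_report
{
  struct property props[MAX_PROPERTIES];
  size_t nprops;
};

/* Receiver-side policy: accept the channel only if the peer reports
   every required property with the expected value.  */
bool
channel_acceptable (const struct endpoint_report *peer,
                    const struct property *required, size_t nrequired)
{
  for (size_t i = 0; i < nrequired; i++)
    {
      bool found = false;
      for (size_t j = 0; j < peer->nprops; j++)
        if (strcmp (peer->props[j].name, required[i].name) == 0
            && memcmp (peer->props[j].value, required[i].value,
                       HASH_LEN) == 0)
          {
            found = true;
            break;
          }
      if (!found)
        return false;           /* missing or mismatching property */
    }
  return true;
}

int
main (void)
{
  struct endpoint_report peer =
    { .nprops = 1,
      .props = { { .name = "user-can-inspect-state", .value = { 0 } } } };
  struct property required[] =
    { { .name = "user-can-inspect-state", .value = { 0 } } };

  printf ("channel acceptable: %d\n",
          channel_acceptable (&peer, required, 1));
  return 0;
}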

> In our design, we try to abstract away the functionality offered by,
> e.g., a TPM and to use more generic concepts. Example: A service
> providing persistent storage for applications provides different kinds
> of "secure storage": bound to a user, bound to the TCB, bound to the
> application behavior (including the TCB), whatever. If some properties
> are missing (in our design this will depend on a user-defined policy, in
> your design maybe a compile-time flag), then applications cannot use them
> (and applications that require them will not work).
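
The "secure storage" abstraction, as I read it, would then look roughly as
follows from the application's point of view: the service offers several
binding kinds, and a request simply fails when the policy (or, in another
design, the build configuration) does not provide that kind.  Again, the
interface, the names and the policy table are invented for illustration.

/* Sketch of a storage service offering several kinds of "secure
   storage"; requests for a binding the policy does not provide fail.
   Hypothetical interface, not a real API.  */

#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

enum storage_binding
{
  BIND_TO_USER,                 /* container tied to the user */
  BIND_TO_TCB,                  /* container tied to the TCB */
  BIND_TO_APP_BEHAVIOR          /* container tied to measured app state */
};

/* Which bindings this installation offers.  In one design this table is
   filled from a user-defined policy, in another it could be fixed at
   build time.  */
static const bool binding_available[] =
{
  [BIND_TO_USER] = true,
  [BIND_TO_TCB] = true,
  [BIND_TO_APP_BEHAVIOR] = false  /* e.g. switched off by the owner */
};

int
secure_storage_open (enum storage_binding binding, const char *name)
{
  if (!binding_available[binding])
    return -ENOSYS;             /* not offered; the application must cope */
  /* ... allocate or unseal a container keyed to the requested binding ... */
  printf ("opened container '%s' with binding %d\n", name, (int) binding);
  return 0;
}

int
main (void)
{
  if (secure_storage_open (BIND_TO_APP_BEHAVIOR, "app-state") < 0)
    printf ("behaviour-bound storage unavailable; the application must "
            "fall back or refuse to run\n");
  return 0;
}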

> We have not yet finished deriving low-level requirements from our
> high-level ones, but maybe the difference between "your" design and
> "our" design is only a configuration option of the microkernel, or a
> command line option, or only an entry of the system-wide security
> policy. Would that be acceptable?

What is acceptable or not in my view does not depend on technicalities, but
on functionality.  My view is that "trusted computing" technology is a
security threat: it takes away ownership of the hardware from the user and
puts it into the hands of the "trusted computing" manufacturer and the
application providers.  The user loses, piece by piece, control over the
computer.  This is comparable to an invasion by spyware and other malware.
Note that in those cases the user also often authorizes the installation
explicitly (the user is not aware of what he is authorizing, but the same
will be true for the vast majority of users of "trusted computing" systems,
until it is too late).

> > Examples where this may be a problem for you are: Window management
> > (does the user have exclusive control over the window content?),
> > device drivers, debugging, virtualization, etc.
> This is (except for the elementary security properties provided by the
> underlying virtualization layer, e.g., a microkernel) an implementation
> detail of the appropriate service. There may be implementations enforcing
> strong isolation between compartments and others that do not. That's the
> basic idea behind our high-level design of how to provide multilateral
> security: The system enforces the user-defined security policy with one
> exception: Applications can decide themselves whether they want to
> continue execution based on the (integrity) information they get (e.g.,
> whether the GUI enforces isolation or not). But this requires that users
> cannot access the applications' internal state.
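
If I read this correctly, that one exception amounts, on the application
side, to something like the sketch below: the application looks at an
(attested) property of the GUI server and decides on its own whether to
keep running.  The query function is a hypothetical stand-in; in your
design the answer would presumably arrive over a trusted channel.

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical query: does the GUI server report that it isolates the
   windows of different compartments from one another?  A constant here;
   in a real system this would be an attested property.  */
static bool
gui_reports_isolation (void)
{
  return false;
}

int
main (void)
{
  /* The system enforces the user's policy; all the application can do
     is decide, from the reported properties, whether to go on.  */
  if (!gui_reports_isolation ())
    {
      fprintf (stderr, "GUI does not enforce isolation; refusing to run\n");
      return EXIT_FAILURE;
    }
  puts ("GUI isolation reported; continuing");
  return EXIT_SUCCESS;
}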

That's incompatible with my ideas on user freedom and on protecting the
user from the malicious influence of applications.  It is also incompatible
with the free software principles.

Thanks,
Marcus



