
Re: Separate trusted computing designs


From: Christian Stüble
Subject: Re: Separate trusted computing designs
Date: Thu, 31 Aug 2006 20:06:46 +0200
User-agent: KMail/1.9.1

> To many questions I asked, you responded that you don't want to discuss
> them, or at least not on this forum.  That's fine, but it leaves us
> with very little left to discuss.
Yes, and I am sorry about that. But unfortunately I do not have the time to 
discuss the web pages of some projects or research groups, or to discuss
ideologies.

You posed a very simple challenge: give me meaningful use cases for TC.
When I gave you an example, the problems of this technology were already clear 
to me, and it is therefore IMO not necessary to discuss them here. 

The question was whether this use case is meaningful for Hurd users or not.
If it is, we can discuss whether it can be implemented without TC, or by using 
a better (existing) technology. If it is useful, but the Hurd rejects supporting 
it because the required security property also enables other use cases that
the Hurd wants to prevent by design, we can check whether the required differences 
can be abstracted away so that we can still work together on other components.

When a problem occurs that requires a TTP, it can often be solved more 
efficiently if one party agrees to give up some predefined control over its 
platform. The root of this local TTP is the TPM, which can ensure that both 
parties play fair (alternatively, the parties can verify fair play). The 
party that gives up some control does not have to be the end-user, but I 
myself prefer storing a movie on my own disk instead of on a remote (trusted) 
server. 
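To make the "verify fair play" idea a bit more concrete, here is a minimal
illustrative sketch (Python; toy code only, all names hypothetical, and the
HMAC merely stands in for the AIK signature a real TPM 1.2 would produce) of
one party measuring its software stack into a PCR-style register while the
other party checks the quoted value against the configuration it expects:

  import hashlib
  import hmac
  import os

  def extend(pcr: bytes, measurement: bytes) -> bytes:
      # PCR extend as in TPM 1.2: new = SHA-1(old || SHA-1(measurement)).
      return hashlib.sha1(pcr + hashlib.sha1(measurement).digest()).digest()

  def quote(pcr: bytes, nonce: bytes, aik: bytes) -> bytes:
      # Toy "quote": authenticate the PCR value plus a fresh verifier nonce.
      return hmac.new(aik, pcr + nonce, hashlib.sha1).digest()

  BOOT_CHAIN = (b"bootloader-v1", b"kernel-v2", b"media-agent-v3")

  # Platform side: measure the boot chain into a PCR (zeroed at reset).
  aik = b"toy-attestation-key"        # stand-in for the TPM's AIK
  pcr = b"\x00" * 20
  for component in BOOT_CHAIN:
      pcr = extend(pcr, component)

  # Verifier side: send a fresh nonce, recompute the expected PCR, compare.
  nonce = os.urandom(20)
  reported = quote(pcr, nonce, aik)
  expected = b"\x00" * 20
  for component in BOOT_CHAIN:
      expected = extend(expected, component)
  ok = hmac.compare_digest(reported, quote(expected, nonce, aik))
  print("platform runs the agreed configuration:", ok)

The point of the sketch is only that the check is mutual and mechanical:
neither party has to take the other's word about the platform state.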

What I do is figure out how systems can be designed using this idea: what 
can be done by design and what cannot. The TPM is only a temporary tool that
makes some assumptions more realistic. I am also interested in the TPM as a
concrete instantiation, but this is a completely different topic. In the future, 
there may be better solutions than the TPM. Maybe our work helps in finding such 
a solution, maybe not. But please understand that I do not have the time to
discuss again all aspects concerning trusted computing, DRM, and whether "full 
ownership" is necessary or not. IMO the conflicting interests of users,
platform owners, and content providers have to be resolved by a compromise that 
is acceptable to every involved party.

I do not promise that better solutions will be used in the future, and maybe 
all the horror scenarios you describe will happen on top of commercial 
applications. Who knows? I have been using Linux-only (except PERSEUS :-) systems 
for about 12 years now, and I am sure I can continue to do so. But IMO an
open-source alternative that does not fulfill user requirements cannot 
replace commercial solutions.


> As the projects we are talking about are publicly funded research
> projects from a university, I would expect that any extraordinary
> claims on your side have at least a referencable justification, and
> that challenging the fundamental assumptions is not unheard of, but in
> fact expected and welcomed, even sought for.
Of course. But please not during this discussion.

> > > If you can enforce a property about a system, then it is not owned
> > > exclusively by another party.  That's a contradiction in terms.
> >
> > Maybe this depends on your definition of "owned", or maybe you have not
> > read my text carefully. I said that I want to enforce "my rules", not "my
> > properties". If the platform has certain properties that I accept (and
> > that imply that the platform owner cannot bypass security mechanisms
> > enforced by my agent), my agent will execute. Else it will not.
>
> My definition of ownership is quite narrow: It means exclusive right
> to possess, use and destroy something.  In the proposed technical
> measures to implement such mechanisms as you describe, the other party
> is not in complete possession and control over the computer: Rather,
> there is a chip in the computer which has content that they can not
> read out.  Thus, this chip is not part of their property.  
You are not consistent here. What is the difference between "possess" 
and "own"? A TPM owner is free to "use" keys stored within the TPM,
and s/he can also destroy all keys (at least since version 1.2). Later,
you additionally demand to "read out" the keys; that is more than use and destroy.
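The distinction can even be stated as an interface property. Here is a toy
sketch (Python; hypothetical names, not a real TPM API) of an object whose
owner can use and destroy a key, but by construction cannot read it out:

  import hashlib
  import hmac
  import os

  class ToyTPMKey:
      def __init__(self) -> None:
          self.__secret = os.urandom(32)   # created inside, never exported

      def sign(self, message: bytes) -> bytes:
          # "Use": produce a MAC over the message with the internal key.
          if self.__secret is None:
              raise RuntimeError("key was destroyed")
          return hmac.new(self.__secret, message, hashlib.sha256).digest()

      def destroy(self) -> None:
          # "Destroy": the owner may wipe the key (cf. owner clear in 1.2).
          self.__secret = None

  # The owner can use the key and destroy it; no method reads the key out.
  key = ToyTPMKey()
  tag = key.sign(b"contract")
  key.destroy()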

> How much 
> this affects the rest of the computer depends on whether and
> how the chip is used.
>
> > > What you can do is to engage in a contract with somebody else, where
> > > this other party will, for the purpose of the contract (ie, the
> > > implementation of a common will), alienate his ownership of the
> > > machine so that it can be used for the duration and purpose of the
> > > contract.  The contract may have provisions that guarantee your
> > > privacy for the use of it.
> > >
> > > But, the crucial issue is that for the duration the contract is
> > > engaged under such terms, the other party will *not* be the owner of
> > > the machine.
> >
> > Is the owner of a house not the owner, because s/he is not allowed to
> > open the electric meter? If you sign the contract with the power
> > supplier, you accept not to open it. And it becomes a part of your house.
> > Now, you are not the house owner any more? Sorry, but I do not understand
> > why a platform owner alienates his ownership by accepting a contract not
> > to access the internal state of an application.
>
> You are definitely not the owner of your house, at least not in
> Germany.  You do not even need to have a power meter: In Germany,
> there exist laws that regulate how you can modify your house, what
> type of extensions you are allowed to build, etc, and you need permits
> to do so.  This is because there is an acknowledged public interest in
> the safety and appearance of your structural modifications.  And this
> is only one small example, there are many other reasons why your house
> is not owned by you.
Okay, this statement shows again that you assume a very uncommon
definition of "ownership". You have a knife, but you are not allowed to
kill people with it. Does that mean you are no longer the owner of the knife?

This is why I pointed out several times that we have to distinguish between
forbidding a technique and forbidding its misuse.

> You may have some (even extensive) property rights to your house, but
> you do not have the exclusive right to possess, use and destroy it.
> The other rights are contracted away to the public.  In fact, you were
> only allowed to build your house in the first place because the public
> allowed you to do so.
>
> Of course, with such a narrow definition of ownership, we do not own
> very much, as any of our property is subject to outside
> interference in the case of emergencies etc, even our own body.
Nice example. According to your definition, you are not the owner of your 
body, because you are not allowed to kill yourself.

> However, it is useful for the sake of discussion to simplify away some
> of the more remote public rights to make more clear where the major
> parts of control come from.  
But as it stands, in my opinion your definition of ownership does not make 
sense at all, because it cannot be applied in practice. Why, then, do you 
require full ownership of a PC?

> But this is my point: The security 
> measures contained in "trusted computing" are overreaching.  Even our
> very dearest material property is not owned as strongly by us as is
> proposed for bits and bytes in a remote computer subjected to "trusted
> computing" policies.
>
> I like the power meter example, by the way.  The analogy to trusted
> computing is that the whole interior in your house is cut into many
> small slices and blocks, and that for each piece of furniture, each
> painting on the wall, each book on the shelf, and every recipe in the
> kitchen you have to negotiate a contract with a provider, and that
> contracts contain things like having to pay a buck every time
> you look at the painting for longer than 10s, or every time you open
> the cupboard and get a plate.  In other words: You will still own the
> bricks the house is made of, but everything inside it will be owned by
> somebody else.
I do not agree. Using trusted computing, you give away a predefined
amount of control in a clearly defined context, e.g., you will not access the
video adapter while the media player shows the movie. To continue the
meter example, you accept some components over which you only have limited
control, but you also have components that you can use freely. This obviously 
depends on the license. In practice, there are more concrete examples: you get 
your printer, toothbrush, shaver, etc. for free, but you have to pay whenever
you use it (buying ink, brushes, razor blades, ...). But in real life, you can 
decide whether you accept this business model when you sign the contract. If 
enough customers do not accept this business model, another vendor will 
provide other business models. This is real life. Why shouldn't this happen 
in virtual life, too? Why do you require full ownership within your PC?
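As a sketch of what "a predefined amount of control in a clearly defined
context" could mean technically, here is a toy Python model (all names
hypothetical, not any real system's API) in which the owner lends the video
adapter to the player for exactly one playback session, after which control
reverts automatically:

  from contextlib import contextmanager

  class VideoAdapter:
      def __init__(self) -> None:
          self.holder = "owner"

  @contextmanager
  def scoped_control(device: VideoAdapter, agent: str):
      # Hand the device to the agent only for the scope of the context.
      device.holder = agent
      try:
          yield device
      finally:
          device.holder = "owner"   # control reverts by construction

  adapter = VideoAdapter()
  with scoped_control(adapter, "media-player") as dev:
      print("during playback:", dev.holder)
  print("after playback:", adapter.holder)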

> > > > > > If there
> > > > > > are two
> > > > > > comparable open operating systems - one providing these features
> > > > > > and one that
> > > > > > does not, I would select the one that does. I do not want to
> > > > > > discuss the
> > > > > > opinion of the government or the industry. And I don't want to
> > > > > > discuss
> > > > > > whether people are intelligent enough to use privacy-protecting
> > > > > > features or
> > > > > > not. If other people do not want to use them, they don't have to.
> > > > > > My requirement is that they have the chance to decide (explicitly
> > > > > > or by defining, or using a predefined, privacy policy enforced by
> > > > > > the system).
> > > > >
> > > > > I am always impressed how easily some fall to the fallacy that the
> > > > > use of this technology is voluntary for the people.  It is not. 
> > > > > First, the use of the technology will be required to access the
> > > > > content.  And people will need to access the content, to be able to
> > > > > participate in our culture and society.  All the major cultural
> > > > > distribution channels are completely owned by the big industry,
> > > > > exactly because this allows these industries to have a grip-hold
> > > > > over our culture.  There is an option for popular struggle against
> > > > > this, but it will require a huge effort, and success is by no means
> > > > > guaranteed.
> > > >
> > > > I did not talk about TC in general, but about the "privacy-protecting
> > > > agent".
> > >
> > > I am not sure what you mean by that term.  The crucial point here is
> > > that TC removes the choice from the people which software to run.
> >
> > I never said that I think that the users will have the free choice to use
> > TC technology or not.  Different circumstances may force them to use it,
> > e.g., their employer, or the fact that they prefer an operating system
> > that does not allow disabling TC support.
> >
> > I suggested a use case that uses TC in a meaningful sense (at least in my
> > opinion), and as a response people are asking me whether users will be
> > able to use this technology. My statement was that I would like to have
> > such a system and that I am currently not interested in opinions of the
> > industry or the government, or whether other people need this feature.
>
> I have trouble following you here.  If nobody else uses this
> technology, how will _you_ be able to use it?  The technology only
> makes sense if more than one party takes part in it.
>
> This is by the way the reason that the "trusted computing" technology
> is inseparably tied to social issues and politics: It is a technology
> that affects the relationship of power between people.  Its very
> existence is in the domain of contracts.  Thus, every question related
> to "trusted computing" is a social question.  In comparison, the
> technical issues are pale in relevance.
>
> [...]
>
> > > The views of the FSF on DRM and TC are well-published, and easily
> > > available.  For example, search for "TiVo-ization".
> > >
> > > What is incompatible with the free software principles is exactly
> > > this: I am only free to run the software that I want to run, with my
> > > modifications, if the hardware obeys my command.  If the hardware puts
> > > somebody else's security policy over mine, I lose my freedom to run
> > > modified versions of the software.  This is why any attempt to enforce
> > > the security policy of the author or distributor of a free software
> > > work is in direct conflict with the rights given by the free software
> > > license to the recipient of the work.
> >
> > What does the view say about a user that freely accepts a policy? I
> > _never_ talked about a system that "puts somebody else's security policy
> > over mine".
>
> First, again, the user never "freely" accepts a security policy.  The
> only reason to accept a security policy is to get at the information
> subjected to it.  The acceptance of a security policy is always the
> means to a goal, not a goal in itself.
Before I start another discussion here: do you already have a definition of
what a free choice is? And please make sure that real life can actually offer 
such free choices.

Regards,
Chris

> Furthermore, any mode of distribution of free software that
> effectively restricts the way in which the software can be
> redistributed, modified or executed violates the free software
> principles.  It does not matter if the user accepts the policy or not:
> The author of the software does not accept the provider's policy, and
> that's where the buck stops (assuming that the intent of the FSF as I
> understand it is implemented in the GPLv3 final version).
>
> As for how to define the distinctions between the various use cases:
> The GPL is not a technical document, but a legal document.  Thus, it
> will respond to the various real world challenges that are considered
> a threat to the free software asset of the FSF.  For example, a couple
> of years ago, web services were considered as a threat in a similar
> way that DRM is considered a threat now.  In that respect you may be right
> that from a GPL point of view examples such as a hosted service and
> DRM are not easily distinguishable.  In practice, however, people have
> not built such hosted systems and marketed them at any significant
> scale in a way that posed a threat to the free software principles.
> This is because usually there is no cryptographic coupling between the
> software and the data it processes in such hosted services.  If there
> were, we would probably see a reaction to that.  Well, with DRM there
> is such a coupling, and, surprise, there is a reaction.
>
> I am not sure how we ended up here, but let me stress again that for
> me the problems with "trusted computing" are far worse than its
> inherent conflict with the free software principles in some of its
> applications.  I have given sufficient examples and rationale
> elsewhere.
>
> Thanks,
> Marcus



