Re: Separate trusted computing designs


From: Michal Suchanek
Subject: Re: Separate trusted computing designs
Date: Thu, 31 Aug 2006 14:22:42 +0200

On 8/31/06, Christian Stüble <address@hidden> wrote:
On Wednesday, 30 August 2006 21:14, Michal Suchanek wrote:
> > > If you can enforce a property about a system, then it is not owned
> > > exclusively by another party.  That's a contradiction in terms.
> >
> > Maybe this depends on your definition of "owned", or maybe you have not
> > read my text carefully. I said that I want to enforce "my rules", not "my
> > properties". If the platform has certain properties that I accept (and
> > that imply that the platform owner cannot bypass security mechanisms
> > enforced by my agent), my agent will execute. Else it will not.
>
> You confuse two cases:
>
> a) I want to buy a service, and to ensure that the service is provided
> in the way I requested I rely on TPM verification. I run my software
> on somebody else's hardware.
>
> b) I want to buy a service, and the provider requests that it runs on
> certain hardware/OS/PC color/whatever. Today I have the choice to
> access the service in any way I want. TPM would forbid it.
I do not see where I confuse cases. I was not talking about attestation
as in case a). My "Privacy Agent" is case b), but in this case I am the
provider, and I request that the underlying OS provide strong isolation
such that neither the platform owner nor any other user can access the
state of my agent. Today my only choice is either not to give my personal
data to another party at all, or to trust that party. TPM makes it
possible to enforce certain properties. Yes. And in my use case, this is
what I want.

Well, the difference between case a) and case b) is mostly which side
you stand on.


> > If a program requires a specific amount of memory (or a specific vga
> > adapter) for execution, does this mean that the platform is not owned
> > exclusively? In my scenario, the owner can define any security policy he
> > or she would like. I will never be able to change this technically. But I can
> > decide whether my agent will be executed in this environment or not.
>
> But without TPM I am free to emulate the required hardware in any way
> I see fit. The program may break but if I emulate it well enough it
> will work. TPM forbids this.
Trivial implementations may do so. But this consequence is not inevitable!
The idea of property-based attestation, e.g., is to provide more abstract
properties that can be attested/sealed to. This way, either more memory or
an appropriate emulation will work.
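To make the idea concrete, here is a rough sketch (in Python, with
made-up names and a simplified trusted-issuer model, not the actual
protocol from the paper): a trusted party certifies that several
concrete configurations all provide the same abstract property, and the
verifier checks only the property, not the exact binary.

    # Toy sketch of property-based attestation. A trusted issuer
    # certifies that several concrete configurations (identified by
    # their measurement hashes) provide the same abstract property,
    # so a patched OS or a faithful emulation can satisfy the same
    # requirement.
    TRUSTED_PROPERTY_CERTS = {
        "strong-isolation": {
            "hash-of-os-build-A",  # hypothetical measured configurations
            "hash-of-os-build-B",  # e.g. a patched variant or an emulator
        },
    }

    def attest_property(measured_hash: str, required: str) -> bool:
        # The verifier asks "does the platform provide the property?"
        # instead of "is the platform exactly configuration A?".
        return measured_hash in TRUSTED_PROPERTY_CERTS.get(required, set())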

I do not see how that can be done. This is one of the flaws of TPM -
it only attests that the signed components are _exactly_ something,
and collecting the signatures of all possible variants and patches of
an OS will be a hard task.
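To illustrate why exact attestation is so brittle, here is a minimal
sketch in Python (the boot components are made up, but the extend
operation is the one TPM 1.2 actually uses):

    import hashlib

    def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
        # TPM 1.2 extend: new PCR = SHA-1(old PCR || measurement)
        return hashlib.sha1(pcr + measurement).digest()

    pcr = bytes(20)  # PCRs start at all zeros
    for component in [b"bootloader", b"kernel-2.6.17", b"initrd"]:
        pcr = pcr_extend(pcr, hashlib.sha1(component).digest())

    # A one-byte patch to any measured component changes the final PCR
    # completely, so anything attested or sealed to the raw value breaks
    # on every update or variant.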


>
> > > What you can do is to engage in a contract with somebody else, where
> > > this other party will, for the purpose of the contract (ie, the
> > > implementation of a common will), alienate his ownership of the
> > > machine so that it can be used for the duration and purpose of the
> > > contract.  The contract may have provisions that guarantee your
> > > privacy for the use of it.
> > >
> > > But, the crucial issue is that for the duration the contract is
> > > engaged under such terms, the other party will *not* be the owner of
> > > the machine.
> >
> > Is the owner of a house not the owner, because s/he is not allowed to
> > open the electric meter? If you sign the contract with the power
> > supplier, you accept not to open it. And it becomes a part of your house.
> > Now, are you not the house owner any more? Sorry, but I do not understand
> > why a platform owner alienates his ownership by accepting a contract not
> > to access the internal state of an application.
>
> You provide an example that appears to be similar on the surface but
> has different implications.
>
> The meter
>  - is not actively used by you, it is in fact used by the service
> provider, you only use the electricity which is in no way impeded by
> the meter (as opposed to a poorly developed application that you use to
> access content)

Most importantly, you can physically open the meter; it merely violates
the contract, and you have to face the consequences. It is impractical
to do but certainly possible, unlike with TPM-enforced policies.

If my agent is executed on a platform of a different owner and it counts,
e.g., how often my email address is used, it is exactly the same scenario.

And how is counting the number of times your address was used useful
(how is it more useful to do on somebody else's hardware)?

> > > > > > This is (except for the elementary security properties
> > > > > > provided by the underlying virtualization layer, e.g., a
> > > > > > microkernel) an implementation detail of the appropriate
> > > > > > service. There may be implementations enforcing strong
> > > > > > isolation between compartments and others that do not.
> > > > > > That's the basic idea behind our high-level design for
> > > > > > providing multilateral security: the system enforces the
> > > > > > user-defined security policy with one exception:
> > > > > > applications can decide themselves whether they want to
> > > > > > continue execution based on the (integrity) information
> > > > > > they get (e.g., whether the GUI enforces isolation or
> > > > > > not). But this requires that users cannot access the
> > > > > > application's internal state.
> > > > >
> > > > > That's incompatible with my ideas on user freedom and protecting
> > > > > the user from the malicious influences of applications.
> > > >
> > > > I know. But this is IMO a basic requirement to be able to provide
> > > > some kind of multilateral security. A negotiation of policies
> > > > 'before' the application is executed.
> > >
> > > It's not a requirement to provide multilateral security, it is only a
> > > requirement for an attempt to enforce multilateral security by
> > > technological means.  Issues of multilateral security have existed
> > > since the first time people engaged in contracts with each other.
> >
> > Yes.
> >
> > > The problem with negotiation of policies is that balanced policies as
> > > they exist in our society are not representable in a computer,
> >
> > Of course not. And nobody wants to replace the judge with a computer.
> > But if rights can be enforced technically, I prefer this solution over
> > the good will of the software vendor or the judges. Moreover, I think that
> > we often have to prove that a better solution exists to convince judges
> > to "ban" the not-so-good solutions. In the real world, even the bad
> > solutions solve some problems.
>
> But DRM will represent exactly the good-will of the software vendors,
> and it will technologically prevent any override by judges or anybody
> else.
Fortunately there is more than just the technology. If providers ask to
protect their rights, and there is a solution that helps the industry but
restricts some rights of users, nobody will care. If you criticise their
solution, they will tell you and everyone else about the advantages of
their solution. Politicians will not understand.

The politicians will certainly not understand. The industry lobbyists
can bribe them much better than I can. I wonder whether the technology is
used to protect the rights of the industry or to enforce something that
would not be enforceable by law or any other means (like CSS and
dividing the world into zones between which movies are incompatible).
Anyway, it looks like the law is converging towards being completely
defined by the industry, so this distinction may become meaningless in a
while.


But if you can demonstrate that you can solve the same problems but
additionally protect the users' rights, politicians will understand and
they will directly or indirectly force the other vendors to provide the
better solution. This is at least my experience in Germany. An example is
the improved reliability and security of newer Windows versions. Without
Linux showing that it is possible, IMO nobody would ever have tried to
make Windows more reliable and more secure. And this happened only
because of customer requirements...

It is because people have the option to use GNU/Linux, and Microsoft
had to compete with that. It is in no way related to politics. DRM allows
complete lock-in (i.e., documents accessible only in the applications of
the same software vendor), so this will no longer apply once DRM becomes
widespread.



>
> > > and
> > > that the distribution of power today will often do away with
> > > negotiation altogether.
> > >
> > > I think it is very important to understand what "balanced policies"
> > > means in our society.  For example, if an employer asks in a job
> > > interview if the job applicant is pregnant or wants a child in the
> > > near future, the applicant is allowed to consciously lie.
> >
> > Good example. In one of our papers we suggested encrypting the PCR
> > values, and/or allowing users to lie about their values, and allowing
> > decryption only in the context of a conflict, e.g., before a judge.
>
> This means that there must exist some sort of 'judge-key' which is
> administered by somebody.
> So in this case the benefit of using systems with TPM over TPM-free
> systems is not obvious.
The key can be administered by the platform owner himself or herself. In
the case of a conflict, s/he can prove that an appropriate environment
was used by decrypting the encrypted PCR values.
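As a rough sketch of what I mean (hypothetical key handling in Python,
not the actual protocol from the paper): the PCR values are encrypted
under a key that only the platform owner holds, so a verifier learns
nothing in normal operation, but in a dispute the owner can decrypt them
and prove which environment was used.

    # Hedged sketch: the owner encrypts the PCR values; others see only
    # ciphertext, but in a conflict the owner can decrypt and prove the
    # environment. Requires the "cryptography" package.
    from cryptography.fernet import Fernet

    owner_key = Fernet.generate_key()   # held only by the platform owner
    pcr_values = b"PCR0=...;PCR1=..."   # the attested configuration

    sealed = Fernet(owner_key).encrypt(pcr_values)  # given to the verifier
    # ... later, before a judge, the owner reveals the plaintext:
    assert Fernet(owner_key).decrypt(sealed) == pcr_values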


I do not see how this fits together. The owner of what?


>
> > > Thus, it is completely illusory to expect that a balanced policy
> > > can be defined in the terms of a computing machine, and that it is the
> > > result of _prior_ negotiation.  Life is much more complicated than
> > > that.
> >
> > It is. But in my opinion computing machines can do better than today.
>
> But TPM is going to be deployed (or at least attempted) on today's machines.
IMO platforms with TPM can do a little bit better than TPM-less platforms.

In that they can seal unbreakable contracts that are otherwise found
nowhere in the known universe :)
>
> > > Thus, "Trusted computing" and the assumptions underlying its
> > > security model are a large-scale assault on our social fabric if
> > > deployed in a socially significant scope.
> >
> > An illogical conclusion, since TC does not aim to solve the problems
> > discussed above.
>
> It was developed at least partly to enforce otherwise unenforcable
> provisions.
Maybe. But maybe this property is only a technical consequence of more
secure systems?

I liked the approach of uninspectable memory as well. However, through
following the discussions here I came to the conclusion that TPM and
uninspectable memory are not a requirement for security. They just shift
which party you have to trust for your system to be secure. It looks
like there are some cases in which this is useful, but it should not be
needed for a general-purpose system.

>
> > > > > It is also incompatible with the free software principles.
> >
> > I am sure TC is not. But some implementations based on it may be.
> >
> > > > What exactly is in your opinion incompatible with the free software
> > > > principles?
> > >
> > > From the current GPLv3 draft:
> > >
> > > "Some computers are designed to deny users access to install or run
> > > modified versions of the software inside them. This is fundamentally
> > > incompatible with the purpose of the GPL, which is to protect users'
> > > freedom to change the software. Therefore, the GPL ensures that the
> > > software it covers will not be restricted in this way."
> >
> > No problem, my "privacy agent" does not prevent users from installing
> > or running modified versions of software. And the underlying TCB does
> > not either.
> >
> > But what about a Linux user that is not allowed to install software? This
> > can be configured by root. Is Linux incompatible with GPL-v3? Or only
> > certain configurations?
> >
> > The problem is that you can implement everything on a platform without TC
> > that the GPL-v3 wants to prevent. Most often, it is only a configuration
> > option. But does the GPL-v3 restrict users regarding the allowed
> > configurations?
>
> It allows at least one user to install software: the
> owner/administrator.
And SELinux? Or what if you disable the root account? Does this make
your system incompatible with GPL-v3?

On current PCs you can always install stuff if you physically control
the PC. And no Linux configuration prevents that.


>
> TiVo does not allow installing software at all. It runs Linux; you get
> the sources and can modify them to your liking, but unless you install
> software signed by the manufacturer it won't run.
I don't know TiVo. Does it use a TPM? Would it help to forbid TPMs?
Is it allowed to put GPLv3 software on a ROM?

TiVo probably does not need a TPM.

I'm no expert on GPLv3. I guess that these days ROMs are rarely used
because flash memories allow much easier upgrades.


>
> > > The views of the FSF on DRM and TC are well-published, and easily
> > > available.  For example, search for "TiVo-ization".
> > >
> > > What is incompatible with the free software principles is exactly
> > > this: I am only free to run the software that I want to run, with my
> > > modifications, if the hardware obeys my command.  If the hardware puts
> > > somebody else's security policy over mine, I lose my freedom to run
> > > modified versions of the software.  This is why any attempt to enforce
> > > the security policy of the author or distributor of a free software
> > > work is in direct conflict with the rights given by the free software
> > > license to the recipient of the work.
> >
> > What does the view say about a user that freely accepts a policy? I
> > _never_ talked about a system that "puts somebody else's security policy
> > over mine".
>
> I guess the discussion went way off.
>
> The original question was what useful uses do you (or anybody) see for TPM.
I have more, but so far I am not convinced that the "privacy" one is
useless.

>
> Not how bad it is. We all know it has flaws, and that they will
> probably be abused if it is allowed to spread.
Again: there are exceptions, but in general, one should prefer to forbid
misuse of a technology instead of forbidding the technology altogether.

But coming back to the original question seems to be a good idea.


OK, you said you can imagine that TPM can improve privacy, although I
cannot find it in the long nested quotations anymore. But I still cannot
imagine a system that would do that without losing many other aspects
that I consider useful, such as the possibility to use the private
information at all, and the ability to make backups of the system.

Thanks

Michal
