l4-hurd

Re: DRM vs. Privacy


From: Jonathan S. Shapiro
Subject: Re: DRM vs. Privacy
Date: Tue, 08 Nov 2005 09:41:13 -0500

On Tue, 2005-11-08 at 10:05 +0100, Bas Wijnen wrote:
> There are some steps in your reasoning which seem (to me) to be a bit larger
> than pure logic.

Pointing those out would be very useful. My note was intended as a first
attempt to capture the flow of a design issue, not as a final statement.

> On Mon, Nov 07, 2005 at 02:06:34PM -0500, Jonathan S. Shapiro wrote:
> > 1. If we remove the authenticate operation, then we lose the ability to
> > create mutually trusting federations of Hurd systems. A trusted
> > distributed system is only possible if each node can verify that the
> > other nodes support the expected behavior contract.
> > 
> > It is a feasible solution for the Hurd project to declare that it will
> > not support highly trustworthy federation of this kind, but there are
> > many *legitimate* uses of this mechanism. Consider, for example, a bank
> > server authenticating an ATM machine (which is simply DRM applied to
> > money, when you think about it). Or *you* authenticating your home
> > machine when you log in (which is DRM applied to *your* content).
> 
> If there is trust between people, then no digitally guaranteed trust is
> needed.  That is: if you want to build a cluster of machines with mutual
> trust, then you can do that by moving the trust from the software domain to
> the social domain. This will not be a problem for remote authentication of
> your own computer (in fact, I do it on a daily basis with ssh public key
> encryption).

I agree in substance, but I would add a minor caveat: you are exchanging
trust in *hardware* (the TPM/TCPA chip) for a combination of social
trust (you are trusting the remote administrator) and physical security
(you are trusting that only the remote administrator has potentially
threatening access).

>   In the case of ATM machines it may be a bit trickier, because
> you don't trust the administrator on the other machine.

Or the physical environment that it lives in.

To be honest, I don't see Hurd running on ATM machines any time soon,
but I *can* imagine Hurd being used in distributed gaming scenarios. One
of the endemic problems in distributed gaming is that the players cheat.
I don't care if the player hacks the game -- that is not what I mean by
cheating. But in my view, when a player signs on to a collaborative
gaming system, part of what they are saying is "I promise to play by the
rules of the game." Given how players actually behave, it is not
unreasonable to check that this promise is being upheld.

> It is probably acceptable to demand this hardware authentication.  I think it
> is also acceptable if the Hurd doesn't support such systems.

It is always legitimate to ask a question. Nobody can be required to
answer.

But a third position is also possible, because the authentication
structure is layered. It is technically feasible to do the following:

  1. At the bottom level, have the hardware.
  2. Be willing to let the hardware attest that yes, we are
     running Hurd.
  3. Now that we know that we are running a trusted OS,
     further attestation of applications is provided by trusted OS
     software.
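The three layers above can be pictured as a chain of signed measurements: the hardware signs a measurement of the OS, and the now-trusted OS signs measurements of applications. The sketch below is only an illustrative model, not the actual TPM/TCPA protocol; HMAC with made-up pre-shared keys stands in for the asymmetric signatures real attestation hardware would use, and all names are hypothetical.

```python
import hashlib
import hmac

# Hypothetical keys. A real TPM holds an asymmetric key whose public
# half is certified by the hardware vendor; HMAC is only a stand-in.
HW_KEY = b"hardware-root-key"   # burned into the attestation chip (assumption)
OS_KEY = b"trusted-os-key"      # held by the attested OS (assumption)

def measure(blob: bytes) -> bytes:
    """Hash of a piece of software, standing in for a PCR measurement."""
    return hashlib.sha256(blob).digest()

def attest(key: bytes, measurement: bytes) -> bytes:
    """Sign a measurement (HMAC as a stand-in for a signature)."""
    return hmac.new(key, measurement, hashlib.sha256).digest()

# Step 2: the hardware attests that this OS image is what is running.
os_image = b"GNU Hurd kernel + servers"
hw_quote = attest(HW_KEY, measure(os_image))

# Step 3: the now-trusted OS attests individual applications.
app_image = b"some application binary"
os_quote = attest(OS_KEY, measure(app_image))

def verify_chain(os_img, hw_q, app_img, os_q) -> bool:
    """A remote party checks both links of the chain."""
    hw_ok = hmac.compare_digest(hw_q, attest(HW_KEY, measure(os_img)))
    app_ok = hmac.compare_digest(os_q, attest(OS_KEY, measure(app_img)))
    return hw_ok and app_ok

print(verify_chain(os_image, hw_quote, app_image, os_quote))        # True
print(verify_chain(b"modified OS", hw_quote, app_image, os_quote))  # False
```

The point of the layering is that the hardware only ever has to vouch for one thing (the OS); everything above that is vouched for in software.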

One option, at that point, is to do *offline* attestation. Design the OS
so that it will only attest software that is signed by (e.g.) FSF. FSF
can now decide only to attest software that is compliant with some
license.

I think we will rapidly conclude that we want a system where arbitrary
organizations can sign off on a particular piece of binary code, with
the intended meaning that they endorse that code in some way. The
particular nature of the endorsement becomes a contract between the
endorsing party and the party testing the cryptographic endorsement. The
*semantics* of the endorsement is outside the scope of the technical
mechanism.
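One way to picture this separation of mechanism and semantics: each endorsing organization signs the hash of a binary, and the verifier keeps its own table mapping endorser keys to whatever contract it believes that endorsement implies. A hedged sketch under the same assumptions as before: HMAC stands in for real signatures, and the endorser names and contract texts are invented for illustration.

```python
import hashlib
import hmac

# Hypothetical endorsers and their (pretend) signing keys.
ENDORSER_KEYS = {
    "FSF": b"fsf-signing-key",
    "SomeGameVendor": b"vendor-signing-key",
}

# The *meaning* of each endorsement lives outside the mechanism: it is
# a contract between endorser and verifier, recorded here only as a
# human-readable note on the verifier's side.
ENDORSEMENT_CONTRACTS = {
    "FSF": "binary is under a license acceptable to FSF",
    "SomeGameVendor": "binary enforces the rules of the game",
}

def endorse(endorser: str, binary: bytes) -> bytes:
    """The endorser signs the hash of the binary (HMAC as stand-in)."""
    digest = hashlib.sha256(binary).digest()
    return hmac.new(ENDORSER_KEYS[endorser], digest, hashlib.sha256).digest()

def check_endorsement(endorser: str, binary: bytes, sig: bytes):
    """Verify the signature; return the contract it implies, or None."""
    digest = hashlib.sha256(binary).digest()
    expected = hmac.new(ENDORSER_KEYS[endorser], digest,
                        hashlib.sha256).digest()
    if hmac.compare_digest(sig, expected):
        return ENDORSEMENT_CONTRACTS[endorser]
    return None

binary = b"some game client binary"
sig = endorse("SomeGameVendor", binary)
print(check_endorsement("SomeGameVendor", binary, sig))
print(check_endorsement("SomeGameVendor", b"patched binary", sig))  # None
```

Note that the cryptography only establishes *who* endorsed *which bits*; what the endorsement is worth is entirely a matter of the contract table, which each verifier chooses for itself.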

> > 2. If we disclose the master disk encryption key, then we similarly
> > cannot build highly trusted federations, and we expose our users to
> > various forms of search -- some legal, others not. I am not sure that I
> > want to build a system in which an employer can examine the disk of an
> > employee without restriction.
> 
> In any system I envision, no such system exists.  If the employer
> wants to be able to search without restriction, he can arrange for that
> to be possible.  The problem here is that he doesn't need to install the
> system unmodified.  In fact, he doesn't need to install the system at
> all; he may choose a different system if spying is so important to him.

True, but the user can determine whether an authentic Hurd is being run.
The employer is free to set up a spyware-compatible environment. The
user is then free not to use it (with the possible consequence of
termination, but it is still a choice).

> > The basic scenarios of "secrecy by protection" all boil down to
> > variations on one idea: a program holds something in its memory that it
> > does not wish to disclose.
> 
> IMO this should only work as far as the user in control wishes it to work.
> The user in control is the one who created the constructor.

Then the user in control is me, personally.

> Right.  However, being able to prevent is not the same as being required
> to prevent.  That is: if the user (with sufficient authority) wants to
> debug a program, it should be possible.  Any program must be debuggable.

This sounds like dogma. *Why* should any program be debuggable? Remember
that there is no notion of UID, and the program has no way to know who
is debugging it.

> > The problem is that all software in a system is subject to the same
> > rules, and the software that implements DRM can hide its secrets too.
> 
> It cannot do that against the combination of administrator and user, which can
> debug any program they collectively start.

I believe that you persist in a deep misunderstanding of which
separations of ownership and control are technically feasible and that
you are reasoning from flawed premises. It could be, of course, that the
error is mine. I suggest that we need to expand on this part of the
discussion and find out.



shap




