Re: DRM vs. Privacy

From: Bas Wijnen
Subject: Re: DRM vs. Privacy
Date: Tue, 8 Nov 2005 10:05:37 +0100
User-agent: Mutt/1.5.11

There are some steps in your reasoning which seem (to me) to be larger leaps
than pure logic.  If I find the time, I'll point out exactly where; for now,
just some quick comments.

On Mon, Nov 07, 2005 at 02:06:34PM -0500, Jonathan S. Shapiro wrote:
> 1. If we remove the authenticate operation, then we lose the ability to
> create mutually trusting federations of Hurd systems. A trusted
> distributed system is only possible if each node can verify that the
> other nodes support the expected behavior contract.
> It is a feasible solution for the Hurd project to declare that it will
> not support highly trustworthy federation of this kind, but there are
> many *legitimate* uses of this mechanism. Consider, for example, a bank
> server authenticating an ATM machine (which is simply DRM applied to
> money, when you think about it). Or *you* authenticating your home
> machine when you log in (which is DRM applied to *your* content).

If there is trust between people, then no digitally guaranteed trust is
needed.  That is: if you want to build a cluster of machines with mutual
trust, you can do so by moving the trust from the software domain to the
social domain.  This is not a problem for remote authentication of your own
computer (in fact, I do it on a daily basis with ssh public key
authentication).  In the case of ATMs it may be a bit trickier, because you
don't trust the administrator of the other machine.
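To make the ssh case concrete: the sketch below (a toy, with hypothetical names; real ssh is more involved) shows how trusting your own machine rests on a socially established fact, namely a key fingerprint you recorded yourself when you set the machine up, rather than on anything the remote hardware attests to.

```python
import hashlib

def fingerprint(public_key: bytes) -> str:
    """SHA-256 fingerprint of a raw public key blob."""
    return hashlib.sha256(public_key).hexdigest()

# Recorded once, out of band, when you set the machine up yourself.
# This is the "social domain" step: *you* decided to trust this key.
known_hosts = {"home": fingerprint(b"ssh-ed25519 AAAA...home-key")}

def authenticate(host: str, presented_key: bytes) -> bool:
    """Accept the connection only if the presented key matches the pin."""
    return known_hosts.get(host) == fingerprint(presented_key)
```

No step here asks the remote machine to prove what software it runs; the trust decision was made by the human beforehand.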

It is probably acceptable to demand this hardware authentication.  I think it
is also acceptable if the Hurd doesn't support such systems.

> 2. If we disclose the master disk encryption key, then we similarly
> cannot build highly trusted federations, and we expose our users to
> various forms of search -- some legal, others not. I am not sure that I
> want to build a system in which an employer can examine the disk of an
> employee without restriction.

No system I envision can prevent that.  If the employer wants to be able to
search without restriction, he can arrange for it to be possible.  The
problem is that he doesn't need to install our system unmodified.  In fact,
he doesn't need to install our system at all: he may choose a different one
if spying is that important to him.

So I think this is a battle which cannot be won.  Therefore it is better not
even to try, and instead to build a system which does what the owner of the
machine wants (which in this case is the employer).

> The basic scenarios of "secrecy by protection" all boil down to
> variations on one idea: a program holds something in its memory that it
> does not wish to disclose.

IMO this should only work as far as the user in control wishes it to work.
The user in control is the one who created the constructor.

> It seems obvious that we *want* programs to hold such secrets in some
> cases. ssh, for example, must be able to hold a cryptographic key
> without fear that the key will be exposed.

Right.  And the only reason the program works as the programmer intended is
that the *user* wants it to.  That is, if the user (who has access to the
private keys which he gives to ssh) wants to spy on ssh, he should be able
to.  However, if some other user or program wants to do that, it should of
course not be possible.

> I think it is pretty clear that in a multiuser system we *must* be able
> to prevent debugging. We do not, for example, want ordinary users to be
> able to debug "sudo" (or its equivalent). In any system where privacy is
> desired, we do not want the system administrator to be able to examine
> arbitrary programs either.

Right.  However, being *able* to prevent debugging is not the same as being
*required* to prevent it.  That is: if the user (with sufficient authority)
wants to debug a program, it should be possible.  Any program must be
debuggable.  I think this pretty much breaks DRM without breaking privacy:
a user who wants to use a work in a way the copyright holder does not
approve of can debug the program that is used for the approved operations.
The copyright holder does not have an account, and in particular has no
ability to restrict the distribution of capabilities which allow debugging
of his trusted software.

> The problem is that all software in a system is subject to the same
> rules, and the software that implements DRM can hide its secrets too.

It cannot do that against the combination of administrator and user, who can
debug any program they collectively start.
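The joint-authority idea can be sketched as a simple rule (a toy model with hypothetical names, not any actual Hurd interface): debugging a process requires a capability from every party that contributed to starting it, so the admin and user together can always open it up, while a DRM vendor who contributed nothing holds no veto.

```python
def can_debug(starting_parties: set, requester_caps: set) -> bool:
    """Debugging succeeds only when the requesters jointly hold a
    capability from every party that collectively started the process."""
    return starting_parties <= requester_caps

# A process started jointly by the administrator and the user.
parties = {"admin", "user"}
```

Note what is absent: "content-provider" never appears in `parties`, so nothing the provider holds (or withholds) affects the outcome.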

> Since we must permit the system administrator to install non-disclosing
> programs, I do not see a way to prevent the administrator from
> installing a non-disclosing program whose purpose is to implement DRM.

A constructor can be built which specifically permits debugging.  A space bank
can be built which allows this.  If I start a process with my specially
crafted meta-constructor and my prime space bank, it can verify that it is
completely authentic according to its trusted input.  However, that input was
trusted by the user, not by the content provider.  And so it does what the
user wants it to do, not what the DRM authority wants.
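Constructor and space bank are capability-system concepts (from EROS and its relatives); the sketch below is a toy model with hypothetical names, not the real interface. It shows the point above: a process built from a user-supplied transparent space bank stays debuggable by its creator, while the debugging decision never involves the content provider.

```python
class SpaceBank:
    """Toy space bank: the storage a process runs in.  A 'transparent'
    bank lets whoever supplied it read the pages back later."""
    def __init__(self, transparent: bool):
        self.transparent = transparent
        self.pages = {}

class Process:
    """Toy process whose memory lives entirely in its space bank."""
    def __init__(self, bank: SpaceBank):
        self.bank = bank

    def store_secret(self, addr, value):
        self.bank.pages[addr] = value

def debug_read(process: Process, addr, requester_is_creator: bool):
    """The creator may inspect memory iff the bank he supplied is
    transparent; everyone else is always refused."""
    if requester_is_creator and process.bank.transparent:
        return process.bank.pages.get(addr)
    raise PermissionError("memory is opaque to this requester")
```

The program running inside can still verify that its constructor and bank behave according to their contracts; it just cannot verify that the contract serves the DRM authority rather than the user, because the user chose the contract.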

In short, the only way to enforce DRM is to limit the work to a trusted
system.  That is: if I want to restrict rights on my work, I should make sure
that it cannot be transferred over the machine boundary.  Of course that means
that it shouldn't be displayed on screen either, which makes it pretty
useless. :-)

But rereading your quoted text above, it seems that you mean that the
administrator helps enforce the DRM.  If that is the case, it will work.
The administrator can obviously prevent anyone from doing anything.

> The best I think we can do is alert the system administrator that such
> programs do not always operate with the user's interests in mind.

I don't think this will help at all.  Even if the administrator fully
supports our view, his response will most likely be that the program is
needed anyway, because the users want access to this data.  He may, however,
agree that it should be debuggable. :-)


I encourage people to send encrypted e-mail (see http://www.gnupg.org).
If you have problems reading my e-mail, use a better reader.
Please send the central message of e-mails as plain text
   in the message body, not as HTML and definitely not as MS Word.
Please do not use the MS Word format for attachments either.
For more information, see
