Confinement (even with TPMs) and DRM are not mutually exclusive


From: Eric Northup
Subject: Confinement (even with TPMs) and DRM are not mutually exclusive
Date: Tue, 6 Jun 2006 11:13:55 -0400

I have spent much time recently thinking about ways that the
object/security model used in KeyKOS/EROS/CapROS/Coyotos could be
separated from DRM-enabling mechanisms.

I have been very concerned to see the discussions leaning towards
abandoning the security benefits associated with the design patterns
from KeyKOS and its descendants.  On the other hand, I understand why
people feel that they can't support a system which enables DRM that
limits user freedom.

I think there may be a design which supports both goals.

It seems to me that DRM applications have two requirements:

1) Private storage for crypto keys and the cleartext of the protected
data.

2) Private communication channels to trusted output devices, so
that the protected data isn't captured.

Several desirable scenarios have been identified which require #1:
storing the users' crypto keys, client programs providing server
programs with storage, etc.

#2 seems rarer to me among desirable programs, and might be an
appropriate place to put restrictions.

There are situations where programs want to know that they have a
*mostly* private communication channel to an output device.  For
example, a spreadsheet which stores patient information in a medical
practice must be careful that random applications don't take
screenshots or steal their clipboard contents.  The same goes for
password-entry dialog boxes and the like.  But these applications do
not want to prohibit the *user* (i.e., the shell) from taking screen
dumps.  They want to protect their data from other applications
(including, perhaps, the application which initiated their execution)
rather than from the user.

So the Hurd could implement the Constructor, and even some TC-style
trusted-path mechanism which would allow applications to validate
capabilities.  But it should allow some policy decisions about which
applications can validate which capabilities.
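
To make this concrete, here is a minimal sketch, in C, of what such an
authentication primitive might look like.  Every name in it
(trusted_path_authenticate, cap_t, the identity list, the policy
table) is hypothetical; the point is only that the policy table
consulted by the trusted-path mechanism belongs to the user's session,
not to the system.

    /* Identities a program might try to authenticate.  Which of these
       a given program may verify is a per-session policy decision,
       not a fixed property of the system. */
    typedef enum {
        AUTH_SPACE_BANK,        /* a trusted space bank             */
        AUTH_RAW_SOUND_DRIVER,  /* the sound driver itself          */
        AUTH_SESSION_SOUND,     /* the user's session to the driver */
        AUTH_HUMAN_INPUT,       /* a human input device             */
        AUTH_IDENTITY_MAX
    } auth_identity_t;

    typedef struct { unsigned long brand; } cap_t;  /* toy capability */

    /* The session's policy: may an ordinary application even *ask*
       whether a capability is the identity in question?  Here the raw
       sound driver is off limits. */
    static const int may_authenticate[AUTH_IDENTITY_MAX] = {
        [AUTH_SPACE_BANK]       = 1,
        [AUTH_RAW_SOUND_DRIVER] = 0,
        [AUTH_SESSION_SOUND]    = 1,
        [AUTH_HUMAN_INPUT]      = 1,
    };

    /* Returns 1 if `cap` is verified to be `claimed`, 0 if it is not,
       and -1 if session policy forbids asking in the first place. */
    int trusted_path_authenticate(const cap_t *cap, auth_identity_t claimed)
    {
        if (!may_authenticate[claimed])
            return -1;
        return cap->brand == (unsigned long)claimed;  /* stand-in check */
    }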

For example, if any application could validate that it is
communicating directly with a trusted Space Bank and a trusted sound
driver, then that application can implement DRM for audio.  But if we
only allow it to validate that it is communicating directly with a
trusted Space Bank and directly with *the user's session to a sound
driver*, then it cannot implement DRM for audio, since the user can
direct her session to dump the raw audio stream.  The same guarantee
applies to video, but with the graphics driver / window system.  We
can safely provide authenticated direct access to input drivers (for
entering passwords/keys/etc).
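
Continuing the hypothetical sketch above, the audio case plays out
like this:

    int main(void)
    {
        cap_t session_sound = { AUTH_SESSION_SOUND };

        /* A DRM-minded player first asks for a private channel all
           the way to the hardware.  Session policy answers -1: it may
           not even ask, so audio DRM cannot be built on this
           primitive. */
        int raw = trusted_path_authenticate(&session_sound,
                                            AUTH_RAW_SOUND_DRIVER);

        /* Authenticating the user's *session* to the sound driver is
           allowed: the player gets privacy from other applications,
           while the user stays free to tell her session to dump the
           raw audio stream. */
        int sess = trusted_path_authenticate(&session_sound,
                                             AUTH_SESSION_SOUND);

        return (raw == -1 && sess == 1) ? 0 : 1;
    }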


Tentative design sketch follows.


Capabilities that can be Authenticated:

Space Bank.

  (Described in earlier threads already)

Human Input Device.

  Sends messages corresponding to inputs from keyboard/mouse/etc.  It
  might allow "filters", so that some inputs from the human are not
  transmitted (for example, <ALT>-<TAB> to switch windows, and various
  other security-relevant hotkeys), but it does *not* transmit
  synthetic messages (which did not originate from human input).  A
  message-format sketch follows this list.

Output Device (window system session, audio output, printer, etc...)

  I'm not sure exactly what guarantees we want to make here, but
  probably they would include:

      -If output is monitored/logged, it is done with the explicit
       approval of the user's shell.

      -Some devices may offer limited guarantees of exclusivity.  For
       example, that while printing a contract, no other program can
       insert the word "not".  Or that other programs cannot change
       the display of a window (but rather, they must display their
       content in separate windows).
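
As one possible concrete reading of the Human Input Device entry
above (all names are again hypothetical):

    /* Event message from the Human Input Device.  There is
       deliberately no way to mark an event "synthetic": every message
       delivered on this capability originated from hardware. */
    typedef enum { EV_KEY, EV_POINTER } input_kind_t;

    typedef struct {
        input_kind_t kind;
        int          code;     /* key code or button number */
        int          pressed;  /* 1 = down, 0 = up          */
    } input_event_t;

    /* Filter installed by the user's session: events for which it
       returns 0 (e.g. <ALT>-<TAB> and other security-relevant
       hotkeys) are consumed by the session and never reach the
       application. */
    typedef int (*input_filter_t)(const input_event_t *ev);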

-Eric



