Re: "Towards a New Strategy of OS Design" revisited

From: Jonathan S. Shapiro
Subject: Re: "Towards a New Strategy of OS Design" revisited
Date: Wed, 26 Oct 2005 23:08:32 -0400


Thank you, this is very useful. I was looking for something that an *end
user* might care about, but these are worth talking about too.

Unfortunately, I started typing comments before I finished reading your
note. I think that we are in basic agreement, but I decided to keep the
comments because (a) in some cases I have suggestions for how the goals
might be framed more precisely, and (b) in some cases you were
excessively generous and gentle. :-)

My personal reactions:

> |  The GNU Hurd, by contrast, is designed to make the area of system   |
> |  code as limited as possible.

This was considered "motherhood and apple pie" by the late 1960s. I can
show you quotes from the 1968 NATO Conference on Software Engineering
that advocate this (I used some as chapter quotes in "A C++ Toolkit").
This requirement appeared in the TCSEC standard in 1985, long before
Hurd was an idea. This is a good goal, but it is not in any way unique
to the Hurd.
> The rest of the system is replaceable dynamically.

This is a really good goal. The reason it is a good goal is that it
allows developers to build clever new extensions that will have visible
impact on usability for end users, but are *safe*.

I propose that we should revise this goal slightly:

   The system allows the greatest degree of replaceability and
   extensibility that can be achieved consistent with the
   preservation and protection of user control.

I do not mean this in the trivial sense that all extension should
simply be denied. I am trying to say that we really should make the
system as extensible and open as possible, and that we should stretch
the boundaries of extensibility, but that keeping the user in control
should be viewed as a more important goal than arbitrary extensibility.

> +----------------------------------------------------------------------+
> |  [Users] can easily add components                                   |
> |  themselves for other users to take advantage of. No mutual trust    |
> |  need exist in advance for users to use each other's services, nor   |
> |  does the system become vulnerable by trusting the services of       |
> |  arbitrary users.                                                    |
> +----------------------------------------------------------------------+

As written, this goal is *in principle* unachievable. From Marcus's
later comments, it sounds like it was revised into something more
achievable.
Aside: The term "trust" is a horrible term. It is meaningless and
misleading, and it should be universally replaced by either "relies on"
or "depends on" (according to context).

Back to Thomas's goal:

First, there are no users in computational systems. When was the last
time you opened your laptop case and found a user hiding in there? All
dependency relationships exist between programs. So the idea needs to be
re-stated in more precise terms.

Second, when one program A relies on any second program B, then A
depends on the correct execution of B in order to achieve the correct
execution of A. The idea that A can rely on B without depending on
(trusting) B is absurd.

For purposes of system architecture, I think that the questions are:

  + Can reliance be factored effectively? Is it possible for A
    to rely on one aspect of B without relying on another aspect?
    Is there any fundamental classification of orthogonal dependencies?

  + Does the underlying system support what Norm Hardy has called
    "suspicious collaboration"? That is: collaboration where A is
    prepared to recover when B defects?

  + Is defection detectable? This is a precondition to recovery.

If these questions are reasonable, then each should be re-stated as a
goal for Hurd.
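These questions can be made concrete with a toy model. The sketch below
(all names are invented for illustration; this is not a Hurd or EROS
interface) shows "suspicious collaboration": A invokes B at arm's
length, treats a raised exception, a timeout, or a contract violation
as detectable defection, and recovers with a fallback rather than
failing along with B.

```python
import concurrent.futures

def suspicious_call(service, payload, timeout_s=1.0, fallback=None):
    """Invoke an untrusted service at arm's length.

    Defection is detected in two crude ways: the call raises or times
    out, or its result fails the contract check. Either way the caller
    recovers by falling back rather than crashing.
    """
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(service, payload)
        try:
            result = future.result(timeout=timeout_s)
        except Exception:            # B raised or timed out: defection
            return fallback
    if not isinstance(result, int):  # contract violation: defection
        return fallback
    return result

def honest_service(x):
    return x * 2

def defecting_service(x):
    raise RuntimeError("I defect")
```

The point is not the mechanism (a thread pool is a stand-in for real
isolation) but the shape of the relationship: A never extends reliance
beyond what it can verify or survive.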

> These are the original two core objectives.  System code should be as
> limited as possible; users should be able to replace it dynamically
> with their own implementation.

This objective seems to be distinct from the two objectives that you
listed. What I read in the two objectives is: "the TCB (system code)
should be minimized", and "users should be able to extend the system
with new code". I do not see anything in these statements that says
"users should be able to replace the TCB".

And I want to suggest that this idea of user replacement of the TCB is
simply silly. If the TCB is successfully minimized, then all that
remains is the fundamental system communication mechanism and the
fundamental system protection mechanism. If the user can replace either
of these, then there simply *isn't* a TCB in the first place.

I think that the goal you are really trying to capture might be stated
better as:

  The policies imposed by the system should be the minimally
  sufficient policies to ensure robust, recoverable, and bootstrappably
  secure computing. All other policies should be defaults that are
  user-replaceable.

This is a good goal. Better still, it can actually be achieved! Even
better: it implicitly defines a metric for what things should be
replaceable and what should not.
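As a hedged sketch of the pattern this metric implies (the names here
are hypothetical, not a proposed Hurd interface): the system hard-codes
only the policy that preserves user control, while everything else is a
default the user may swap out.

```python
# Immutable minimal policy: the access check that preserves user
# control. This is the part the metric says must NOT be replaceable.
def system_access_check(owner, requester):
    return owner == requester

# Everything else is a default, supplied by the system but
# replaceable by the user.
DEFAULT_POLICIES = {"scheduler": lambda tasks: sorted(tasks)}

class System:
    def __init__(self, user_policies=None):
        # User-supplied policies override the defaults; the access
        # check above is deliberately not in this dictionary.
        self.policies = {**DEFAULT_POLICIES, **(user_policies or {})}

    def schedule(self, tasks):
        return self.policies["scheduler"](tasks)
```

The replaceability boundary falls out mechanically: anything in the
defaults dictionary can be replaced; anything outside it cannot.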

> * Allow users to run arbitrary code without prior trust.
>   In terms of mechanism: Add confinement and constructors.  If done
>   properly, this goal _fully_ realizes the suggestion:
>   "No mutual trust need exist in advance for users to use each other's
>   services."

Confinement and constructors (and various other foundational factoring
tools) do not achieve this goal. What they *do* achieve is something
much closer to reality. They allow program A to execute untrusted
program B while being able to understand and control some important ways
in which B might violate the contract that is implicit in their
collaboration.
I do not know if this satisfies what Thomas meant. It seems to me that
there remain some kinds of trust that are unavoidable. If A relies on B
to produce a correct answer, then confinement isn't enough: you need
total correctness.
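A toy model of what confinement buys A (not the actual EROS mechanism;
the names are invented for illustration): the untrusted function
receives only the capabilities A explicitly grants, so A can enumerate
every channel through which B's effects become externally visible.
Confinement bounds B's *leakage*, but, as noted above, it cannot make
B's answers *correct*.

```python
class Channel:
    """The one output channel A authorizes. In a capability system
    this discipline is enforced by the kernel; Python can only model
    it by convention."""
    def __init__(self):
        self.log = []

    def emit(self, msg):
        self.log.append(msg)

def run_confined(untrusted_fn, data):
    # B receives the data plus exactly one capability (emit) and
    # nothing else; every externally visible effect of B is therefore
    # recorded in chan.log, where A can inspect it.
    chan = Channel()
    untrusted_fn(data, chan.emit)
    return chan.log
```

Note that a confined B can still emit a *wrong* answer through the
authorized channel, which is exactly the residual trust discussed
above.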

In practice, I think that confinement is good enough, because software
does not engage in arbitrary and unmotivated dependency in the real
world. In practice, there appear to be two kinds of dependency
relationship:

  + Program A depends on program B, but A has been tested against B,
    and there is actually a well-founded basis for reliance. The
    challenge here is to make sure that B cannot be compromised
    after the fact.

  + Program A causes program B to execute, but does NOT rely on the
    correct behavior of B and is prepared to recover. The relationship
    is "arm's length" and "suspicious". Example: a browser executing
    a plugin.

If we can achieve a system in which these two dependency relationships
can be successfully and naturally managed, that would be really
wonderful! I believe that it is feasible.
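The first kind of relationship, where reliance is well founded only
while B remains the B that was tested, might be sketched like this (a
crude illustration with invented names, not a proposed mechanism): A
pins the dependency to the exact code it was tested against, so a
post-hoc compromise of B is detected at load time.

```python
import hashlib

def pin(source: str) -> str:
    """Fingerprint a dependency's source text."""
    return hashlib.sha256(source.encode()).hexdigest()

# The B that A was actually tested against, and its fingerprint,
# recorded at test time.
TESTED_B = "def double(x):\n    return x * 2\n"
PINNED = pin(TESTED_B)

def load_if_untampered(source: str):
    """Refuse to load B if it differs from the tested version."""
    if pin(source) != PINNED:
        raise RuntimeError("dependency B changed since it was tested")
    ns = {}
    exec(source, ns)  # acceptable here: source already authenticated
    return ns["double"]
```

Real systems would pin a binary or a capability rather than source
text, but the structure of the guarantee is the same: reliance stays
tied to the artifact that earned it.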

