Constructor v. Trivial Confinement
Jonathan S. Shapiro
Mon, 01 May 2006 12:42:10 -0400
If we are to answer Marcus's challenge, the answer must necessarily
arise from some feature of the constructor that trivial confinement does
not provide. The place to start, then, is to understand the actual
features of the Constructor vs. the trivial confinement mechanism.
If I do not completely understand Marcus's proposal, I hope that he will
correct me, and that he will do so in a way that clearly states purpose but omits
advocacy. We should certainly discuss rationale and ethics in the
discussion of use cases. The purpose of *this* discussion is to first
understand mechanism with minimal distraction.
This message is long, and I apologize. It seemed important to be clear.
A constructor has 6 functions:
1. Instantiation: It creates new copies of some given program.
2. Identification: It identifies whether it was the constructor
that fabricated (in the past) some currently running process.
The operational importance of this operation is that it allows
us to know precisely what the initial capabilities of the process
were, and therefore to understand the possible future actions of
the process that might arise from execution starting at its first
instruction. Most importantly, we know what program image (what
binary instructions) the process initially obeys.
3. Confinement attestation: It certifies with high confidence that
its created processes are (or are not) initially confined.
4. Half-blind attestation: It can answer a modified confinement
query of the form "here is a bag of permitted holes, are
the known (to the constructor) holes a subset of these?"
This was designed to be used for system utilities, and it
is probably not of major importance. One *might* want to
use it, for example, to authorize access to some trusted
system process that generates random numbers, or other
system services that are "known to be innocent by design".
In practice, no other uses of this mechanism were seriously
contemplated, and I do not believe that the bag mechanism
introduces any power of program instantiation that does not
exist intrinsically in any system.
In practice, if the system service is truly known to be
innocent, there is no reason (other than convenience) to
pre-install it in a bag. You can just give it to the user
in their initial directory and let the user build their own bag.
Observation: any bag whose content is a subset of capabilities
that the user can get anyway is value-neutral. In this case,
the bag is merely a convenient bundling mechanism.
5. Encapsulation: it constructs programs in such a way that
they cannot be initially inspected by their instantiator. The
program may later *elect* to be inspected. Similarly, the
initial capabilities cannot be inspected while the constructor holds them.
It would be easy and harmless to add a bit that would permit
initial inspection, but I will describe important use cases
(later) that are impossible if inspection is mandatory.
I will assume that this bit exists in the rest of this
discussion, but I will not assume that it is unconditionally
set to "yes, disclose". This is one of the options whose
consequences I want to explore in use-cases.
The design issue here is based on a philosophy difference
that has security implications: I believe that disclosure
should occur only by consent. However, the argument for this
is much weaker for the initial capabilities than it is for
the subsequent state, and I have no objection to the "permissive
inspection" bit for the vast majority of programs.
Allowing inspection of initial, pre-execution state prevents
the program from holding initial secrets, but it does not prevent
the program from protecting state that is created at runtime.
Several of my use cases will rely on being able to hide
this run-time state.
6. Space bank validation: the constructor checks that the space bank provided
by the instantiator is an "authentic" space bank. An authentic
space bank is one that honors the rule that "newly allocated objects
are exclusively held" (the exclusivity property). This implies
that you can give me access to your space bank, and I can allocate
objects from it that you cannot read. You can destroy them, but you
cannot inspect the content that I place in them unless I agree.
This exists because it is a precondition for encapsulation, so I
will not discuss it separately. I mention it because (I think)
Marcus is explicitly rejecting this requirement for the storage
allocator. If I understand him, his main purpose for doing this is
to prevent encapsulation, so we will need to look very carefully at
the ethics of encapsulation when we consider use-cases.
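As a rough illustration, the first three constructor functions can be sketched in Python. All names here (`Capability`, `Constructor`, `identify`, `is_confined`) are hypothetical illustrations of the description above, not the EROS API:

```python
class Capability:
    """A toy capability: a name plus a flag saying whether it is confined
    (i.e., cannot be used to leak authority out of the yield)."""
    def __init__(self, name, confined):
        self.name = name
        self.confined = confined

class Constructor:
    def __init__(self, program_image, initial_caps):
        self.program_image = program_image      # the binary the yield obeys
        self.initial_caps = list(initial_caps)  # fixed before any instantiation
        self._yields = []                       # processes fabricated here

    def instantiate(self, space_bank):
        """Function 1 (instantiation): create a new copy of the program."""
        process = {"image": self.program_image, "bank": space_bank}
        self._yields.append(process)
        return process

    def identify(self, process):
        """Function 2 (identification): did *this* constructor fabricate
        the given process? If yes, its initial capabilities (and program
        image) are known precisely."""
        return any(p is process for p in self._yields)

    def is_confined(self):
        """Function 3 (confinement attestation): the yield is initially
        confined iff every initial capability is itself confined."""
        return all(c.confined for c in self.initial_caps)

caps = [Capability("space-bank", True), Capability("network", False)]
ctor = Constructor(b"...program image...", caps)
p = ctor.instantiate(space_bank="bank-A")
assert ctor.identify(p)        # fabricated by this constructor
assert not ctor.identify({})   # unknown process: no claim possible
assert not ctor.is_confined()  # the network capability is a hole
```

The point of the sketch is that the instantiator never sees `initial_caps`; it only receives the yes/no answers from `identify` and `is_confined`.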
The trivial confinement idea, as I understand it, includes 3 functions:
1. Instantiation: it allows new copies of programs to be made.
2. Confinement validation by inspection: the instantiator has
complete access to the capabilities that will go into the new
process, and it is therefore able to inspect them for confinement.
3. Explicit non-encapsulation: the party who "owns" storage always
has the right to inspect it.
[There are some purely technical subtleties with this that make
me think that this is not precisely the definition that achieves
what Marcus wants. The question is "what does own mean". I suggest
that it might be better to frame this property as:
The party who has authority to *destroy* storage has the
right to inspect it.
This may not be quite right either. The point is that *no one*
should be able to inspect *unallocated but authorized* storage.
Permitting such inspection must be avoided so that the system can make
opportunistic use of unallocated pages.]
Neither definition is precisely right either because there is a
hierarchical relationship among storage allocators. This detail
should be clarified, but I don't think that it is central to
understanding what the two mechanisms do or do not enable.
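By contrast, trivial confinement can be sketched as the instantiator performing the confinement check itself, by direct inspection of the capability list it supplies. This is a hypothetical sketch; `instantiate_inspected` and `permitted_holes` are illustrative names:

```python
class Capability:
    def __init__(self, name, confined):
        self.name = name
        self.confined = confined

def instantiate_inspected(program_image, caps, permitted_holes=()):
    """Trivial confinement: the instantiator supplies every initial
    capability itself, so it can inspect the full list for holes before
    the process ever runs. There is no constructor, and no encapsulation:
    the instantiator keeps full access to the capabilities it passed in."""
    holes = [c.name for c in caps if not c.confined]
    unapproved = [h for h in holes if h not in permitted_holes]
    if unapproved:
        raise ValueError("refusing to instantiate; unapproved holes: %r"
                         % unapproved)
    return {"image": program_image, "caps": caps}

caps = [Capability("space-bank", True), Capability("network", False)]
proc = instantiate_inspected(b"...", caps, permitted_holes=("network",))
assert [c.name for c in proc["caps"]] == ["space-bank", "network"]
```

Note that the confinement decision here is local to the instantiator; no third party can later verify it, which is exactly the identification gap discussed below.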
With this introduction, let me try to compare the features so that we
will understand which features are functionally equivalent and which
introduce differences in functionality. I claim (without proof) that if
the feature sets are functionally equivalent then they are ethically
equivalent (though we might debate the sign :-).
Let me begin with a purely technical opinion that really has nothing to
do with features: even if we reduce the feature set of the constructor,
and mandate that it always be open to inspection, I believe that
*having* a constructor is a good design choice. The operation of
constructing processes is machine dependent, delicate, and reasonably
high frequency. Because of this, it is good to segregate this
responsibility into a separate object that manages this activity. This
is an argument about system structure, not system function. It is
motivated entirely by concerns of robustness and testability, and it is
independent of concerns of privacy or security.
I. Instantiation and Initial Confinement
I believe that the instantiation and confinement attestation (ignoring
encapsulation, which I will discuss below) of the two mechanisms are
equivalent. Both can instantiate, and both can demonstrate confinement
to the instantiating process.
Since this is true, then any operational differences that occur must
derive from some other function of the constructor. Let us examine them
in turn, and let me offer a second technical opinion: many of these
functions are extremely useful. If they can be recovered in the Hurd
without violating Hurd objectives, it is very much worth considering doing so.
II. Identification
The trivial confinement mechanism does not provide the identification
function at all. An instantiator A can create a process P, and can later
send to a peer process B an entry capability to P. B can elect to trust
A to describe what P is, but otherwise B has no way to independently
check the statement that A makes. Identification is important to many
use cases. Some of these may be of interest to the Hurd (which I will
try to examine separately). Others are definitely not. Let me try to
quickly give examples illustrating the good and the bad.
Identification allows one process to decide that it wishes to speak to a
second process only if it actually knows the *implementation* (that is:
the behavior, as opposed to the interface) of the second process.
On the negative side, this could be used by a DRM implementation to know
that it is speaking to a particular, proprietary decoder implementation
(though this would be a stupid thing to do -- child-based confinement is
entirely sufficient to satisfy the DRM objective in this particular case).
On the positive side, this mechanism is *required* in order to implement
robust electronic money (it is fundamental to the currency exchange
operation). See "Capability-Based Financial Instruments":
I do not think that we want to go so far as to exclude electronic
commerce from "things that are of interest to the Hurd", which suggests
that we cannot realistically omit this in some form.
III. Encapsulation
The trivial confinement mechanism does not permit encapsulation, because
the instantiator can read not only the initial state, but also the
run-time state of any process that it creates.
Encapsulation has a bunch of good and bad use cases. The obvious bad one
is DRM, and more generally, keeping secrets from the instantiator
(though note that the secrets cannot be bootstrapped securely if the
initial capabilities of the process are disclosed).
Strong use-cases that *require* encapsulation include electronic wallets
and *any* form of privacy enforcement in a shared access environment. I
will expand on both of these in my use cases.
Recall that "right to destroy != right to read" (which I called the
"exclusivity property" above) is a precondition to encapsulation.
Without this property, the runtime state of a program can be inspected
by its instantiator.
Note that the spacebank authentication test relies on the Identify
operation, though in that case the identification is not implemented by a constructor.
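The exclusivity property itself can be sketched as follows. This is a hypothetical model (`ExclusiveSpaceBank` is an illustrative name); in a real system the separation between "destroy" and "read" is enforced by the kernel, not by Python visibility conventions:

```python
class ExclusiveSpaceBank:
    """Sketch of a space bank honoring the exclusivity property: the bank's
    owner can destroy allocated objects, but reading an object requires the
    handle returned at allocation time, which only the allocator holds."""
    def __init__(self):
        self._objects = {}
        self._next_handle = 0

    def allocate(self):
        """Return a fresh handle conveying exclusive read/write authority."""
        self._next_handle += 1
        self._objects[self._next_handle] = None
        return self._next_handle

    def write(self, handle, content):
        self._objects[handle] = content

    def read(self, handle):
        return self._objects[handle]

    def destroy_all(self):
        """Right to destroy != right to read: the owner reclaims the
        storage without ever obtaining its content."""
        self._objects.clear()

bank = ExclusiveSpaceBank()       # owned by you
h = bank.allocate()               # exclusive handle held by me
bank.write(h, "my secret")
assert bank.read(h) == "my secret"
bank.destroy_all()                # you may destroy the storage...
assert h not in bank._objects     # ...but the content was never disclosed
```

The sketch shows why exclusivity is the precondition for encapsulation: without it, whoever pays for the storage could read the runtime state placed in it.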
IV. Half-Blind Attestation
I am honestly not certain that this feature is important in practice.
For patent reasons, EROS never implemented it, and we never had any
reason to regret that. In all honesty, the main reason that I would
implement it now is to ensure that it cannot be patented *again* in the
future in some variation -- particularly in some way that might be used
to restrict independent implementations of DRM (if it turns out that we
need to live with DRM, it is important to have a *lot* of alternative
implementations).
Half-Blind attestation works as follows:
As each component capability of the "yield" process (the one
that the constructor builds) is added, the constructor assembles
a "bag" containing any capabilities that are *not* confined.
Later the instantiator can execute a variant form of the "is your
yield confined" query. The instantiator can supply a second
bag, and the constructor asks the bags to perform a subset test.
If the constructor bag contents are a subset of the bag that
came from the instantiator, then the instantiator is understood
to authorize all of the initial holes of the yield (the
non-confined capabilities).
I think that the key to thinking about this feature is to ask "Well, how
do those bags get created?" The one created by the constructor is pretty
clear. We need to focus on the one created by the instantiator.
In general, *anybody* can fabricate a new bag. You go to the bag
constructor and you ask it for a new bag and you start stuffing
capabilities into it (in the documentation, the bag is known as a
KeySet). In this case, all of those capabilities originated with the
instantiating process, and there really isn't any new power here.
However, bag capabilities can be transmitted. This means that an
instantiator can elect (voluntarily) to delegate some of its
authorization decisions. This is done by using a bag supplied by a
third party.
By the way: the subset operation can be used by anyone. If an unknown
party hands me a bag, I can check whether it contains anything that is
not in *my* bag. I won't know *what* it contains, but I will know that
it contains something that I do not know about.
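The subset test can be sketched like this (a hypothetical toy model of the KeySet; the real object would answer containment queries without exposing its contents to anyone):

```python
class Bag:
    """Toy KeySet: holds capabilities opaquely, answers subset queries."""
    def __init__(self, caps=()):
        self._caps = set(caps)

    def add(self, cap):
        self._caps.add(cap)

    def is_subset_of(self, other):
        """Reveals only containment -- whether everything in this bag is
        also in `other` -- not what either bag actually holds."""
        return self._caps <= other._caps

holes = Bag(["display"])                       # built by the constructor
permitted = Bag(["display", "random-source"])  # supplied by the instantiator
assert holes.is_subset_of(permitted)           # all initial holes authorized
assert not permitted.is_subset_of(holes)       # the converse query fails
```

The second assertion illustrates the point above: an unknown party's bag can be tested against mine, telling me that it contains *something* I do not hold, without telling me what.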
I do not think that this delegation, in the usual case, is a bad thing,
but perhaps I am missing a problem here for the Hurd objectives.
The *hazard* of the bag is that the system may be designed in such a way
that there is a system-wide bag of "presumptively trusted" objects, and
these might include a treacherous display. The problem is that you don't
really know *what* it includes.
And actually, sticking the treacherous display capability in a bag isn't
really the issue either (to me). I can still decide *not* to use the
treacherous display bag.
The real problem here is aggregation: if the system-wide bag is used in
such a way that it has one thing I really need, and a second thing that
I really do not want, and I cannot get the two things somehow separated,
then I have a real problem.
The aggregation issue can be dealt with by design. My opinion is that
the treacherous display capability is best handled by simply not
admitting it into the design in the first place. At that point, the bags
only support "socially reasonable" forms of trust delegation.
I think that the productive use case discussion must focus on the issues
of Identification or Encapsulation, possibly in some combination. The
bag doesn't really add any new power to the situation, and experience
suggests that we could simply disable this without any significant loss.