Re: Part 2: System Structure

From: Bas Wijnen
Subject: Re: Part 2: System Structure
Date: Thu, 25 May 2006 09:35:57 +0200
User-agent: Mutt/1.5.11+cvs20060403

First of all, in this e-mail (and probably in later e-mails as well), when I
talk about a parent, I mean the process holding the capability to the space
bank from which the space bank storing the process in question was allocated.
This is not the definition of parent I used before, and IMO it is not a very
logical one, but since we're mostly talking about space banks, it seems
useful.  Also, it is what everybody else seems to call a parent at the
moment. :-)

On Wed, May 24, 2006 at 11:55:40AM +0200, Pierre THIERRY wrote:
> > > In all your scenario, you seem to omit something: without the
> > > constructor mechanism, no process can verify anything accurately
> > > about any other process, except if all of the parents of it are to
> > > be trusted.
> > This is not quite correct.  If a process gives me a capability, I can
> > check things about it, no matter what the parents of the process are.
> But this is not what I'm talking about. I'm talking about the process
> itself checking that a capability it holds is really what it seems to
> be.

Ah sorry, I misunderstood you then.

> That's theoretically impossible to check in a trustworthy way if the
> parent of that process has read/write access to the storage of the
> process, because that parent could tamper with anything.

Yes.  A process is started for a reason, and that reason is another process
(perhaps acting as an agent of a user), which knows what it is doing.  If that
process wants to spy on the program, then it's none of the program's business
to disallow it.

In other words, if you think a process cannot trust its parent, you chose the
wrong parent.  If we do want to allow opaque user-provided storage, then the
user session is the parent of your space bank.  It is part of the TCB, and
does not act as an agent of the user (in particular, it does not allow the
user to inspect or modify the memory).

> > For opaqueness, the chain of parents space bank-wise (of the process
> > implementing the capability, not of the one providing it) must be
> > trusted.
> But how could a process check what they are? That it is indeed under a
> chain of trusted space banks?

For trusted space banks, that's simple: session manager->user session->safe
space bank.

> Remember: you have to find a way that is tamperproof from the parent.

No, I don't.  A program must not have any protection against the process
which provides its code (usually the same process as the parent).  The parent
itself can decide whether it is useful to put the program in a "real" or a
fake environment; in the latter case, it will have a reason for it.  You seem
to want protection for the programmer against installers.  This is nonsense:
the installer can change the code anyway, so there is no protection in the
end.  Such barriers only make legitimate activities (debugging, reverse
engineering) hard, without any benefit for the user.

> > Luckily this chain of parents is usually short.
> Whatever the length, it has to be checked anyway. And I'm not sure
> there's anything to back up this assumption. The chain could be
> arbitrarily long in some cases.

It could, but here is how it looks: session manager -> user session main
bank -> user session quota bank -> ... -> user session quota bank -> safe
memory.

Only two programs act as a parent in this chain: the session manager (for the
primary space bank) and the user session (for all the quota banks).  If any
other program did, the space bank could not be guaranteed to be opaque.
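This check can be sketched in a few lines: a bank's opaqueness guarantee holds
only if every bank above it in the chain is owned by the TCB.  This is purely
illustrative Python, not Hurd code; all names are made up for the sketch.

```python
# Hypothetical sketch of the parent chain described above: a space bank is
# "safe" (opaque-capable) only if every bank above it is owned by the
# session manager or the user session.

SESSION_MANAGER = "session manager"
USER_SESSION = "user session"

class SpaceBank:
    def __init__(self, owner, parent=None):
        self.owner = owner      # process holding the capability to this bank
        self.parent = parent    # the bank this bank was created from

def chain_is_trusted(bank):
    """Walk up the parent chain; every owner must be part of the TCB."""
    while bank is not None:
        if bank.owner not in (SESSION_MANAGER, USER_SESSION):
            return False
        bank = bank.parent
    return True

# session manager -> user session main bank -> quota banks -> safe memory
prime = SpaceBank(SESSION_MANAGER)
main = SpaceBank(USER_SESSION, prime)
quota = SpaceBank(USER_SESSION, main)
safe = SpaceBank(USER_SESSION, quota)
assert chain_is_trusted(safe)

# A bank created by an ordinary program breaks the guarantee.
leaky = SpaceBank("some program", quota)
assert not chain_is_trusted(leaky)
```

Because the only owners that ever appear in a well-formed chain are the two
TCB programs, the walk terminates quickly no matter how many quota banks sit
in the middle.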

> > In my model, all space banks are either used for creating sub-space
> > banks, or for actual data (and code).  The former type are all owned
> > by user sessions (and the session manager), the latter by programs.
> That's not true anymore in the case of virtualization.

In case of virtualization, there are two options:
- The server is itself virtualized as well.  That is, it is in the same
  sub-Hurd.  (You don't need a full-blown sub-Hurd for virtualization, but
  this terminology makes things more clear, I hope.)  In that case, it
  _shouldn't_ be able to see that the memory is readable.
- The server is not virtualized.  In that case, it will not get an opaqueness
  guarantee.  That's good.

> > > That is, except for a process spawned by the TCB, no capability can
> > > be trusted not to be faked or sniffed.
> > 
> > No no, this is not how it works.  It doesn't matter at all who spawned
> > the process. It only matters who owns the space bank. In my model,
> > almost all space banks are owned by the TCB.
> Are you saying you're not referring to the model described by Marcus? If
> not, could you please describe very accurately your model, so that we
> can see how you want it to work?

I did describe the model in detail.  I don't think Marcus did, so I'm not
entirely sure what his model is.  That's why I'm talking about "my" model.

To summarize, my model has the following structure:
All space banks provide read and write access to sub-space banks created from
them.  The primary space bank is owned by the TCB.  A part of that is the
session manager.  User sessions are created directly from the prime space
bank.

The rest is my personal idea, as far as I know.  That is, I haven't seen
Marcus say this explicitly.  I'm not saying he doesn't agree (nor that he
does).

Programs started by the user run on sub-space banks of the session bank.
Programs started by other programs have two choices: usually, they will run
as siblings of their requestor.  However, "trivial confinement" is always
possible (the requestor running exec() on some of its own storage).  In that
case, they run as children of their requestor.
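The sibling-versus-child distinction can be sketched as follows.  The names
are hypothetical, and "exec() on its own storage" is modeled simply as
allocating the new program's sub-bank from the requestor's own bank:

```python
# Illustrative model of where a new program's storage comes from:
# sibling (allocated from the session bank) vs. child (trivial
# confinement: allocated from the requestor's own bank).

class Bank:
    def __init__(self, name, parent=None):
        self.name, self.parent = name, parent
    def create_sub_bank(self, name):
        return Bank(name, self)

session_bank = Bank("user session bank")
requestor_bank = session_bank.create_sub_bank("requestor")

# Usual case: the new program runs as a *sibling* of its requestor,
# on storage allocated directly from the session bank.
sibling = session_bank.create_sub_bank("new program (sibling)")
assert sibling.parent is requestor_bank.parent

# Trivial confinement: the requestor exec()s the program on its own
# storage, so the new program runs as its *child*.
child = requestor_bank.create_sub_bank("new program (child)")
assert child.parent is requestor_bank
```

In the sibling case the requestor cannot touch the new program's memory at
all; in the child case it can, which is exactly what trivial confinement
means here.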

So far, this does not include support for opaqueness.  If we do indeed want
it, it can be added by modifying the user session so that it can give out
opaque memory (in the sense that it does not allow the user to touch it), and
so that it can be checked that it does this.  When I'm talking about opaque
memory, I mean this mechanism (unless I specifically say I don't).
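One way to picture this modification (illustrative only; none of these
interfaces exist in the Hurd, and all names are invented for the sketch):

```python
# Sketch of the opaqueness extension: the user session can hand out banks
# marked opaque, and then refuses the user's own read requests on them.
# A server can query is_opaque() to verify the guarantee before relying
# on it.

class UserSession:
    def __init__(self):
        self.banks = {}          # bank id -> {"data": ..., "opaque": bool}
        self.next_id = 0

    def allocate(self, opaque=False):
        bank_id = self.next_id
        self.next_id += 1
        self.banks[bank_id] = {"data": bytearray(16), "opaque": opaque}
        return bank_id

    def user_read(self, bank_id):
        """Called on behalf of the user: denied for opaque banks."""
        bank = self.banks[bank_id]
        if bank["opaque"]:
            raise PermissionError("bank is opaque to the user")
        return bytes(bank["data"])

    def is_opaque(self, bank_id):
        """Lets a server check the guarantee before relying on it."""
        return self.banks[bank_id]["opaque"]

session = UserSession()
normal = session.allocate()
opaque = session.allocate(opaque=True)
assert session.user_read(normal) == bytes(16)
assert session.is_opaque(opaque)
try:
    session.user_read(opaque)
except PermissionError:
    pass
else:
    raise AssertionError("opaque bank must not be user-readable")
```

The point of the sketch is that opaqueness is enforced by the user session
(part of the TCB), not by the program's parent, so the user's agent never
holds read access to the opaque storage in the first place.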

I much prefer this over Jonathan's model for several reasons.  I'll discuss
them in a reply to his e-mail.

> > > Am I wrong on anything here?
> > You seemed to be forgetting that without a constructor, we can still
> > have an "identify" operation.
> I don't see how your proposal enables a process to check anything
> accurately and in a tamperproof way about its environment. In your
> model, it is mandatory for a process to trust all of its parents.
> In the ping or competition case, that's not possible.

It is.  The parent of the space bank is the user session, which is not under
user control.
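The "identify" operation mentioned earlier can be sketched like this: a
server checks whether a capability it is handed was really created by itself,
regardless of who spawned the process passing it.  The interfaces below are
hypothetical, chosen only to show the shape of the check:

```python
# Illustrative "identify" operation: a server can recognize its own
# capabilities, so a capability passed in by an untrusted process can
# still be verified without trusting that process's parents.

class Capability:
    def __init__(self, server, obj):
        self.server, self.obj = server, obj

class Server:
    def create(self, obj):
        return Capability(self, obj)

    def identify(self, cap):
        """True iff this capability was created by this very server."""
        return isinstance(cap, Capability) and cap.server is self

trusted = Server()
genuine = trusted.create("file")
forged = Capability(Server(), "file")   # look-alike from another server
assert trusted.identify(genuine)
assert not trusted.identify(forged)
```

This is why it does not matter who spawned the process: the check is between
the server and the capability itself, not the process's ancestry.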

I encourage people to send encrypted e-mail (see http://www.gnupg.org).
If you have problems reading my e-mail, use a better reader.
Please send the central message of e-mails as plain text
   in the message body, not as HTML and definitely not as MS Word.
Please do not use the MS Word format for attachments either.
For more information, see
