Re: Part 2: System Structure

From: Bas Wijnen
Subject: Re: Part 2: System Structure
Date: Fri, 19 May 2006 11:23:28 +0200
User-agent: Mutt/1.5.11+cvs20060403

On Thu, May 18, 2006 at 06:46:06PM -0400, Jonathan S. Shapiro wrote:
> > I expect this costs performance (for setting up the address spaces all the
> > time).  I see this is useful in case recovery is indeed possible, but in
> > many cases I don't see the use of it.
> The experience in KeyKOS is that yes, (a) there is a cost in
> performance, but (b) it is offset by other simplifications that are made
> possible by this structure. Taken overall, the user sees no loss of
> performance.

I don't see where such performance gain would be, but since you have
experience with such things and I don't, I'll believe you.

> You have a system in the field. It is doing something strange that you
> cannot reproduce, and your customer wants it fixed. You would like to be
> able to instrument the suspected parts of the system in-situ so that you
> can see what is actually happening.
> Of course, you cannot do this effectively unless the suspected parts are
> decently isolated components.
> There are also significant advantages for live upgrade.

That all sounds useful indeed.

> > So what would be an example of a single-client server, which does not run
> > on the space bank of the same user as its client?
> In practice, they all seem to run on a *child bank* of their requestor's
> bank (for ease of destruction), but I don't think this alters your
> question.

Well, it does, because it makes possible the alternative approach I was
thinking of: making the process a sibling instead of a child (in space-bank
terms), without the "but then the wrong party is paying for the storage"
problem.

If you do indeed want quotas, you just add an extra space bank:

session bank
 \_ quota bank
     \_ program bank (no quota set)
     \_ subprogram 1 bank (no quota set)
     \_ subprogram 2 quota bank
         \_ subprogram 2 bank (no quota set)

This is a schematic of one program with two subprograms, running in a session,
where subprogram 2 has a quota of its own.

When the program wants to start a new subprogram, it has two options.  If the
subprogram needs a quota, the program first derives a quota bank for it from
the program's own quota bank, and then derives the subprogram's bank from
that.  If not, it derives the subprogram's bank directly from its own quota
bank.  The reason for the extra quota bank is that the subprogram may want to
start subprograms of its own, and those may need to be encapsulated as well.
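The bank tree and derivation rules above can be sketched in code.  This is a
hypothetical model, not the real KeyKOS/EROS space bank API: the `SpaceBank`
class, its `derive` and `allocate` methods, and the page counts are all
illustrative.  It shows the key property being claimed: allocations are
charged up the derivation chain, so a quota on any ancestor bank caps the
combined usage of everything below it.

```python
class SpaceBank:
    """Toy model of a space bank (illustrative, not a real API)."""

    def __init__(self, parent=None, quota=None):
        self.parent = parent   # bank this one was derived from
        self.quota = quota     # None means "no quota set"
        self.used = 0          # pages charged here, including descendants

    def derive(self, quota=None):
        """Derive a child bank, optionally with a quota of its own."""
        return SpaceBank(parent=self, quota=quota)

    def allocate(self, pages):
        """Charge an allocation to this bank and all of its ancestors."""
        bank = self
        while bank is not None:          # check every quota up the chain
            if bank.quota is not None and bank.used + pages > bank.quota:
                raise MemoryError("quota exceeded")
            bank = bank.parent
        bank = self
        while bank is not None:          # then commit the charge
            bank.used += pages
            bank = bank.parent

# Rebuilding the schematic: one program with two subprograms in a session,
# where subprogram 2 has a quota of its own.
session = SpaceBank()
quota_bank = session.derive(quota=1000)
program = quota_bank.derive()            # no quota set
sub1 = quota_bank.derive()               # sibling of the program, no quota
sub2_quota = quota_bank.derive(quota=200)
sub2 = sub2_quota.derive()               # no quota set

sub2.allocate(150)                       # within both quotas
program.allocate(800)
assert quota_bank.used == 950            # everything charged to the main quota
```

Adjusting the quota on `quota_bank` affects the program and both subprograms
at once, which is the effect described in the list below.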

The results of this are:
- The subprograms are encapsulated from the program starting them.  If this is
  not desired, they can be started directly from the program's space bank, of
  course.
- Each part can have its own quota.  When the quota on the main program's
  quota bank is adjusted, all subprograms are affected, just as they would be
  in the opaque-space-bank-where-subprograms-live-in-sub-space-banks system.
- The memory is still transparent (and thus debuggable) to the user.

> The problem in the file system is not one person running from another
> person's storage. The problem is the *commingling* of storage from
> multiple sources.

Of course.  The thing is that what our proposal (transparent space banks)
makes impossible is for one user to give a space bank to another user without
retaining the right to look at it himself.  Between programs this is still
possible; however, the user owning the storage can still look at it.  But if
that is the same user for both programs, there is no problem.

So in order to show that this limitation is actually a problem in practice, we
have to come up with an example where multiple users are involved.

The game competition is such an example, but I'm starting to think it's
actually reasonable for the user who organises the competition to pay for the
storage.  DoS attacks can effectively be blocked by giving every user (at
session creation time) a capability to a unique object which allows exactly
one instantiation of the game at a time.  That way, the number of games played
simultaneously is at most the number of users.  This can be extended to
allowing at most one game from a list at a time (when several competitions are
organised in parallel).  The maximum amount of the organiser's storage in use
is then the number of sessions times the size of the largest competition game.
It is a good example in the sense that I might actually want to support it.
However, I don't think it's important enough to sacrifice transparency for
(especially since there seems to be a workable solution).  If several more
similar examples turn up, that could change, of course.
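The per-session "exactly one instantiation at a time" object might look like
the following sketch.  All names here (`GameSlot`, `instantiate`, `destroy`)
are hypothetical, not part of any real system: the point is only that each
session holds one slot capability, a second simultaneous instantiation is
refused, and destroying the game frees the slot, so the organiser stores at
most one game per session.

```python
class GameSlot:
    """Capability handed to each session at creation time (illustrative)."""

    def __init__(self):
        self._busy = False

    def instantiate(self, game_name):
        """Start a game, refusing if one is already running for this session."""
        if self._busy:
            raise RuntimeError("a game is already running for this session")
        self._busy = True
        return Game(self, game_name)

    def _release(self):
        self._busy = False


class Game:
    def __init__(self, slot, name):
        self._slot = slot
        self.name = name

    def destroy(self):
        """Return the game's storage and free the session's slot."""
        self._slot._release()


slot = GameSlot()
game = slot.instantiate("tetris")
try:
    slot.instantiate("chess")        # second simultaneous game is refused
except RuntimeError:
    pass
game.destroy()
game = slot.instantiate("chess")     # fine once the first game is destroyed
```

Allowing one game from a list would just parameterise the check on the set of
competition games instead of a single one.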


I encourage people to send encrypted e-mail (see http://www.gnupg.org).
If you have problems reading my e-mail, use a better reader.
Please send the central message of e-mails as plain text
   in the message body, not as HTML and definitely not as MS Word.
Please do not use the MS Word format for attachments either.
For more information, see
