From: Marcus Brinkmann
Subject: Re: Comments on the hurd-on-l4 document
Date: Wed, 08 Jun 2005 17:32:58 +0200
User-agent: Wanderlust/2.10.1 (Watching The Wheels) SEMI/1.14.6 (Maruoka) FLIM/1.14.6 (Marutamachi) APEL/10.6 Emacs/21.4 (i386-pc-linux-gnu) MULE/5.0 (SAKAKI)

At 08 Jun 2005 16:30:36 +0200,
Niels Möller wrote:
> Marcus Brinkmann <address@hidden> writes:
> > The most important changes are related to the capability system.  I am
> > convinced by now (and I think Neal agrees) that we simply can not
> > feasibly implement a capability system without support by a central
> > authority, either the kernel via its IPC system, or a trusted
> > capability server.
> 
> It's still appealing to be able to do it with rudimentary kernel
> support and all the rest locally in each task.

Well, in my opinion there really are potentially overriding security
aspects, too.  For example, in some security models, it is an absolute
no-no to let the server know which tasks have a handle to the
capability, or even whether there are any users at all, in order to
suppress covert channels.  There are other aspects that require
consideration as well.

> But if the kernel
> doesn't provide sufficient support, I guess the L4 way is to introduce
> a central userspace server to do it (i.e. we have to figure out what
> we would really like the "kernel" to do for us, and then implement
> that "hurd kernel" as L4 + a bunch of extra servers).

In the end, it all boils down to a mix of kernel features, trusted
system services and local management.
 
> Would all ipc go via the capability server (that would have quite
> severe performance implications), or will it be used in some other
> way?

This is a big question, and obviously you want to optimize the IPC
path thoroughly, which basically means the answer must be "no, the
IPC will be immediate".  But it is also obvious that _some_ sort of
translation and/or protection must happen.  My feeling is that we can
limit the information the kernel must provide on such special
capability objects to a single word, which must be looked up from some
table; the rest can be done in the trusted system server and locally.
How exactly all this has to happen depends heavily on the details, of
course.
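
To make that a bit more concrete, here is a rough sketch of the kind
of local lookup I mean.  The table layout and all the names are made
up; the only point is that a single kernel-provided word is enough to
key a purely local table:

  #include <stddef.h>

  struct object;

  struct cap_table
  {
    struct object **slots;
    size_t nr_slots;
  };

  /* The kernel provides only a single word per special capability
     object; everything else is resolved from a local table like this
     one, or by asking the trusted system server.  */
  struct object *
  lookup_cap (struct cap_table *table, unsigned long word)
  {
    if (word >= table->nr_slots)
      return NULL;
    return table->slots[word];
  }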

> > I can not pinpoint this on a single killer argument.  There are a
> > couple of things, among them:
> 
> > * In upcoming L4 designs, global thread IDs will be _gone_, and our
> >   design will not carry over without some fundamental changes anyway.
> 
> This sounds odd. When you want to send an IPC to a thread in a
> different address space, how do you tell the kernel which thread you
> intend? What you say seems to imply some kind of per-address-space
> table that maps of user-space id:s to some kernel thread objects,
> right?

Yes.  One solution is that the map/grant/unmap model will be extended
to communication pairs: You can map communication points to other
threads by sending them in a message, just like memory mappings.  The
receive end of such a communication pair is permanently fixed at
creation time.  Then each thread which has the communication point
mapped can send a message.  And the receiver can choose from which
communication end points to receive.

The receiver is told on which receive end point the message arrived,
but it gets _no_ information about the sender.

Basically, there will be new primary objects at the kernel level, and
all resources are managed via map/grant/unmap.
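
Roughly, and with every call below invented purely for illustration
(the real interface is of course up to the kernel designers), the
model would look like this from a task's point of view:

  /* Pseudo-code; none of these calls exist anywhere.  */

  /* Create a communication point.  The receive end is fixed to the
     creating thread once and for all.  */
  point = comm_point_create ();

  /* Map the point to task B inside a message, just like a memory
     mapping is transferred.  */
  msg_append_map_item (msg, point);
  msg_send (task_b, msg);

  /* B can now send on its mapped copy of the point ...  */
  msg_send (point_in_b, request);

  /* ... and the receiver only learns on which of its receive end
     points the message arrived, nothing about the sender.  */
  which_point = msg_receive (&incoming);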

> > The capability transfer mechanism I designed is a pure nightmare (I
> > have a race-free design, and it is horribly complicated, and can
> > hardly be optimized at all).
> 
> It looked ok to me; I think it's a good tradeoff to make capability
> transfer somewhat expensive and complex, if we gain performance for
> the much more common operation of *using* a capability.

But it's even better if we can get both with simpler code :)
 
> > * Task info capabilities are just an insane concept to go with in the
> >   first place.  They are a sad excuse for a real capability and/or
> >   notification server.
> 
> What we really need is a stable identifier for a communications
> partner. The problem with thread and taskid:s is that they can change
> meaning out of control of the tasks relying on the id. Task references
> is one way to deal with that. A purely local namespace, mapping local
> id:s to kernel objects, is a different solution. Mach ports are really
> such an id + a message queue; I just hope the L4 counterpart is more
> light weight...

I heard rumors that the performance of at least one new upcoming L4
design has in fact been measured, and that it is exactly as fast as
the existing L4 X.2 implementation.

I should add, however, that even that new upcoming design is not
powerful enough IMO, so you may have to add the cost of another table
lookup to that before you get what we can achieve.
 
> > So, instead we are now looking at capability server designs and what
> > type of kernel extensions are necessary.  It seems that only a very
> > small extension to upcoming L4 designs may be necessary, but it
> > depends a lot on the exact details, so we try to talk to everyone
> > about it.
> 
> Is there any information available about the "upcoming" L4 design?

I promised not to pass it on, sorry.  Also, there are at least two,
one from Espen in Karlsruhe as his thesis, and one from Dresden.  I
only know some details from the former.  I think Neal had a link to
some slides about the latter, maybe he can remind me.

What's more important is that, I think, both systems fail to address
the problem of identifying the objects associated with capabilities
when the capabilities are passed as arguments to an RPC, rather than
being the capability the RPC is invoked on.

I.e., consider container_copy (dest, src).  The receiving server needs
to look up the object corresponding to dest, and also the object
corresponding to src.  AFAICS, nobody has come up with a satisfying
solution to that problem.  Mungi uses random numbers as IDs, i.e.
protection by sparsity.
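
To make the Mungi approach concrete, here is a sketch with names and
types of my own invention.  The point is only that knowing a valid
sparse ID is the authority, and the server's lookup is an ordinary
search:

  #include <errno.h>
  #include <stdint.h>

  struct container;

  /* Hypothetical helpers, named only for the sake of the example.  */
  struct container *container_find (uint64_t id);
  int do_copy (struct container *dest, struct container *src);

  int
  server_container_copy (uint64_t dest_id, uint64_t src_id)
  {
    /* The IDs are sparse random numbers; guessing a valid one is
       statistically infeasible, so holding the ID is the capability.  */
    struct container *dest = container_find (dest_id);
    struct container *src = container_find (src_id);
    if (dest == NULL || src == NULL)
      return EINVAL;
    return do_copy (dest, src);
  }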

> From what I recall of vague earlier discussions, it would be neat to
> have some L4-"capabilities", where a capability simply means the
> ability to send a message to a particular thread, with some way of
> mapping or granting such capabilities to other tasks. As usual, we'd
> also need some way to implement no-senders notifications to the
> receiving end of a capability and (maybe less important)
> death-notifications to the sending ends. It's highly desirable to keep
> any central capability server out of the common case communication
> path.

You can separate these problems.  The first part, a capability to
send a message that can be mapped and granted, is what the new L4
designs contain.

The second part, the death notifications, is the "reference counting"
problem, and it should be optional (reference counting may leak
information and thus add covert channels).  There is a fundamental
issue here: Microkernel designs usually despise reference counting
and require explicit destruction, which makes a lot of sense.  But the
question is not whether you have reference counting at the lowest
level, but whether you can add it on top of that, and this is where I
have not yet seen a convincing solution.  There are some aspects of
the design which are obvious, but consider this call to the capability
server:

cap_get_ref (cap_server, cap)

The cap server needs to identify what the object "cap" is.  If this is
just a mapping, it can't know that.  So, again, you need to look up
the objects associated with caps when the caps are given as arguments.

Of course I am making a lot of assumptions here.  Some may not be valid.

> > About notifications: My current stance is that they are fundamental,
> > and have to be done right.  Instead of minimizing their use, I tried
> > to imagine what we could do if we had good notification support.
> 
> Cool. Notifications were an important part of the original Hurd
> design, and I always felt a little uneasy about the "all-ipc-is
> synchronous and blocking". I think performance is less important for
> notifications than for general capabilities, so it should be ok to use
> a central server for that. But note that a notification server which
> also queues messages is almost the same thing as a server implementing
> Mach ports...

I know and obviously that's not what we want.  So, assume that we do
not queue or even route messages through the cap server, but just use
the cap server for reference counting.  For example:

A wants to send a cap to B.  It maps the cap to B temporarily.  Then B
sends a message to the cap server, passing on the mapping of the cap
and requesting its own reference and its own mapping.  Then A unmaps
the temporary mapping of the capability to B.

On the surface, this is very similar to the cap passing protocol we
currently have in mind.  But the actual implementation and design
details would be entirely different, of course.  Also, B doesn't need
to get its own reference: It can use the capability before replying
to A, if it is OK that this can fail if A dies or revokes the
capability forcibly.

The actual message send would be done via a simple kernel system call,
and delivered directly to the communication end point, not through the
client A, the cap server, or anything else.
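
Spelled out step by step, with every call below invented just to name
the operations:

  /* A: map the cap to B temporarily, as part of a message.  */
  map_cap (task_b, cap);

  /* B: register with the cap server, passing the temporary mapping
     on, and get back its own reference and its own mapping.  */
  cap_server_register (cap_server, cap_in_b, &my_cap);

  /* A: remove the temporary mapping again.  */
  unmap_cap (task_b, cap);

  /* B: from now on, a message send is a plain kernel system call,
     delivered directly to the communication end point.  */
  msg_send (my_cap, request);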

> To sum up, from the clues you give me, it seems like you want a
> heavy-weight almost-mach port concept for notifications, and a lighter
> form of ipc directly between tasks/threads, that relies on some
> lightweight and so far non-existing capability mechanism in L4.

I want a light-weight concept for message transfer and object
identification, and a reference counting cap+notification server.  The
notification part of the server would be comparatively heavy, but the
reference counting itself should be moderately light-weight.

There are issues, of course.  Consider a server returning a new
capability to a client.  If you want to do proper reference counting
and no-senders notification support, you must set it up in such a way
that the client gives the server the capability to insert a new
capability into its name space.  So this is a bit expensive, like
this:

The client creates an empty "capability box" in the cap server and
maps this capability to the server.  The server sends a message to its
cap server, passing the new capability and the cap box as arguments.
The cap server installs the capability into the client, as authorized
by the cap box.  Note that the server does not need to get its own
reference for the cap box, although it could do so.  Then the server
returns to the client.  The client would then revoke the cap box
object.
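
In made-up pseudo-code, the whole exchange would look like this:

  /* Client, RPC 1: create an empty capability box at the cap server.  */
  box = cap_box_create (cap_server);

  /* Client, RPC 2: the actual request, with the box mapped to the
     server as part of the message.  */
  result = server_request (server, box, &args);

  /* Server, RPC 3, while handling RPC 2: pass the new capability and
     the box to the cap server, which installs the capability into the
     client's name space, as authorized by the box.  */
  cap_box_fill (cap_server, box_in_server, new_cap);

  /* Client, RPC 4: revoke the cap box object again.  */
  cap_box_destroy (cap_server, box);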

This is four RPCs, and more expensive than the current model.  Not
much you can do about it, though.  In Mach this is solved because
capabilities get copied/moved by the kernel while the message is in
transit, so no races can occur.  We don't have that luxury if we
separate message passing from reference counting, obviously.

However, this example also illustrates how cheap cap transfer can
become: The cap box itself, which is transferred from the client to
the server and then to the cap server, does not cost anything beyond
manipulating the mapping database in the kernel.

Disclaimer: I may change my mind anytime :)

Thanks,
Marcus
