bug-hurd

Re: emulating no-senders notifications in L4?


From: Niels Möller
Subject: Re: emulating no-senders notifications in L4?
Date: 20 Dec 2001 00:37:34 +0100
User-agent: Gnus/5.09 (Gnus v5.9.0) Emacs/21.1

I wrote, some time ago:

> To me, it seems that for resource cleanup to happen automatically and
> reliably, all resources must be registered with the task-server,
> and there must be no way to unregister a resource without also
> destroying it or giving up access to it. To me, this seems to imply
> that all access to the resources must be indirect.

...

> This sounds a lot like having a mach-port-server that handles all ipc.
> Is that the way to go? How efficiently can the extra redirection be
> handled in L4?

I've been thinking a little more about how to do this. I think I now
understand how to do port-rights on L4, and I'd like to share that.

We have a single "port-right server" that is on the path of all rpc:s.
It is ultimately trusted by all other processes. It has book-keeping
rpc-calls for creating and transferring ports and port rights, and for
generating reliable no-senders notifications, but I won't say anything
about those here. What I want to talk about now is an rpc between two
other tasks, which is a "simple" rpc that doesn't transfer ports or
port-rights between the tasks. I believe this class of "simple" rpc:s
is what matters for performance.

The port-rights server can be quite simple. It has two essential data
structures:

  struct port
  {
    /* Perhaps the owner field should not be a task_id, but instead  
     * a thread_id corresponding to a dedicated thread in the owner task.
     */
    task_t owner;
  };
  
  struct port
  all_ports[MAX_PORTS];
  
  struct port_right
  {
    unsigned port;
    task_t owner; /* As above, perhaps a thread_id instead */
    enum { normal, send_once, invalid } type;
  };
  
  struct port_right
  all_port_rights[MAX_PORT_RIGHTS];

A port is identified by its (unsigned) index in the array all_ports.
Similarly, a port-right is identified by its index in the array
all_port_rights. These id:s are globally unique for all tasks in the
system, but may be recycled.
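Recycling those id:s takes a little book-keeping. Here is a minimal sketch, in C, of how the server could allocate and recycle port-right slots with an intrusive free list; the names (rights_init, right_alloc, right_free, next_free) and the free-list scheme are my illustration, not anything fixed by the design above:

```c
/* Hypothetical allocation scheme for the all_port_rights table.
 * Invalid slots double as free-list links. */

#define MAX_PORT_RIGHTS 1024

typedef unsigned task_t;

enum right_type { RIGHT_INVALID, RIGHT_NORMAL, RIGHT_SEND_ONCE };

struct port_right
{
  unsigned port;
  task_t owner;
  enum right_type type;
  unsigned next_free;   /* valid only while type == RIGHT_INVALID */
};

static struct port_right all_port_rights[MAX_PORT_RIGHTS];
static unsigned free_head;

/* Chain every slot into the free list once at startup. */
static void
rights_init (void)
{
  for (unsigned i = 0; i < MAX_PORT_RIGHTS; i++)
    {
      all_port_rights[i].type = RIGHT_INVALID;
      all_port_rights[i].next_free = i + 1;
    }
  free_head = 0;        /* the value MAX_PORT_RIGHTS means "none" */
}

/* Returns a (possibly recycled) id, or MAX_PORT_RIGHTS when full. */
static unsigned
right_alloc (unsigned port, task_t owner, enum right_type type)
{
  if (free_head >= MAX_PORT_RIGHTS)
    return MAX_PORT_RIGHTS;
  unsigned id = free_head;
  free_head = all_port_rights[id].next_free;
  all_port_rights[id].port = port;
  all_port_rights[id].owner = owner;
  all_port_rights[id].type = type;
  return id;
}

static void
right_free (unsigned id)
{
  all_port_rights[id].type = RIGHT_INVALID;
  all_port_rights[id].next_free = free_head;
  free_head = id;
}
```

Freed slots go to the head of the list, so an id can be handed out again immediately; that is exactly the recycling behaviour described above, and also why a stale id must fail the owner check rather than be trusted.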

I don't know how to multithread the port-rights server properly, so
I'll pretend that all its work can be done by a single thread. This
thread waits for and handles incoming rpc:s.

All rpc:s sent to the port-rights server (and that is more or less *all*
rpcs in the entire system) have to include one mandatory argument: the
id of the port-right the sender wants to exercise. This makes the
forwarding efficient. When an rpc is received by the port-rights
server, it takes the following steps:

1. It checks that the port-right is not out of range, and that the
   originator of the rpc matches the owner of the port right.

     if (pr >= MAX_PORT_RIGHTS
         || sender != all_port_rights[pr].owner)
       fail;

2. It checks that the type of the port-right isn't invalid. If the
   type is send-once, it is degraded to invalid.

     switch(all_port_rights[pr].type)
       {
         case invalid:
           fail; break;
         case send_once:
           all_port_rights[pr].type = invalid;
           /* Fall through */
         case normal:
           /* Normal processing */
       }

3. From the port right, it gets the port and the port's owner.

      /* This could even be cached directly in the port_right struct,
       * to save a memory reference */
      dest = all_ports[all_port_rights[pr].port].owner;

4. It forwards the rpc to the owner of the port, waits for the reply,
   and forwards the reply to the original sender. The message need not
   be modified in any way, but it might make sense to replace the
   first argument (that which was the port-right-id) with the
   recipient's port id, as that is more useful for the receiver to
   multiplex on.

(Of course, it's the "wait for reply" in step 4 that makes
multi-threading necessary. An alternative to heavy multi-threading
might be to use only rpc:s with a very small timeout; that would
increase the work needed by communicating tasks, and the total number
of rpc:s).
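Putting steps 1-3 together, the hot path of the server could look roughly like this. This is only a sketch: the table sizes, the TASK_NONE sentinel, the capitalized enum names and the function name route_rpc are made up for illustration, and the declarations are repeated so the fragment stands alone:

```c
#define MAX_PORTS 1024
#define MAX_PORT_RIGHTS 1024

typedef unsigned task_t;
#define TASK_NONE ((task_t) -1)

enum right_type { RIGHT_INVALID, RIGHT_NORMAL, RIGHT_SEND_ONCE };

struct port { task_t owner; };
struct port_right { unsigned port; task_t owner; enum right_type type; };

static struct port all_ports[MAX_PORTS];
static struct port_right all_port_rights[MAX_PORT_RIGHTS];

/* Steps 1-3 above in one function: validate the port-right id and
 * resolve the destination task. Returns the owner of the target port,
 * or TASK_NONE if the rpc must be rejected. Degrades a send-once
 * right to invalid as a side effect of a successful lookup. */
static task_t
route_rpc (task_t sender, unsigned pr)
{
  /* Step 1: range and ownership checks. */
  if (pr >= MAX_PORT_RIGHTS || sender != all_port_rights[pr].owner)
    return TASK_NONE;

  /* Step 2: type check; a send-once right is consumed here. */
  switch (all_port_rights[pr].type)
    {
    case RIGHT_INVALID:
      return TASK_NONE;
    case RIGHT_SEND_ONCE:
      all_port_rights[pr].type = RIGHT_INVALID;
      /* Fall through */
    case RIGHT_NORMAL:
      break;
    }

  /* Step 3: look up the destination task through the port. */
  return all_ports[all_port_rights[pr].port].owner;
}
```

Step 4 would then be an L4 send to the returned task followed by a receive, which is where the threading question comes in; the part shown here is just the dozen-or-two instructions estimated below.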

All the recipient of the message has to check is that it was sent by
the port-rights server (and a kernel feature saying that one accepts
rpc:s only from a single task/thread might be simple and possible).
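The receiver-side check is correspondingly tiny. A sketch, assuming a well-known id for the port-rights server (the constant and function name are hypothetical; in a real system the id would be handed to the task at startup):

```c
typedef unsigned task_t;

/* Hypothetical well-known id of the port-rights server. */
#define PORT_RIGHTS_SERVER ((task_t) 1)

/* A message is trusted iff it arrives from the port-rights server;
 * anything else is simply dropped. */
static int
message_trusted (task_t sender)
{
  return sender == PORT_RIGHTS_SERVER;
}
```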

So what is the cost of all this? I estimate that the work needed for
steps 1-3 is very small, at most a dozen or two machine instructions.

So the cost is

* four L4 context switches,

* two L4 rpc:s (each a send and receive),

* the code needed to pass on a message and its reply

* a dozen instructions for doing the security checks

* one register less for passing useful data in each rpc

* the port and port-rights arrays would use some cache-space more or
  less constantly.
  
Important questions:

Can the scheme above be improved (without adding security features
like port-rights to the kernel)?

Is it "efficient"? The primary target for comparison is of course
Mach. For those Hurd rpc:s for which the separation of message and
reply isn't essential, the above should be compared to *two* Mach rpc
calls. Another important target for comparison is doors, as
implemented in Solaris and Spring, since those are claimed to be
really fast *and* seem to have the security features the Hurd needs.

Is this service good enough? Does it provide the kind of port-right
abstraction that the Hurd needs?

Best regards,
/Niels


