l4-hurd

Re: Server granularity


From: Marcus Brinkmann
Subject: Re: Server granularity
Date: Sun, 16 Oct 2005 20:40:42 +0200
User-agent: Wanderlust/2.14.0 (Africa) SEMI/1.14.6 (Maruoka) FLIM/1.14.7 (Sanjō) APEL/10.6 Emacs/21.4 (i386-pc-linux-gnu) MULE/5.0 (SAKAKI)

At Sat, 15 Oct 2005 16:18:22 -0400,
"Jonathan S. Shapiro" <address@hidden> wrote:
> I would like to describe another difference between EROS/Coyotos and
> Hurd: our assumptions about the granularity of servers. From various
> statements that have been made by Hurd people in this discussion, it
> sounds like the Hurd structure is traditionally client/server: a server
> serves many objects. This design is very natural -- and probably
> unavoidable -- in a non-persistent system.

This is true.  More specifically, a single server serves many objects
to (potentially) many different clients, which belong to (potentially)
many users.

One reason for this design is that it enables sharing.  Of course,
there is no resource accountability in the Hurd on Mach; resource
accountability seems to make sharing tough.  Another thing to note is
that the Hurd strives for POSIX compatibility, and under POSIX a
shared resource must survive if any of the holders of the resource
terminates, even if it is killed with "kill -9" (which I interpret
as: the task is not allowed to execute even one more instruction).

Here are examples:

The init server: One process, many objects.  The objects are used for
shutdown notifications.  I'm not sure a persistent system even needs
an init server :)

The process server: One process, many objects.  Besides the process
list, the process server provides the namespace for message ports:
Message ports are used to send signals directly to a process.  If
you have resource accountability, there is no pressure to have a
global process space that can be utilized by the system administrator
(the admin can just revoke the resources).  But if you want
process-to-process communication (signals), you need some shared name
server for that.

The authentication server: One process, many objects.  Implements
ACL-based authentication.  This is how it works: The client gives the
server some rendezvous capability that the server can show to the
authentication server, along with a capability that the server
(eventually) wants to return to the client.  The authentication
server returns the client's user ID to the server, and the client can
then pick up the server's capability from the authentication server.

The pipe server: One process, many objects.  In the Hurd on L4, I
planned to have one pipe server per user.  Nothing speaks against
having one process per pipe in that case.  I am not sure what the
impact is on pipes between processes of different trust domains.
Whose responsibility is it to pay for the pipe resources?  Probably
that of the shell that creates the pipe in the first place.  However,
this means that your I/O descriptors potentially cannot be trusted.

The socket server: Actually, in the Hurd this is the same as the pipe
server, as pipes are implemented as Unix domain sockets.

The filesystem: Eeew, let's not talk about that here :)

There are other servers, for example for device nodes like /dev/null:
One server, many objects.  Many of those could easily be done on a
per-user basis.  Note that nobody has really tried to push for such
changes, so the Hurd is a bit careless here.  However, for some
purposes of POSIX emulation, where sharing is desired, I think a
shared server is necessary.

Thanks,
Marcus
