Re: Getting Started with Hurd-L4

From: Marcus Brinkmann
Subject: Re: Getting Started with Hurd-L4
Date: Mon, 25 Oct 2004 22:50:03 +0200
User-agent: Wanderlust/2.10.1 (Watching The Wheels) SEMI/1.14.6 (Maruoka) FLIM/1.14.6 (Marutamachi) APEL/10.6 Emacs/21.3 (i386-pc-linux-gnu) MULE/5.0 (SAKAKI)

At Mon, 25 Oct 2004 21:30:12 +0100,
Neal H. Walfield wrote:
> The client does not give the container to the server as it would not
> be able to get it back.  The client gives the server access to the
> container including the right to uninterruptibly lock the container
> for a time period.  (This way the server can be sure that the client
> will not remove the container while it is filling it or corrupting it
> before it takes a copy into its cache and allows the client to know
> that it can get its resources back eventually.)
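
The hand-off Neal describes could be modeled roughly like this. All names here are illustrative, not the real physmem or Hurd-L4 interfaces; the "time period" is a logical clock standing in for whatever the real mechanism would be:

```c
#include <stdbool.h>
#include <string.h>

/* Hypothetical model of a memory container shared between client
   and server.  The server takes a time-bounded, uninterruptible
   lock, fills the container, copies the data into its own cache,
   and releases; the client can reclaim once the lock is released
   or the period lapses. */
struct container {
    char data[64];
    bool locked;          /* server holds the uninterruptible lock */
    int  lock_expires;    /* logical time at which the lock lapses */
};

/* Server side: lock, fill, take a private copy, unlock. */
static bool server_fill (struct container *c, int now, int duration,
                         const char *payload, char *cache, size_t cache_len)
{
    if (c->locked)
        return false;               /* someone else holds the lock */
    c->locked = true;
    c->lock_expires = now + duration;

    strncpy (c->data, payload, sizeof c->data - 1);
    strncpy (cache, c->data, cache_len - 1);  /* server's own copy */

    c->locked = false;              /* done before the deadline */
    return true;
}

/* Client side: resources come back eventually, because the container
   is reclaimable once unlocked or once the lock period has lapsed. */
static bool client_can_reclaim (const struct container *c, int now)
{
    return !c->locked || now >= c->lock_expires;
}
```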

I think we dropped the idea of using a notion of "time period" in this
context; consider "kill -9".  I also don't see a reason: if the
container access is revoked, the server can deal with that easily (I
don't know of a situation where the server must be guaranteed that the
container exists for a period of time).
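
A minimal sketch of "the server can deal with that easily", with made-up names (not a real Hurd-L4 API): if access is revoked mid-fill, the server abandons the partial work and fails the request, keeping no state that depends on the container outliving the operation:

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical container whose access can be revoked asynchronously,
   e.g. because the client was killed with "kill -9". */
struct revocable_container {
    unsigned char data[32];
    bool revoked;               /* set when the client revokes access */
};

/* Returns the number of bytes written, or 0 if the container was
   revoked during the fill.  Partial work is simply dropped. */
static size_t server_fill_revocable (struct revocable_container *c,
                                     const unsigned char *src, size_t len)
{
    size_t i;
    for (i = 0; i < len && i < sizeof c->data; i++) {
        if (c->revoked)
            return 0;           /* abandon the fill, fail the RPC */
        c->data[i] = src[i];
    }
    return i;
}
```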

The only exception I know of is locking for DMA, and that is by nature
special, and a highly privileged operation, reserved for device drivers.

I am not sure about the cross-CPU issue.  The scheduler doesn't have
any idea about what IPC operations are occurring, and a server thread
can only run on one CPU, while it will deal with many clients that can
run on different CPUs (assuming we don't have one server thread per
CPU, which is what the L4 people recommend, but which has its own
problems).

For system services like physmem and device drivers, I can very well
imagine one designated server thread per CPU.  For untrusted servers,
it could be nasty.  However, as the system services are used so much,
I'd expect that this would be an important optimization on SMP
systems.

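The thread-per-CPU structure could look like the following sketch, using POSIX threads as a stand-in for L4 threads. CPU pinning and the actual IPC path are omitted; the point is only that each worker serves requests queued for its own CPU, so no request crosses CPUs inside the server:

```c
#include <pthread.h>
#include <stddef.h>

#define NCPUS 4

/* Simulated per-CPU request queues: requests[i] is the number of
   pending requests for CPU i, handled[i] what its worker served. */
static int requests[NCPUS];
static int handled[NCPUS];

/* One designated worker per CPU; each drains only its own queue. */
static void *worker (void *arg)
{
    int cpu = *(int *) arg;
    handled[cpu] = requests[cpu];
    return NULL;
}

static void run_workers (void)
{
    pthread_t threads[NCPUS];
    int ids[NCPUS];

    for (int i = 0; i < NCPUS; i++) {
        ids[i] = i;
        pthread_create (&threads[i], NULL, worker, &ids[i]);
    }
    for (int i = 0; i < NCPUS; i++)
        pthread_join (threads[i], NULL);
}
```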
