
Re: Getting Started with Hurd-L4

From: Marcus Brinkmann
Subject: Re: Getting Started with Hurd-L4
Date: Mon, 25 Oct 2004 23:34:18 +0200
User-agent: Wanderlust/2.10.1 (Watching The Wheels) SEMI/1.14.6 (Maruoka) FLIM/1.14.6 (Marutamachi) APEL/10.6 Emacs/21.3 (i386-pc-linux-gnu) MULE/5.0 (SAKAKI)

At Mon, 25 Oct 2004 22:07:46 +0100,
Neal H. Walfield wrote:
> At Mon, 25 Oct 2004 22:50:03 +0200,
> Marcus Brinkmann wrote:
> > 
> > At Mon, 25 Oct 2004 21:30:12 +0100,
> > Neal H. Walfield wrote:
> > > The client does not give the container to the server as it would not
> > > be able to get it back.  The client gives the server access to the
> > > container including the right to uninterruptibly lock the container
> > > for a time period.  (This way the server can be sure that the client
> > > will not remove the container while it is filling it or corrupting it
> > > before it takes a copy into its cache and allows the client to know
> > > that it can get its resources back eventually.)
> > 
> > I think we dropped the idea of using a notion of "time period" in this
> > context, consider "kill -9".  I also don't see a reason: If the
> > container access is revoked, the server can deal with that easily (I
> > don't know of a situation where the server must be guaranteed that the
> > server exists for a period of time).
>   ^do you mean client here?
> What about cache consistency?  The server has the device driver read
> data into a container.  When the device driver returns, the container
> has the data and the file system takes a COW copy into its cache as
> part of its extra page allocation.  If the client is able to
> manipulate the data between the DMA operation and the placement of the
> data into the cache, there is an opportunity for corruption.  (Hence
> the exclusive lock.)

I think this must be done solely at the privileged device driver level.

> If the client dies, until the server releases the container, who owns
> the guaranteed pages?  (I guess either the user or we keep a zombie
> around?)

physmem.  It will reclaim it as soon as the lock is released.

> > The only exception I know of is locking for DMA, and that is by nature
> > special, and a highly privileged operation, reserved for device drivers.
> This is a rather large exception and integral to the system as I
> understand it.

The difference here is if you expect an untrusted filesystem server to
perform the locking, or only trusted system code like device drivers.

It's a huge difference: we can expect trusted system code to behave
well, so no client-defined "time period" is necessary.  The
device driver can simply lock the memory, have physmem unmap all
mappings to it, and prevent new mappings from being generated until
the DMA operation is completed (physmem also will not reorganize that
physical memory).

The whole locking thing would then be a private deal between two
trusted partners, the device driver and physmem.  It's completely
different and a lot simpler than a similar deal between the
(untrusted) client and the (untrusted) filesystem server, as you
seemed to propose at some time.
> > I am not sure about the cross-CPU issue.  The scheduler doesn't have
> > any idea about what IPC operations are occurring, and a server thread
> > can only run on one CPU while it will deal with many clients that can
> > run on different CPUs (assuming we don't have one server thread per
> > CPU, which is what the L4 people recommend, but which has its own
> > issues).
> There are two concepts here that I need clarified:
> If you do an IPC, do you not donate the rest of your time slice to the
> receiving process (assuming you don't block).  (Hence the scheduler is
> not invoked.)

That seems to be true.  However, like for ThreadSwitch, I'd expect
this not to be done if the two threads reside on different processors.
L4 will never migrate threads from one CPU to another.  So, both
threads already have to be on the same CPU for this donation to take
place.

(The spec does not actually say whether and how donation takes place
at IPC.  But the above is what we can conclude from what makes sense
and from what the spec does say elsewhere.)

> Second, assuming the server thread is ready (i.e. in receiving
> "mode"), it is not running--it is blocked waiting to receive a
> message.  Hence, it is not on a CPU (it may have a last CPU where it
> has a small cache footprint but that is different as I understand it).

It seems to me that there is a fixed thread-to-CPU mapping independent
of thread state, that can only be changed by migrating threads using
the Schedule system call.

