Re: Thread model

From: Neal H. Walfield
Subject: Re: Thread model
Date: Tue, 11 Mar 2008 12:10:17 +0100
User-agent: Wanderlust/2.14.0 (Africa) SEMI/1.14.6 (Maruoka) FLIM/1.14.8 (Shijō) APEL/10.6 Emacs/21.4 (i486-pc-linux-gnu) MULE/5.0 (SAKAKI)


> The real solution here of course is to fix the thread model

I fully agree that given Mach's architecture, one kernel thread per
extant RPC is the wrong approach.

> using some kind of continuation mechanism: Have a limited number of
> threads (ideally one per CPU) handle incoming requests.  Whenever
> some operation would require blocking for some event (in the case of
> diskfs, waiting for the underlying store to finish reading/writing),
> the state is instead saved to some list of outstanding operations,
> and the thread goes on handling other requests. Only when the event
> completes, we read the state back and continue handling the original
> request.

What you are suggesting is essentially using a user-level thread
package.  (Compacting a thread's state in the form of a closure is a
nice optimization, but the model essentially remains the same.)  The
main advantage of a user-level thread package is that the thread
memory is pageable and thus less likely to exhaust the scarcer kernel
resources.  In the end, however, it suffers from the same problems as
the current approach.

The approach that I am taking on Viengoos is to expose an interface
that is atomic and restartable.  (This is explored by Ford et al. in
[1].)  The basic design principle that I have adopted is that object
methods should be designed such that the server can obtain all
resources it needs before making any changes to visible state.  In
this way, the server knows that once the operation starts, it will
complete atomically.  When the server fails to obtain the resources,
it frees all the resources it obtained so far and queues the message
buffer on the blocked resource.  When the blocked resource becomes
free, the message buffer is placed on the incoming queue and handled
as if it just arrived.  The result is that no intermediate state is
saved.

There are some cases where it is not easy to obtain all the required
resources upfront.  (In particular, I am thinking of activity
destruction in which all resources allocated to the activity must be
released.)  In such cases, I have tried to implement the method in
such a way that no intermediate state needs to be saved on interrupt.
(Considering again the case of activity destruction, I first mark the
activity as dead, thereby blocking it, and then iterate over the
objects, deallocating each one.  If an object is not in memory, e.g.,
it is on disk, I suspend.  Restarting from the beginning is safe: we
just continue deallocating objects.)

An orthogonal concern is the use of locks.  An approach to reducing
their number is the use of lock-free data structures.  See Valois'
thesis for a starting point [2].


[1] "Interface and Execution Models in the Fluke Kernel" by Bryan
Ford, Mike Hibler, Jay Lepreau, Roland McGrath and Patrick Tullmann.

[2] ftp://ftp.cs.rpi.edu/pub/valoisj/thesis.ps.gz
