l4-hurd

From: Marcus Brinkmann
Subject: Re: Task server implementation and integration among the other core servers
Date: Mon, 21 Mar 2005 13:17:44 +0100
User-agent: Wanderlust/2.10.1 (Watching The Wheels) SEMI/1.14.6 (Maruoka) FLIM/1.14.6 (Marutamachi) APEL/10.6 Emacs/21.3 (i386-pc-linux-gnu) MULE/5.0 (SAKAKI)

Hi,

just a quick first response.

At Mon, 21 Mar 2005 04:30:12 +0000,
Matthieu Lemerre <address@hidden> wrote:
> 
> Hi,
> 
> I have written an implementation of the task server, and modified the
> other servers so that they use the new interfaces. The patch is a bit
> huge, so here are the key points:
> 
> * The task server has three main RPCs: task_threads_create,
>   task_threads_terminate and task_terminate (the names are perhaps not
>   well chosen; I tried to mimic the Mach ones).  task_threads_create
>   is responsible for both task and thread creation.

Didn't I suggest at some point that creating an empty task with no
threads is a good idea for passing to a filesystem for suid
invocation?  The idea was to delay the creation of the actual L4
address space (with the first thread) until the filesystem actually
uses it.  This makes revocation a no-op in the common case.

What became of that?
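
To make the idea concrete, here is a minimal sketch of the deferred
address-space creation.  The names (task_create_empty,
task_threads_add) and the bookkeeping are hypothetical, not the
interface from the patch; the point is only that an empty task
reserves nothing but a task ID, so revoking it before first use is
essentially free:

/* Hypothetical sketch of lazy address-space creation.  */

#include <errno.h>

struct task
{
  unsigned int task_id;     /* Reserved at creation time.  */
  unsigned int nr_threads;  /* 0 until the first thread is added.  */
  int has_address_space;    /* The L4 address space only exists once
                               the first thread has been created.  */
};

/* Create an empty task: just reserve the task ID.  No L4 address
   space exists yet, so revoking the task before it is used only
   releases the ID.  */
int
task_create_empty (struct task *task, unsigned int task_id)
{
  task->task_id = task_id;
  task->nr_threads = 0;
  task->has_address_space = 0;
  return 0;
}

/* Add COUNT threads to TASK.  The first thread implicitly creates
   the L4 address space (in real code, an L4 ThreadControl call with
   the new thread as its own address-space specifier).  */
int
task_threads_add (struct task *task, unsigned int count)
{
  if (count == 0)
    return EINVAL;
  task->has_address_space = 1;  /* Created with thread 0 if missing.  */
  task->nr_threads += count;
  return 0;
}

A filesystem doing a suid exec would then receive the empty task from
the parent, and only the call that adds the first thread would touch
L4 at all.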

> * Task now decides on the utcb of each thread itself, so the utcb
>   argument of task_thread_alloc is no longer necessary.  This is
>   because we have to store the utcb for a task anyway, so we should
>   take advantage of it :).  I modified wortel to provide the core
>   servers' utcbs to task for their allocation.

Why do we need to store it, for task_terminate?  That's a pain :)

Still, this is wrong.  It defeats the ability to let users create
threads which are intended for migration to other address spaces.  We
do not want to use that feature, but somebody else may.  I think it's
also important for orthogonal persistence to be able to recreate a
thread at the right UTCB address.

Well, we know by now that in the next version of L4 thread IDs will
become mappable items, and tasks will be able to directly
create/destroy threads they have mapped.  So if you want, we can leave
the code as is for now (we are going to rewrite it before anybody
cares about orthogonal persistence or thread migration ;)
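
For comparison, a minimal sketch of the alternative: keep an optional
utcb argument so a caller can pin a thread to a specific UTCB address,
which is what recreating a checkpointed thread, or preparing one for
migration, would need.  The names, constants and slot bookkeeping are
made up for illustration; this is not the patch's task_thread_alloc:

#include <stdint.h>
#include <errno.h>

#define MAX_THREADS     64
#define UTCB_AREA_BASE  ((uintptr_t) 0x80000000)
#define UTCB_SIZE       ((uintptr_t) 512)
#define UTCB_ANY        ((uintptr_t) 0)   /* "Let the server choose."  */

struct task
{
  unsigned char utcb_used[MAX_THREADS];   /* One flag per UTCB slot.  */
};

/* Allocate a thread in TASK.  If UTCB is UTCB_ANY, the server picks a
   free slot itself (what the patch now does unconditionally);
   otherwise the caller-supplied address is honoured if it is valid
   and free.  The chosen address is returned in *UTCB_OUT.  */
int
task_thread_alloc (struct task *task, uintptr_t utcb, uintptr_t *utcb_out)
{
  unsigned int slot;

  if (utcb == UTCB_ANY)
    {
      for (slot = 0; slot < MAX_THREADS; slot++)
        if (!task->utcb_used[slot])
          break;
      if (slot == MAX_THREADS)
        return EAGAIN;
    }
  else
    {
      if (utcb < UTCB_AREA_BASE || (utcb - UTCB_AREA_BASE) % UTCB_SIZE)
        return EINVAL;
      slot = (utcb - UTCB_AREA_BASE) / UTCB_SIZE;
      if (slot >= MAX_THREADS || task->utcb_used[slot])
        return EEXIST;
    }

  task->utcb_used[slot] = 1;
  *utcb_out = UTCB_AREA_BASE + slot * UTCB_SIZE;
  /* Real code would now do the L4 ThreadControl with this UTCB.  */
  return 0;
}

UTCB_ANY keeps the convenient default the patch provides, while still
leaving the door open for persistence and migration.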

> * Task groups are implemented using a circular singly-linked list of
>   tasks.  Thus deletion could be quite slow (we have to iterate over
>   each task to find the parent), but in practice I think that most
>   task groups will have only 1 or 2 tasks, and that structure allows
>   the simplest algorithms.
> 
>   Insertion of a task into a group is not a problem, but deletion
>   requires two locks (one for the parent, one for the task to be
>   deleted), so to avoid a deadlock, I decided that if we can't acquire
>   the two locks immediately, we just return EAGAIN (the client has to
>   do the task_terminate RPC again).  That should not happen very
>   often, so I guess it's not a problem.
> 
>   Deletion of a whole task group requires locking every task in the
>   group, so there is a deadlock problem here.  This problem may be
>   reduced if only the manager capability (proc) can do that operation
>   (it just has to make sure that it does not do it twice on the same
>   task group).

I have not checked out the details, but here are a couple of ideas you
may want to think about:

1) Have a single global lock for all task group manipulation.

2) Have a lock for each task group, and acquire the group lock for
   group manipulation.  Then lock tasks individually.  (Be careful
   about the locking order.)

3) Define a locking hierarchy, for example based on the task ID.  Sort
   the locks you need by the hierarchy.
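
For idea 3, the sketch below sorts the two locks by task ID before
taking them, so every caller acquires them in the same order and the
EAGAIN retry path is not needed for the two-task case.  It uses
pthread mutexes and a stripped-down struct task purely for
illustration; it is not the task server's actual code:

#include <pthread.h>

struct task
{
  unsigned int task_id;
  pthread_mutex_t lock;
};

/* Lock two tasks in a globally consistent order: lower task ID
   first.  Any two callers needing the same pair of locks acquire
   them in the same order, so they cannot deadlock against each
   other.  */
static void
task_lock_pair (struct task *a, struct task *b)
{
  struct task *first = a->task_id < b->task_id ? a : b;
  struct task *second = first == a ? b : a;

  pthread_mutex_lock (&first->lock);
  if (second != first)
    pthread_mutex_lock (&second->lock);
}

static void
task_unlock_pair (struct task *a, struct task *b)
{
  pthread_mutex_unlock (&a->lock);
  if (b != a)
    pthread_mutex_unlock (&b->lock);
}

Deleting a whole group is the same idea: collect the locks of all
member tasks, sort them by task ID, and take them in that order.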

Thanks,
Marcus