From: Marcus Brinkmann
Subject: Re: Task server implementation and integration among the other core servers
Date: Mon, 21 Mar 2005 20:41:40 +0100
User-agent: Wanderlust/2.10.1 (Watching The Wheels) SEMI/1.14.6 (Maruoka) FLIM/1.14.6 (Marutamachi) APEL/10.6 Emacs/21.3 (i386-pc-linux-gnu) MULE/5.0 (SAKAKI)

At Mon, 21 Mar 2005 19:33:57 +0000,
Matthieu Lemerre <address@hidden> wrote:
> In that case, you just call the task_threads_create RPC, ask it to
> create a new task with 0 threads in it. That's what the
> task_create_empty wrapper does.

Ah, so I was just making wrong assumptions based on the interface name.
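
For the record, I now picture the wrapper as roughly the following (the
signature and the types are just my guess, not the real interface):

  typedef int error_t;            /* Placeholder types, guessed.  */
  typedef unsigned int task_id_t;

  /* The real RPC stub; its actual signature may well differ.  */
  extern error_t task_threads_create (unsigned int nr_threads,
                                      task_id_t *new_task);

  /* Create a task container with no threads allocated in it yet.  */
  static error_t
  task_create_empty (task_id_t *new_task)
  {
    return task_threads_create (0, new_task);
  }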
 
> (To quote myself: I wrote some wrappers for common operations (empty
> task container creation, thread allocation ...)) Marcus, you're not
> paying attention to what I say :))

Ouch, you got me ;)

> But I know pretty much nothing about thread migration or orthogonal
> persistence, how that can be useful, etc...

Neither do I, but I know what is asked of us, and that is to give the
user total control over the thread-to-UTCB-address mapping.

> > 1) Have a single global lock for all task group manipulation.
> >
> I thought about this. But every task creation or deletion is a task
> group manipulation, so we would have many locking operations that may
> not be required. Maybe I could have a global lock just for the
> task_group_terminate RPC (the main issue).

Task creation/deletion doesn't need to be particularly fast.  OK, it's
in the fork path, and fork performance is important.  But still, I
doubt that many forks often happen in parallel on any system.  Taking
an uncontested lock is a pretty fast operation, so maybe one global
lock is enough.

Of course, such things are hard to figure out just by thinking about
them.  But sometimes a global lock is faster than fine-grained locking.
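
To make it concrete, I mean something as simple as this (pthread names
only for illustration, and the struct layout is made up):

  #include <pthread.h>

  struct task_group;
  struct task
  {
    struct task *group_next;
    struct task_group *group;
  };
  struct task_group
  {
    struct task *tasks;
  };

  /* One global lock serializing all task and task group manipulation.  */
  static pthread_mutex_t task_lock = PTHREAD_MUTEX_INITIALIZER;

  static void
  task_group_add (struct task_group *group, struct task *task)
  {
    pthread_mutex_lock (&task_lock);
    /* Splice the task into the group's list.  */
    task->group_next = group->tasks;
    task->group = group;
    group->tasks = task;
    pthread_mutex_unlock (&task_lock);
  }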

> > 2) Have a lock for each task group, acquire the group lock for group
> >manipulation. Then lock tasks individually. (Be careful about locking
> >order).
> 
> One noticeable fact: you cannot deadlock on insertion of an element
> into the list, only on deletion. So, maybe a lock just for the
> deletion operation. I remember something bothered me with that
> solution, but I can't remember what it was :). But it sounds like a
> good idea now.

The problem may be that if you try to get rid of all tasks in a group,
and there is a "forkbomb" that constantly tries to add tasks to the
group (and those tasks add themselves to the group, etc.), you may never
finish if you don't block out insertions.

At least that's one of the things you have to get right ;) One way to
do it would be to first invalidate all capabilities that these tasks
hold, so they are not able to make any RPCs anymore.  (Not sure I like
this solution, though; it's a bit on the cheesy side.)
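
For the "block out insertions" variant, I imagine something roughly
like this, with made-up names (the point is only that the group lock
stays held across the whole teardown, so nothing can join the group
while it is being destroyed):

  #include <pthread.h>

  struct task_group;
  struct task
  {
    struct task *group_next;
    struct task_group *group;
  };
  struct task_group
  {
    pthread_mutex_t lock;
    struct task *tasks;
  };

  /* Hypothetical helper that tears down a single task.  */
  extern void task_destroy (struct task *task);

  static void
  task_group_terminate (struct task_group *group)
  {
    pthread_mutex_lock (&group->lock);
    /* While we hold the group lock, insertions into the group block,
       so a fork bomb cannot keep the list growing under our feet.  */
    while (group->tasks)
      {
        struct task *task = group->tasks;
        group->tasks = task->group_next;
        task_destroy (task);
      }
    pthread_mutex_unlock (&group->lock);
  }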

> > 3) Define a locking hierarchy, for example based on the task ID.
> >Sort the locks you need by the hierarchy.
> 
> I tried to introduce a locking hierarchy by using a flat linked list
> instead of a circular one, but this was problematic because when
> destroying the first task, you would have to lock every task to change
> the pointer to the first one. So it was worse :). I did not think
> about using the task ID as the locking hierarchy.

Here is another idea: Use a special lock just for all list
manipulations.  List pointer manipulations are very fast, so this
could even be a spin lock.  If you look in the Hurd code, there are
sometimes special global spin locks just for reference counters or
list pointers, to avoid such problems.
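
As a sketch (again with pthread names just for illustration; the real
code would use its own spin lock type and list structures):

  #include <pthread.h>

  struct task
  {
    struct task *group_next;
  };

  /* Global spin lock protecting only the list pointers themselves;
     initialized once at startup with pthread_spin_init.  */
  static pthread_spinlock_t task_list_lock;

  static void
  task_list_remove (struct task **head, struct task *task)
  {
    pthread_spin_lock (&task_list_lock);
    /* The critical section is just a couple of pointer updates, so
       spinning is cheaper than sleeping on a mutex.  */
    for (struct task **p = head; *p; p = &(*p)->group_next)
      if (*p == task)
        {
          *p = task->group_next;
          break;
        }
    pthread_spin_unlock (&task_list_lock);
  }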

> I also thought of releasing both locks, then waiting for a
> condition variable and trying again. Not a very pretty solution.

Way too heavy.
 
> So, since I estimated that failing to acquire the lock would be quite
> unlikely, just asking the client to try again was sufficient.

Well, I don't like that very much :)

Thanks,
Marcus