
Re: The Hurd: what is it?


From: Marcus Brinkmann
Subject: Re: The Hurd: what is it?
Date: Wed, 09 Nov 2005 20:32:42 +0100
User-agent: Wanderlust/2.14.0 (Africa) SEMI/1.14.6 (Maruoka) FLIM/1.14.7 (Sanjō) APEL/10.6 Emacs/21.4 (i386-pc-linux-gnu) MULE/5.0 (SAKAKI)

At Wed, 9 Nov 2005 19:07:50 +0100,
Sergio Lopez <koro@sinrega.org> wrote:
> On the other hand, I don't think that the IBM case is applicable to us,
> since their objectives are far different from ours.

You think so?  Sure, they placed a strong priority on multiple
personalities, while we only have one.  Supporting multiple
personalities in the Hurd is advertised as one of its advantages,
though, and Workplace OS raises questions about that claim.

However, note that there are a number of results in the paper which
have nothing to do with multiple personalities and everything to do
with a multi-server design on top of Mach.  Take for example the
performance numbers for the file-server intensive tasks: Workplace
OS/2 ran 3 times slower than native OS/2.  And this is _after_ they
made IPC synchronous and improved its performance 6-10 fold.

The Liedtke papers shed light on why this is so.  The problem is not
purely the IPC performance.  A big issue is cache consumption: if the
kernel's working set is bigger than the cache, there is a strange
effect: performance degrades as you increase the cache size!  This is
because the kernel eats the cache, and it has to be refilled
afterwards.  This is why microkernels must be small, as in actual
number of bytes.
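
Just to give the shape of the argument, a back-of-the-envelope sketch
(every number in it is invented, only the shape matters):

  /* Rough illustration, not a benchmark: once the kernel's working set
     approaches the cache size, each trip through the kernel displaces
     the application's cache lines, which then have to be refilled.
     Growing the cache (up to the kernel's working set) grows that bill.  */
  #include <stdio.h>

  int
  main (void)
  {
    const double cache_bytes  = 512 * 1024; /* hypothetical L2 size */
    const double kernel_ws    = 640 * 1024; /* hypothetical kernel working set */
    const double line_size    = 64;         /* bytes per cache line */
    const double miss_penalty = 100e-9;     /* seconds per refilled line */

    double evicted = kernel_ws < cache_bytes ? kernel_ws : cache_bytes;
    double refill  = (evicted / line_size) * miss_penalty;

    printf ("worst-case refill bill per kernel invocation: ~%.0f us\n",
            refill * 1e6);
    return 0;
  }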

> > For problems with the Hurd passive translator design:
> > 
> > http://lists.gnu.org/archive/html/l4-hurd/2005-10/msg00081.html
> > 
> > Active translators also must be considered harmful:
> > 
> > http://lists.gnu.org/archive/html/hurd-devel/2003-10/msg00002.html
> > 
> > (There is not a complete explanation of all the possible problems, but
> > there are many examples, please use this as an opportunity for an
> > exercise---check out how Linux FUSE does it).
> > 
> 
> This is an interesting thing to discuss.  Do you really think that this
> can't be solved within our current implementation?

The passive translator problem may not be worth paying any attention
to, _unless_ you also want to fix a number of other issues, like
resource accountability and flexible security.  But these do require
larger architectural changes anyway.

The active translator problem seems serious to me.  Without any
guarantee about the implementation of a service, you cannot know what
it does.  This means that you must be prepared for any malicious
behaviour, including: no response (stalling the client), an infinite
virtual directory tree, confusing inode numbers and link counts, a
rapidly changing filesystem structure (to trigger race conditions),
and so on.

This is why in FUSE, users don't see the user filesystems of other
users.  I am afraid that, given the seriousness of the problem, this
is the only sane option.  Only with a broader semantic framework can
you re-enable sharing on a case-by-case basis.

Talking only about "safe" translators now: there is also the general
question, _if_ you want to make all applications translator-aware
(good luck ;), of which policy the programs should use with which
translators.  That is, which translators should they follow and which
not?  How do they even know what translator is running on a node?

The only thing that seems feasible at all is to have the translator
advise the applications by means of a "follow me" stat bit.  This is
a bit inflexible, but at least it calls for a consistent policy across
all applications.
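
A sketch of what that could look like from the application side.  Note
that the S_IFOLLOW bit below is entirely hypothetical; it only stands
in for whatever "follow me" bit would end up being allocated:

  /* Hypothetical sketch only: S_IFOLLOW does not exist, it is a
     placeholder for a translator-advertised "follow me" bit.  The
     point is that every application could apply the same one-line
     policy instead of each inventing its own.  Assumes "safe"
     translators, as above; O_NOTRANS is Hurd-specific.  */
  #include <fcntl.h>
  #include <sys/stat.h>
  #include <unistd.h>

  #define S_IFOLLOW 0   /* placeholder value for the hypothetical bit */

  int
  open_following_policy (const char *path)
  {
    struct stat st;

    if (stat (path, &st) < 0)
      return -1;

    if (st.st_mode & S_IFOLLOW)
      /* Translator says "follow me": use the translated view.  */
      return open (path, O_RDONLY);

    /* Otherwise stay on the underlying, untranslated node.  */
    return open (path, O_RDONLY | O_NOTRANS);
  }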

> > If you are looking at the actual Hurd implementation, you can find
> > plenty of denial of service attack possibilities, as well as denial of
> > resource attack possibilities.  No need to even try to enumerate them
> > all.  Note the absence of a quota system.  Note the infeasibility of
> > implementing a quota system in such a multi-server system without some
> > ground-breaking architectural changes.
> >
> 
> Mach knows about almost every resource allocation that the servers do, so I
> don't think it will be extremely hard to solve this without completely
> breaking our current design.

Mach doesn't know on whose behalf the server does the allocation.
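
To make that concrete, a simplified sketch (serve_read is made up and
not real Hurd server code; vm_allocate is the actual Mach call):

  /* Sketch of the accounting problem: whatever client triggered this
     RPC, the memory is allocated in the server's own task, so Mach has
     no way to charge it to the client.  */
  #include <mach.h>

  kern_return_t
  serve_read (mach_port_t reply, vm_size_t amount)
  {
    vm_address_t buf = 0;

    /* Charged to the *server* task, regardless of which client asked.  */
    kern_return_t kr = vm_allocate (mach_task_self (), &buf, amount, TRUE);
    if (kr != KERN_SUCCESS)
      return kr;

    /* ... fill buf from the backing store, send it in the reply ... */

    return KERN_SUCCESS;
  }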
  
> > Just as an illustration: The number of worker threads in the system
> > per server is unlimited.  It's not unusual for ext2fs to create 2000
> > threads under page pressure, because Mach swaps them out individually.
> > You can throttle, but not limit, the number, because of the possibility
> > of deadlocks.  There are many things wrong with that, starting from
> > the simple fact that the number of threads in a server should be a
> > function of the server design, and not of the number of users or
> > system load.
> 
> This issue will be completely solved by implementing Migrating Threads on
> GNU Mach, which is on the way.  Other work, like partly reducing the IPC
> semantics or copying by temporary mapping, can also be done to bring the
> performance to a reasonable level.

Well, there is no deep architectural flaw in this example, so
implementing migrating threads isn't even necessary to fix it.  It's
just one of the many things that require attention.  You did ask for a
_complete_ analysis, didn't you? :)
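
For illustration, the "thread count as a property of the server
design" idea is just the ordinary fixed worker pool.  A pthread
sketch, which deliberately ignores the paging-deadlock problem quoted
above and whose queue and sizes are invented:

  /* Sketch only: a fixed pool, so the thread count is a design-time
     choice rather than a function of load.  dequeue_request and
     handle_request are hypothetical server internals.  */
  #include <pthread.h>

  #define NWORKERS 16   /* design-time choice, not load-driven */

  struct request;                                   /* opaque */
  extern struct request *dequeue_request (void);    /* hypothetical, blocks */
  extern void handle_request (struct request *);    /* hypothetical */

  static void *
  worker (void *arg)
  {
    (void) arg;
    for (;;)
      handle_request (dequeue_request ());
    return NULL;
  }

  void
  start_workers (void)
  {
    pthread_t tid;
    for (int i = 0; i < NWORKERS; i++)
      pthread_create (&tid, NULL, worker, NULL);
  }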

Everybody agrees that synchronous message delivery is important to get
decent IPC performance.  It would be interesting to move the Hurd on
Mach to a synchronous IPC design.  There are a couple of places where
the Hurd relies on asynchronous delivery (and reply!!!), but those are
rare.  I am not sure about Mach, though (external memory objects,
notifications!?).  So, there may be some unexpected problems, and you
might have to make a couple of compromises, but it may be feasible.
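
For what it's worth, Mach already offers the combined send/receive
that a synchronous design would lean on; the question is whether the
Hurd's protocols can be made to use it everywhere.  A minimal sketch
of the two shapes (message construction and port setup are elided):

  /* Sketch: with MACH_SEND_MSG|MACH_RCV_MSG the request and the wait
     for the reply are one mach_msg call, which is the path a
     synchronous RPC design optimizes.  A bare MACH_SEND_MSG is the
     asynchronous case the Hurd relies on in a few places.  */
  #include <mach.h>

  kern_return_t
  rpc_call (mach_msg_header_t *msg, mach_port_t reply_port,
            mach_msg_size_t send_size, mach_msg_size_t rcv_limit)
  {
    /* Synchronous round trip: send the request and block for the reply.  */
    return mach_msg (msg, MACH_SEND_MSG | MACH_RCV_MSG,
                     send_size, rcv_limit, reply_port,
                     MACH_MSG_TIMEOUT_NONE, MACH_PORT_NULL);
  }

  kern_return_t
  async_send (mach_msg_header_t *msg, mach_msg_size_t send_size)
  {
    /* One-way send: the sender does not wait; any reply arrives later.  */
    return mach_msg (msg, MACH_SEND_MSG,
                     send_size, 0, MACH_PORT_NULL,
                     MACH_MSG_TIMEOUT_NONE, MACH_PORT_NULL);
  }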

I am not sure if you really want completely passive objects.  But I
haven't thought about it much.  Maybe I am misunderstanding something.
In the Hurd, objects are active.  Do you plan to change that?

In L4, a thread will donate its current timeslice to the receiving
thread at IPC.  But at the next preemption the server thread will only
be scheduled if, well, it is scheduled.  Not when the client thread is
scheduled.  There is no priority inheritance for the whole operation,
AFAIK.

Thanks,
Marcus




