
Re: The Hurd: what is it?

From: Sergio Lopez
Subject: Re: The Hurd: what is it?
Date: Wed, 9 Nov 2005 19:07:50 +0100
User-agent: Mutt/1.4.1i

On Wed, Nov 09, 2005 at 04:53:13PM +0100, Marcus Brinkmann wrote:
> At Wed, 09 Nov 2005 14:15:30 +0100,
> Sergio Lopez <koro@sinrega.org> wrote:
> > I've searched many times through the mailing lists, and I didn't find
> > a complete and rational discussion of the design issues of Mach/Hurd.
> > Perhaps it could be a good idea to start such a discussion now; probably
> > both l4-hurd and hurd would benefit from it.
> > 
> > If you feel like this is not the right time for that, could you please
> > point me to that technical documentation? (That would be very helpful
> > for me :-)
> You want a _complete_ discussion?  Man, you are brave :)
> For defects in Mach, try:
> http://srl.cs.jhu.edu/courses/600.439/ford94evolving.pdf
> http://srl.cs.jhu.edu/courses/600.439/impact-os-mem.pdf
> http://www.l4ka.org/publications/1996/ukernels-must-be-small.pdf
> For hints on what a better kernel can look like:
> http://srl.cs.jhu.edu/courses/600.439/ukernel-construction.pdf
> http://srl.cs.jhu.edu/courses/600.439/liedtke93improving.pdf
> For problems with multiple personalities on top of Mach:
> http://srl.cs.jhu.edu/courses/600.439/failure-to-generalize.pdf
> http://srl.cs.jhu.edu/courses/600.439/ExperienceMicrokernelBasedOS.pdf
> Note that this work (IBM Workplace) already includes the work done by
> Liedtke as far as it can be applied to Mach with only a few
> architectural changes.  This sheds some light on the prospect of
> incremental improvements.
> For problems with Mach's external pager interface, for example:
> http://citeseer.ist.psu.edu/hand99selfpaging.html

Thanks, I've already read most of them, but I didn't know about the
selfpaging one. Indeed, we're currently working with the first one,
"Evolving Mach to a Migrating Thread model". 

On the other hand, I don't think the IBM case is applicable to us,
since their objectives are far different from ours.

> For problems with the Hurd passive translator design:
> http://lists.gnu.org/archive/html/l4-hurd/2005-10/msg00081.html
> Active translators also must be considered harmful:
> http://lists.gnu.org/archive/html/hurd-devel/2003-10/msg00002.html
> (There is not a complete explanation of all the possible problems, but
> there are many examples, please use this as an opportunity for an
> exercise---check out how Linux FUSE does it).

This is an interesting thing to discuss. Do you really think this
can't be solved within our current implementation?

> If you are looking at the actual Hurd implementation, you can find
> plenty of denial of service attack possibilities, as well as denial of
> resource attack possibilites.  No need to even try to enumerate them
> all.  Note the absence of a quota system.  Note the absence of
> feasibility to implement a quota system in such a multi-server system,
> without some ground-breaking architectural changes.

Mach knows about almost every resource allocation that the servers make, so I
don't think it will be extremely hard to solve this without completely
breaking our current design.

> Just as an illustration: The number of worker threads in the system
> per server is unlimited.  It's not unusual for ext2fs to create 2000
> threads on page pressure, because Mach swaps them out individually.
> You can throttle, but not limit the number, because of the possibility
> for deadlocks.  There are many things wrong with that, starting from
> the simple fact that the number of threads in a server should be a
> function of the server design, and not of the number of users or
> system load.

This issue will be completely solved by implementing Migrating Threads in
GNU Mach, which is on the way. Other work, such as partly reducing the IPC
semantics or copying via temporary mapping, can also be done to bring
performance to a reasonable level.


Sergio Lopez
