

Re: Linus replies

From: Richard Braun
Subject: Re: Linus replies
Date: Fri, 12 May 2006 12:14:55 +0200
User-agent: Mutt/1.5.9i

On Thu, May 11, 2006 at 09:45:01PM +0200, Martin Schoenbeck wrote:
> >That's where the Hurd slightly differs from what is said here: the
> >problem with a "decision that spans more than one entity" is the
> >interface between those entities. Within a monolithic kernel, these
> >interfaces can easily change since they are internal to the kernel
> First, that is a disadvantage in my opinion; second, if you have write
> access to all the software implementing this with a microkernel, you can
> change the interface between components as easily as with monolithic kernels.

I agree that it is a disadvantage, but it makes things simpler to change,
since a modification on one side doesn't require the cooperation of
someone else on the other side of the interface.

Concerning the second point, it isn't so obvious. Some interfaces are
shared between lots of clients and servers. If you change io_read,
io_write, etc. on the Hurd, you can't simply change them inside the
Hurd code itself, since almost every Hurd translator uses them. It
means that interfaces must be carefully designed, and any change that
would affect programs outside the system code (kernel + servers) has to
be handled carefully, with a good level of communication among
developers. This is of course a good thing, but a bit more complicated
to achieve than simply changing some interfaces inside a monolithic
kernel, since internal interfaces are generally not exposed through the
kernel/userspace interface.

> >(except
> >of course system calls and the interface with userland processes). With a
> >multiserver microkernel, they must be very carefully designed
> that's a very big advantage and leads to less maintenance effort.

I agree, but it means more effort at the beginning.

> >Microkernels are not simpler, they are
> >more complicated. 
> I don't believe that. We are maintaining the L3 with two people now
> (and of course do many other things, too). While the L3 probably
> couldn't be called a real microkernel, it embodies most of the design
> principles of the L4. Most of the maintenance effort concerns
> exactly the part of L3 that Jochen Liedtke later decided to remove
> from the kernel for L4: the backing storage management and the memory
> management.

Creating good interfaces, like the L4 memory management system calls, is
a difficult task. You have to take many variables into account: use
cases, portability, performance, and so on. This kind of system call is
hard to change, because it shouldn't change in a way that forces
developers of userland memory managers to redesign their software. So
I maintain that microkernels are more complicated by design. This
doesn't mean it's a bad thing :-). As Thomas Bushnell wrote, the fact
that something is more difficult to do doesn't mean we shouldn't try to do it.

> >But they allow simpler development of new components,
> >and this is clearly true on the Hurd, since using software like the
> >well known glibc or gdb helps a *lot* when debugging new translators
> >(even if glibc and gdb sometimes have bugs too, which makes debugging
> >a little harder ;-)).
> And because most pieces of a microkernel-based OS _are_ such components,
> it makes the whole development process much easier.

Agree :-).

> >Concerning stability, excluding the kernel and critical services, any
> >process can crash without crashing the system. The issue concerns
> >processes that hang, and makes related processes hang too, but there
> >are solutions to that as well.
> And that eases development, too. I did a USB implementation for
> the L3 and I can't remember a single situation where I had to press the
> reset button to recover from that driver hanging.

I was referring to the cancellation issue discussed on this mailing list
a few weeks ago. Of course it's better to have userland processes hang
than kernel threads, but there could be blocked threads in servers that
would remain blocked even after the faulty server is cleanly killed. The
example I had in mind was a stack of servers, say file system servers,
with one of the lower-level servers hanging, and higher-level servers
blocking because of that.

Richard Braun

