From: Kim Hawtin
Subject: Re: [Gnewsense-dev] The Linux kernel, no longer as monolithic as it once was?
Date: Wed, 04 Aug 2010 15:12:11 +0930
User-agent: Thunderbird 2.0.0.24 (X11/20100317)

Karl Goetz wrote:
On Sun, 01 Aug 2010 23:43:52 +0300
Niklas Cholmkvist <address@hidden> wrote:
By reading the Wikipedia article about udev, the device manager which
"executes entirely in user space", and also reading somewhere else that
it is used instead of devfs' kernel space, does it mean that the
Linux kernel is no longer as "monolithic" as it once was?

Andy Tanenbaum[0] said so [1], but who'd listen to him about kernels? ;)

[0] http://en.wikipedia.org/wiki/Andrew_S._Tanenbaum
[1] 200MB
http://mirror.linux.org.au/pub/linux.conf.au/2007/video/wednesday/Wednesday_0900_Tanenbaum.ogg

Andrew was fishing for postgraduate students ;)
He is the leading academic authority in micro kernel research.
Has some great ideas. No really! Some cool stuff.

The same ideas are equally important for applications and network services.
Perhaps more so in the server space.
Monolithic services have the same kinds of issues as monolithic kernels; see Apache...

Just a curious question on how to name things.

Back in the day, say the late '80s and early '90s, there wasn't much in the way of dynamic libraries, and kernel modules were unheard of.

Much work was done at DEC and Sun on dynamic libraries for their own Unix systems. This, from memory, was primarily brought about by a change in CPU architectures: life is much easier if you don't have to relink the whole kernel just to add a new device. The turnaround from customers ordering kit to having it in place and working needed to get shorter! Remember, this was the era of three months just to ship an empty box from vendors such as DEC and Sun. IBM was worse.

Yes, you had to relink the kernel! No, you didn't have the source either!
From the large list of .o files, you statically linked together all the drivers for all the devices in your system. Adding a new disk? Edit the config file, relink, reboot!

Obviously this was inconvenient, but you didn't have a lot of choice if you wanted the support contract honored. Often the Sun field engineer did this for you when they installed the new disk.

Wind on down the road to about 1993: Linux took as long to compile on a 486DX33 with 8MB RAM as it took to link the kernel on a Sun 4/490 with 64MB. Linux was monolithic; everything was linked into the one kernel, and you removed drivers from the kernel config to use less RAM! Adding a NIC? Then uncomment the driver, recompile, install and reboot. Not so different from a real UNIX system, really. In early 1993 this took about 25-35 minutes, depending on the kernel config, on my 486DX33 with 8MB RAM and a 120MB 5400rpm IDE disk. The kernel version was 0.99pl9.

Roll on the change from 1.3.133 to 2.0 (or was that 2.0 -> 2.2?) and we got a new subsystem in the Linux kernel: loadable modules. You could now load a module containing the driver for a device, on demand. This went hand in hand with dynamic libraries: you only used RAM when you needed it, boot times dropped, etc, etc...
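
For anyone who hasn't poked at one, here's a minimal sketch of what a loadable module looks like against the current module API (the 2.0-era interface differed in the details, and the hello_* names are just made up for the example):

/* hello.c - minimal sketch of a loadable kernel module */
#include <linux/init.h>
#include <linux/module.h>
#include <linux/kernel.h>

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Minimal example module");

/* runs when the module is loaded (insmod/modprobe) */
static int __init hello_init(void)
{
        printk(KERN_INFO "hello: module loaded\n");
        return 0;
}

/* runs when the module is unloaded (rmmod) */
static void __exit hello_exit(void)
{
        printk(KERN_INFO "hello: module unloaded\n");
}

module_init(hello_init);
module_exit(hello_exit);

Build it against your kernel's build tree, insmod it when you need it, rmmod it when you don't; the RAM is only used while the module is loaded.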

Now to put things in perspective, Windows NT was the new architecture! It followed a set of ideals that were first dreamed up in research labs with guys on tiny research budgets and crappy hardware ;)

NT was going to be an all-singing, all-dancing micro kernel, and it would load into RAM only the modules it actually needed.

How did that go in the end? NT could swap out its disk driver ;)
Anyhow, the video driver was moved to live in ring 0.

What does that mean? Blue screen central.
Why? Who writes random video card drivers?
Exploits? Many!

So after all this time, NT went monolithic and Linux is getting closer to what the micro kernel was meant to deliver, to a degree.

What does this all mean in the big scheme of things? Not a lot, although it pays to learn from history. Things change slowly over time. Research groups do interesting projects from scratch and build up a great set of theories; we apply as much of them as we can. Some work in the real world, some don't.

Monolithic kernels trade security away for speed and memory usage.

Micro kernels trade speed away for compartmentalization.

Is Linux a micro kernel? No.

At least not on a micro scale; perhaps on a macro scale it is nowadays.
However, it has loadable and unloadable kernel modules and dynamic libraries. Very handy, although mostly transparent, thankfully.
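
The dynamic library side of that works much the same way from user space. Here is a toy sketch using the POSIX dlopen interface (libm and cos are just convenient stand-ins; build with cc -ldl):

/* dlopen_demo.c - load a shared library at run time instead of link time */
#include <stdio.h>
#include <stdlib.h>
#include <dlfcn.h>

int main(void)
{
    /* nothing from libm is mapped into the process until this call */
    void *handle = dlopen("libm.so.6", RTLD_LAZY);
    if (!handle) {
        fprintf(stderr, "dlopen: %s\n", dlerror());
        return EXIT_FAILURE;
    }

    dlerror();  /* clear any old error before the lookup */
    double (*cosine)(double) = (double (*)(double))dlsym(handle, "cos");
    const char *err = dlerror();
    if (err) {
        fprintf(stderr, "dlsym: %s\n", err);
        dlclose(handle);
        return EXIT_FAILURE;
    }

    printf("cos(0.0) = %f\n", cosine(0.0));

    dlclose(handle);  /* unmap it again when done */
    return EXIT_SUCCESS;
}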

Which is the right tool for the job? um ... you need to evaluate that.

What does compartmentalization give you?
 Security, to an extent.
 Replaceable modules.
 A smaller memory footprint.
 Semaphore and locking hell.
 Process distribution across non-symmetrical CPUs and hosts (clustering).
 It makes message passing a more desirable model (see the sketch after this list).
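
To make the message passing point concrete, here is a toy sketch of two processes talking over a pipe; in a micro kernel the "processes" would be servers (drivers, filesystems) exchanging messages like this instead of sharing kernel state (the "read block 42" request is obviously made up):

/* msg_demo.c - toy message passing between a client and a server process */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fd[2];  /* fd[0] = read end, fd[1] = write end */
    if (pipe(fd) == -1) {
        perror("pipe");
        return EXIT_FAILURE;
    }

    pid_t pid = fork();
    if (pid == -1) {
        perror("fork");
        return EXIT_FAILURE;
    }

    if (pid == 0) {  /* child: the "server" */
        char buf[64];
        close(fd[1]);
        ssize_t n = read(fd[0], buf, sizeof(buf) - 1);
        if (n > 0) {
            buf[n] = '\0';
            printf("server got request: %s\n", buf);
        }
        close(fd[0]);
        return EXIT_SUCCESS;
    }

    /* parent: the "client" sends a request rather than poking at the
       server's data structures directly */
    const char *msg = "read block 42";
    close(fd[0]);
    write(fd[1], msg, strlen(msg));
    close(fd[1]);
    wait(NULL);
    return EXIT_SUCCESS;
}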

The most important thing to take away from the monolithic vs micro model is that it's about breaking the tasks up into smaller and smaller components. CPUs that support multithreading, see Sun's CoolThreads, are very handy in this space. If you have a single large long-running process, then threads are not going to help much.
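
As a trivial illustration of breaking a task up, here's a sketch that splits one big loop across POSIX threads (the thread count and slice sizes are arbitrary); leave it as one monolithic loop and the extra hardware threads just sit idle:

/* threads_demo.c - split one large job into slices; cc threads_demo.c -lpthread */
#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4
#define N 1000000

struct slice { long start, end; long long sum; };

/* each worker sums one slice of the range */
static void *worker(void *arg)
{
    struct slice *s = arg;
    s->sum = 0;
    for (long i = s->start; i < s->end; i++)
        s->sum += i;
    return NULL;
}

int main(void)
{
    pthread_t tid[NTHREADS];
    struct slice s[NTHREADS];
    long long total = 0;

    /* break the single large task into NTHREADS smaller ones */
    for (int i = 0; i < NTHREADS; i++) {
        s[i].start = (long)i * (N / NTHREADS);
        s[i].end   = (long)(i + 1) * (N / NTHREADS);
        pthread_create(&tid[i], NULL, worker, &s[i]);
    }
    for (int i = 0; i < NTHREADS; i++) {
        pthread_join(tid[i], NULL);
        total += s[i].sum;
    }
    printf("sum of 0..%d = %lld\n", N - 1, total);
    return 0;
}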

regards,

Kim


