
Revisited: Hurd on a cluster computer

From: Mark Morgan Lloyd
Subject: Revisited: Hurd on a cluster computer
Date: Sat, 20 May 2017 12:18:01 +0000

Please excuse my raising my head above the parapet; most of the time I'm a lurker, but I prefer the idea of a robustly-partitioned system, and I think the industry-wide events of the last week or so reinforce that.

>> [Brent said] The payoff is a supercomputer operating system that
>> presents an entire cluster as a single POSIX system with hundreds
>> of processors, terabytes of RAM, and petabytes of disk space.

> [Richard replied] Most attempts in the past have failed. It seems
> better to build specialized cluster computers on top of local
> operating systems. Look for "single system image" on a search engine
> for projects with this goal.

Looking at this from an historical POV, I think there's another approach: process migration without an SSI. Specifically, I'd highlight MOSIX and its derivatives.

Unlike e.g. Amoeba, which I believe worked by locating a system with spare capacity and starting a new process there, MOSIX worked by starting a new process on the local computer and then migrating it to another system, retaining local stubs for communication with the kernel. A process could be moved multiple times to track spare capacity, but it continued to talk to its original kernel via the stubs.
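For readers unfamiliar with the scheme, the home-node stub arrangement can be caricatured in a few lines. This is purely an illustrative sketch, not MOSIX code (the real thing lived inside the kernel); all class and method names here are my own invention:

```python
# Toy sketch of MOSIX-style migration: the process body may run on any
# node, but every kernel interaction is forwarded back to a stub (the
# "deputy") left behind on the home node. Names are illustrative only.

class HomeNodeStub:
    """Deputy on the originating node; it owns the process's
    kernel-visible state (here, just a table of open files)."""
    def __init__(self, pid):
        self.pid = pid
        self.open_files = {}

    def syscall(self, op, *args):
        # Every "system call" from the remote body lands here.
        if op == "open":
            fd = len(self.open_files)
            self.open_files[fd] = args[0]
            return fd
        if op == "read":
            return f"<data from {self.open_files[args[0]]}>"
        raise NotImplementedError(op)

class MigratedProcess:
    """User-level body; it may hop between nodes, but always talks
    to its home-node stub for kernel services."""
    def __init__(self, stub):
        self.stub = stub
        self.node = "home"

    def migrate(self, node):
        self.node = node  # the body moves; the stub stays put

    def run(self):
        fd = self.stub.syscall("open", "/etc/hostname")
        return self.stub.syscall("read", fd)

stub = HomeNodeStub("pid-1234")
proc = MigratedProcess(stub)
proc.migrate("node-7")   # may move repeatedly to track spare capacity
print(proc.run())        # kernel state still lives on the home node
```

The point of the caricature is the asymmetry: migration moves only the body, so the home node remains a single point of dependency for I/O and a potential bottleneck, which is exactly why processes remained sensitive to their home system dying.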

I had it running something like 15 years ago, and found it robust enough that if a collaborating system were booted during e.g. a kernel compilation, work would immediately migrate onto it with no user involvement at all. It was, obviously, sensitive to systems dying without due warning: there was no checkpointing or program restart. But even as it stood it was one of the more impressive things I've seen in the industry.

I believe it was originally research with an anticipated commercial spinoff. It was open-sourced as OpenMOSIX, and later renamed LinuxPMI (Process Migration Infrastructure). I put enough time into it a couple of years ago to determine which versions of the Linux kernel and compiler it was compatible with (roughly speaking, Debian "Lenny"), and I don't think anybody has done much more: more than anything else, Linux is too much of a moving target for something which has fallen behind to ever catch up, and its monolithic architecture probably doesn't help.

Apart from that, the known problems were that it relied on kernel extensions written in assembler, it had no negotiation to ensure that collaborating systems were binary-compatible, it had no authentication or capability tracking, and it was probably not friendly to applications which use shared memory for their IPC.

Finally, what is the Hurd portability situation? Way back I worked on a microkernel in '386 protected mode that used segmentation heavily; am I correct in assuming that that sort of thing is completely deprecated in the interest of portability?

When I were a lad we used logic analysers to debug our code...

Mark Morgan Lloyd
markMLl .AT. telemetry.co .DOT. uk

[Opinions above are the author's, not those of his employers or colleagues]
