l4-hurd

Re: Booting


From: Bas Wijnen
Subject: Re: Booting
Date: Sat, 08 Jan 2005 20:41:57 +0100
User-agent: Mozilla Thunderbird 0.9 (X11/20041124)

Marcus Brinkmann wrote:
> At Fri, 31 Dec 2004 00:43:13 +0100,
> Bas Wijnen <address@hidden> wrote:

>> Currently my system boots until task. I still haven't tried Marcus' deva things. Anyway, I was wondering what more is needed to boot the system.

>> I would expect that after task, a ramdisk can be used as rootfs, from which deva can be started.


> The idea is that the module after deva is the deva initial driver
> archive, which is a bit like linux's initrd.  It would contain all
> drivers needed before the root filesystem is up.

This suggests that a root filesystem cannot run without deva. I understand that this is true for a root filesystem which needs to access hardware such as a harddisk, but a ramdisk should run fine without deva, so it should be no problem to launch it directly after task. It should also be the last thing that is started by wortel. This ramdisk can then start deva and rootfs (it is itself the rootfs for all deva processes, but obviously there should be a different rootfs for the rest).

> I always planned to use a fake driver in there which would take its
> filesystem data from the initial driver archive itself.  OTOH, a
> simple IDE driver doesn't seem too complicated after all, so that is
> also a possibility.


>> I know the idea is to use a working deva for starting rootfs, but that would put limits on the deva boot (with a root fs, it can consist of normal processes, as they have everything they need: physmem, task, and rootfs). Since a ramdisk wouldn't need any device drivers, I don't see a problem with it.


> I don't understand the beginning of the sentence.

I'll try to explain.

> The root filesystem
> server would be started by wortel, and is passed as the last module in
> the grub configuration.

That's what I propose to change: the ramdisk, not deva, should be the last module. deva is started by the ramdisk, not by wortel. The rootfs is also started by the ramdisk. And I think it should start the first process as well; there is no reason to let rootfs do that. So in fact it is a bit more than just a ramdisk: it also starts some tasks, although technically that part could still be done by wortel. The only essential part of it is the driver archive, because it is used as the root filesystem for deva.
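To make the order I have in mind concrete, here is a very rough sketch. Everything in it (the names, the way tasks are started) is invented just for illustration; it is not real code:

  /* Hypothetical sketch of the bootstrap ramdisk task I am proposing.
     Assumed GRUB module order (roughly): the L4 kernel and its loader,
     then wortel, physmem, task, and this ramdisk image as the last
     module.  wortel starts physmem, task and the ramdisk; the ramdisk
     then starts everything else.  The stubs below only print what the
     real code would do.  */

  #include <stdio.h>

  /* Stand-in for the real spawning code, which would use task and
     physmem RPCs to create the new task and hand it its root
     filesystem capability.  */
  static void
  start_task (const char *name, const char *rootfs)
  {
    printf ("would start %s with root filesystem %s\n", name, rootfs);
  }

  int
  main (void)
  {
    /* 1. Serve the driver archive linked into this image as a
          read-only root filesystem -- the "ramdisk" part.  */
    printf ("serving the driver archive as bootstrap rootfs\n");

    /* 2. Start deva; its drivers come from the archive we serve.  */
    start_task ("deva", "ramdisk");

    /* 3. Start the real root filesystem, which talks to deva for its
          backing store (an IDE driver, for instance).  */
    start_task ("rootfs", "ramdisk");

    /* 4. Start the first real process.  After this the ramdisk is only
          still needed as deva's root filesystem.  */
    start_task ("init", "rootfs");

    return 0;
  }

Of course in reality each step would have to wait for the started task to announce itself, but that is the shape of it.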

One good thing about not using wortel for this is that the bootstrap code doesn't stay in memory when booting has completed. In fact, it might also be possible to let the ramdisk start task (but physmem is more tricky, I guess, as the ramdisk cannot give wortel's memory to it).

> The driver archive for deva should only
> contain deva drivers, which are hurd-independent.

Ah, I didn't make this distinction; I expected them to be in a directory on the ramdisk. So still separated from other things, but not in their own module.

> The root filesystem
> is a hurdish server task and thus outside of the deva archive.  But it
> was my plan to make it use a deva ramdisk driver.

Maybe we mean different things with ramdisk.

What I mean by a ramdisk is a process which can be used as a root filesystem for another process, and which does not need hardware drivers. While I am thinking of this as a traditional ramdisk, it can be anything as long as it doesn't need deva to run. :-)

Well, I see one problem: the ramdisk is only used for bootstrapping, and should be disposed of when the system is running. So it would be useful to have a "change root fs" method at least for deva and the first device drivers (particularly harddisk and the "real" rootfs).
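Something like the following is what I mean by a "change root fs" method; the name and signature are completely made up, just to show the idea:

  /* Hypothetical interface only -- nothing like this exists.  A task
     that was started with the bootstrap ramdisk as its root filesystem
     (deva, the first drivers) could later be told to use the real root
     filesystem instead, so the ramdisk's memory can be reclaimed.  */

  typedef unsigned int hurd_cap_t;  /* placeholder for a capability */

  /* Ask TASK to drop its current root filesystem and use NEW_ROOT from
     now on.  Returns 0 on success, an error code otherwise.  */
  int change_root_fs (hurd_cap_t task, hurd_cap_t new_root);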


> The module after deva is the deva driver archive, which contains
> initial drivers that may be needed to get at the real root filesystem
> (ie, /dev/hda5 or whatever).  The filesystem server would be
> configured to access a store (-T device:hda5 for example, or -T
> part:5:device:hda) and communicate with deva to get at the raw storage
> data.

I don't see what this has to do with what I wrote; you probably didn't understand what I was saying (which I guess is my fault). I was just addressing a minor (and currently quite unimportant) problem, assuming that we want to free the memory used for booting when booting has completed.
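(As an aside, my understanding of such a store specification is simply that one store class is stacked on another, e.g. a partition store on top of a device store. A made-up illustration of that layering, not the real libstore code:)

  /* Hypothetical illustration of a stacked store specification such as
     "part:5:device:hda": a partition store layered on a device store,
     where the device store is the one that would talk to deva.  */

  #include <stdio.h>
  #include <stddef.h>

  struct store
  {
    const char *class_name;   /* e.g. "device" or "part" */
    const char *name;         /* e.g. "hda" or "5" */
    struct store *below;      /* the store this one is stacked on */
  };

  int
  main (void)
  {
    struct store device = { "device", "hda", NULL };
    struct store part = { "part", "5", &device };

    for (struct store *s = &part; s != NULL; s = s->below)
      printf ("%s:%s\n", s->class_name, s->name);
    return 0;
  }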

> My idea was to initially have a deva driver which uses a file in the
> driver archive as storage data, which would be loaded into memory.
> But an IDE driver is also possible.

An IDE driver would be nice, because I don't like the idea of having a lot of ramdisk contents (the driver archive or anything else) which need to be rebuilt in their own special way whenever anything inside them changes.

Am I right that "normal" processes can be started when the root fs is ready? Can libc already be compiled? (I expect every normal process to need it.)

> The root filesystem server itself will be started by wortel just as
> deva is, but with even more bootstrap information (the device master
> port, for example).  This is a straightforward addition, and Neal
> already has patches for that I believe.

> Then the rootfs will be responsible for the actual hurd bootstrap,
> just as we have it today: Loading and running init, auth, proc, etc.
> These are normal processes, yes, but there is still some bootstrap
> magic, as for _really_ normal processes you would expect auth, proc
> etc to already run.  But the amount of magic is relatively small and
> we already know what it is, as it is the same in the current Hurd on
> Mach.

After thinking a bit more about it, I meant "processes which use libc" when I wrote "normal processes". Since libc will need physmem and task (for malloc and fork, for example), those two servers cannot use it themselves. It probably also needs a root filesystem, although perhaps it could even work without one (but that would make using libc as a shared library impossible, and is therefore highly undesirable).

The reason I was thinking of this alternative boot-up method is that I would prefer to use libc for as many processes as possible, because not using it makes them harder to write. However, there are other things to consider, such as using the hard disk as soon as possible (so we rarely, or even never, need strange things like prepared ramdisks).

> Then I can get back to glibc, and see that I can get the startup for
> statically linked programs working.

You mean fork(), or execve()?  Or a combination?
Is the protocol (especially the communication with task) already determined?
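(To make sure we are talking about the same thing, this is roughly what I imagine the minimal startup for a statically linked program looks like. Every name below is invented; it is not the real glibc code:)

  /* Hypothetical sketch: the bare minimum to reach main() and exit
     again.  A real startup would additionally have to contact physmem
     and task (for malloc, fork, ...) and the root filesystem before
     handing control to the program.  */

  extern int main (int argc, char **argv, char **envp);

  /* Assumed to exist: some way for the task to terminate itself with
     an exit status, presumably an RPC to task.  */
  extern void bootstrap_task_exit (int status) __attribute__ ((noreturn));

  void
  bootstrap_start (int argc, char **argv, char **envp)
  {
    /* Initialization of malloc, stdio and the RPC stubs would go
       here.  */
    bootstrap_task_exit (main (argc, argv, envp));
  }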

> It's a pain in the ass, of
> course, and for a real functional library there is a lot of work to be
> done.  But I hope that with some simple stubs I can get it so much as
> to call main() and allow incremental progress.  This shouldn't be too
> far away, although it won't be the real thing.

If bash works, I'd be very happy already. :-) I have the feeling that that shouldn't be too far away, but I could be wrong about that.

> So, if you wanna help, there is a lot you can do, but you should wait
> just a few more days for Neal's work to get in, which should be by the
> weekend.  And then I will commit whatever glibc stuff I have which is
> not much, but offers endless opportunities to hack. :)

Good.  :-)

> Of course, you could also work on deva related stuff so we actually
> have some storage to run off.  That could be done right now
> independently of what we are doing - a simple implementation would use
> string items for reading/writing or so, and not containers.

I was thinking of writing the ramdisk thing I described above. I only need to know the interface it has to provide. Is it already clear how tasks will communicate with their filesystem? I guess this is in libc. Is the implementation currently used on Mach usable on L4, or is it Mach-specific?
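To have something concrete to discuss, this is the kind of interface I imagine the ramdisk offering to deva and the early drivers; all of it is made up:

  /* Hypothetical ramdisk interface.  In reality these would be RPCs,
     probably hidden behind libc's open()/read(); the declarations
     below only sketch the operations I think are needed.  The archive
     is read-only, so there is no write operation.  */

  #include <stddef.h>
  #include <sys/types.h>

  /* Opaque handle for a file in the driver archive.  */
  typedef int ramdisk_file_t;

  /* Look up PATH in the archive.  Returns a handle, or -1 if the file
     does not exist.  */
  ramdisk_file_t ramdisk_lookup (const char *path);

  /* Read up to LEN bytes from FILE starting at OFFSET into BUF.
     Returns the number of bytes read, or -1 on error.  */
  ssize_t ramdisk_read (ramdisk_file_t file, off_t offset,
                        void *buf, size_t len);

  /* Total size of FILE, so a client can load or map it whole.  */
  off_t ramdisk_size (ramdisk_file_t file);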

Thanks,
Bas

--
I encourage people to send encrypted e-mail (see http://www.gnupg.org).
If you have problems reading my e-mail, use a better reader.
Please send the central message of e-mails as plain text
   in the message body, not as HTML and definitely not as MS Word.
Please do not use the MS Word format for attachments either.
For more information, see http://129.125.47.90/e-mail.html


