l4-hurd

Filesystem mechanisms in the Hurd (on L4)


From: Paul Boddie
Subject: Filesystem mechanisms in the Hurd (on L4)
Date: Thu, 06 Dec 2018 00:14:32 +0100
User-agent: KMail/4.14.1 (Linux/3.16.0-7-586; KDE/4.14.2; i686; ; )

Hello again,

I guess that not much discussion takes place here any more, and maybe I should 
be posting to the normal Hurd development list instead, but I wondered to what 
extent filesystem abstractions were considered and developed for Hurd-on-L4 
and how they compared (or were meant to compare) with the equivalent code in 
Hurd-on-Mach.

I've been looking again at L4Re on Fiasco.OC, and I think I have sketched out
how I might provide filesystem interfaces. It should be noted that L4Re
already has the notion of a virtual filesystem, but as far as I can tell,
there are plenty of special cases involved, and each task just links to a
library that knows how to access a few specific kinds of file-like things.
These file abstractions are also not very rich, but they at least support
things like dynamic library loading, with programs and shared objects being
loaded from the "rom" filesystem, which is itself delivered as an ELF
payload.

I had a brief look at Minix 3, which has the notion of filesystem servers, and
such abstractions seem appropriate for L4Re. The challenge then is to find
ways of having application programs request file descriptors, access buffers 
exposing file data, perform operations on files, and for the allocated 
resources to be tidied up afterwards, even when an application exits in a 
disorderly way.

In L4Re, the dataspace concept is used to permit the sharing of memory between 
tasks. Such dataspaces are requested by tasks and the associated memory is 
then mapped to an accessible region within each task. In my filesystem 
architecture, a task obtains a dataspace and associated mapped memory and 
shares it with a filesystem server, thus enabling the server to write file 
data into this shared buffer. The act of sharing is an interprocess message or 
call from the application task to the filesystem server.
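
For concreteness, here is a minimal sketch of that client-side step, assuming
the L4Re C++ API roughly as of this writing (flag and helper names have moved
around between L4Re versions):

#include <l4/re/env>
#include <l4/re/mem_alloc>
#include <l4/re/rm>
#include <l4/re/dataspace>
#include <l4/re/util/cap_alloc>

/* Allocate a dataspace of the given size and map it into this task's
   address space, returning the attached address (0 on failure). */
static l4_addr_t allocate_buffer(L4::Cap<L4Re::Dataspace> &ds,
                                 unsigned long size)
{
  ds = L4Re::Util::cap_alloc.alloc<L4Re::Dataspace>();
  if (!ds.is_valid())
    return 0;

  if (L4Re::Env::env()->mem_alloc()->alloc(size, ds) < 0)
    return 0;

  l4_addr_t addr = 0;
  if (L4Re::Env::env()->rm()->attach(&addr, size,
                                     L4Re::Rm::Search_addr, ds) < 0)
    return 0;

  return addr;
}

The resulting ds capability is what gets handed to the filesystem server in
the open operation, so that both tasks end up with the same memory mapped.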

When a server receives a request to open a file, being handed a buffer and a
filesystem path (actually delivered in the as-yet unused buffer), the server
will create a new local object to deal with subsequent operations on the file
(and I also create a dedicated thread for this). It also exposes this object
as a separate entity or endpoint, passing this new endpoint back to the
application task. The file descriptor retains this reference or "capability"
for as long as the file is open.
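
Expressed with L4Re's C++ RPC framework, that open operation might be
declared roughly as follows; the Filesystem interface and its method names
are hypothetical, not anything L4Re itself defines:

#include <l4/sys/capability>
#include <l4/sys/cxx/ipc_iface>
#include <l4/re/dataspace>

/* Hypothetical interface: the client hands over the shared buffer (with
   the path already written into it) and receives a capability to the
   newly created per-file object in return. */
struct Filesystem : L4::Kobject_t<Filesystem, L4::Kobject>
{
  L4_INLINE_RPC(long, open, (L4::Ipc::Cap<L4Re::Dataspace> buffer,
                             int flags,
                             L4::Ipc::Out<L4::Cap<void> > file));
  typedef L4::Typeid::Rpcs<open_t> Rpcs;
};

On the server side, each open call would then construct the local object,
register it to obtain a fresh IPC gate, and return that gate's capability in
the file output argument.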

With a shared buffer allocated for use by the file descriptor, operations such 
as reading, writing, seeking, and so on, are performed locally in an 
application task if the buffer can support them, with interprocess 
communication occurring when the buffer needs to reference other regions of 
the accessed file. Certain pieces of state need to be synchronised between
client and server, such as the limits of the available data within the buffer
and the position of that data within the file as a whole.
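
One simple way to keep that state synchronised is a small header at the
start of the shared buffer itself; the following layout is purely
illustrative:

#include <l4/sys/types.h>

/* Illustrative header at the start of the shared dataspace, interpreted
   by both the client library and the filesystem server. */
struct shared_file_state
{
  l4_uint64_t file_offset; /* file position of the start of the window */
  l4_uint32_t data_start;  /* offset of the first valid byte of data */
  l4_uint32_t data_end;    /* offset just past the last valid byte */
  /* file data follows, up to the end of the dataspace */
};

A client-side read can then be satisfied locally whenever the requested range
falls within [data_start, data_end), with IPC to the server needed only to
move the window elsewhere in the file.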

Ultimately, files need to be closed even when an application task goes away 
without requesting their closure. In L4Re, this is done by allowing the 
filesystem server to monitor the deletion of references to the endpoints it 
has created. When an application task closes a file or just quits, it will 
throw away its reference (capability) to the file's own server object. With 
this reference eliminated, the reference count maintained by the kernel will 
decrease to zero, and an interrupt condition can be generated to notify the 
server task. Upon receiving such a condition, the server may then delete the 
file-specific local object and clean up the shared buffer still held by the 
server.
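
Here is a minimal sketch of arming that notification, assuming Fiasco.OC's
deletion-IRQ mechanism as exposed through L4::Thread::register_del_irq;
dispatching from the triggered IRQ back to the particular per-file gate (for
instance via gate labels) is left out:

#include <l4/re/env>
#include <l4/re/util/cap_alloc>
#include <l4/sys/factory>
#include <l4/sys/irq>
#include <l4/sys/thread>

/* Create a software IRQ and register it for deletion events: the kernel
   triggers it when an IPC gate bound to this thread loses its last
   capability reference. */
static L4::Cap<L4::Irq> arm_deletion_notification()
{
  L4::Cap<L4::Irq> irq = L4Re::Util::cap_alloc.alloc<L4::Irq>();
  L4Re::Env::env()->factory()->create(irq); /* older trees: create_irq() */
  L4Re::Env::env()->main_thread()->register_del_irq(irq);
  return irq;
}

The server loop then waits on this IRQ alongside its ordinary IPC traffic
and, when it fires, destroys the file-specific object and releases the
shared buffer.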

I don't actually know how close this is to the way the Hurd does such things,
nor how it relates to monolithic kernel architectures. But I did wonder
whether the approach sounds reasonable and which pitfalls I may have
overlooked. I intend to try to get this arrangement working with a proper
filesystem, as opposed to the "toy" filesystem I employ for testing, and
suggestions for approachable filesystem candidates would be welcome.

Thanks to anyone who might have any constructive feedback about this!

Paul


