
Contribution to the Hurd on L4

From: Matthieu Lemerre
Subject: Contribution to the Hurd on L4
Date: Mon, 27 Dec 2004 16:21:12 +0100
User-agent: Gnus/5.1007 (Gnus v5.10.7) Emacs/21.3 (gnu/linux)

Hello people,

I've been following Hurd development for a while now, and I would like
to contribute. First, I'd like to make sure I have understood some
things correctly, so I have some questions.

As I understand it, communication between the servers currently works
as follows:

-Communication with wortel is done via RPCs whose stubs are declared
 as inline functions in wortel/wortel.h

-The physmem_map RPC has its stub defined in physmem-user.c in the
 servers that need it

These two are done this way because the capability system cannot be
used yet, as the task server isn't running.

Further communication between tasks would rely on RPCs built on the
capability system.

However, I couldn't find where physmem responds to the physmem_map RPC.

I have a few questions, to make sure I have correctly understood the
code I've seen:

-What are the capability functions in wortel for? Are they only for
 finding, on behalf of the core servers, the initial capabilities to
 the other servers?

-Will the physmem_map RPC be replaced by the container interface, or
 will it always be needed for initial memory allocation by task and deva?

How will you prevent a server from doing a physmem_map RPC? Do you
check the task ID numerically, assuming that deva's and task's task
IDs are constant? (I haven't found where the reply to that RPC is
yet, and I didn't understand what the create_bootstrap_caps function
in physmem/physmem.c does...)

-I understand that we need a stub generator which would use the
 capability system for communication.

The stub generator would generate the manager function, which, on
multi-threaded servers, would create a worker thread executing the
function the user provides, conforming to the interface given to the
stub generator.

The stub generator would rely on hurd-cap-server for the server stub,
and on hurd-cap for the client ones.

Am I right?

-What is the root capability? It does not appear in the hurd-on-l4
 document. Is it the same as the master control capability, which I
 understand to be the right to perform any RPC on a server?

-If I've understood correctly, the things that must work to get the
capability system running are, in order:

1/The task server, necessary to acquire task info capabilities
2/Completing libhurd-cap-server to use the task server
3/Completing libhurd-cap to use the task server
4/Writing the stub code generator, although this is not necessary to
get communication between servers running.

Please tell me if and where I could help. If I'm not ready to write
code, maybe I can write some documentation.

I found some typos in the vmm.tex part of hurd-on-l4; I'm sending a
diff as an attachment.

Content-Type: text/x-patch
Content-Disposition: inline; filename=vmm.patch

--- vmm.bak     2004-12-19 17:50:33.000000000 +0100
+++ vmm.tex     2004-12-25 23:07:50.000000000 +0100
@@ -75,7 +75,7 @@
 kernel is the virtual memory subsystem: every component in the system
 needs memory for a variety of reasons and with different priorities.
 The system must attempt to meet a given allocation criteria.  However,
-as the kernel does not and cannot know how how a task will use its
+as the kernel does not and cannot know how a task will use its
 memory except based on the use of page fault statistics is bound to
 make sub-ideal eviction decisions.  It is in part through years of
 fine tuning that Unix is able to perform as well as it does for the
@@ -102,7 +102,7 @@
 requires a few bad decisions to destroy performance.  Thus, a new
 design can either choose to return to the monolithic design and add
 even more knowledge to the kernel to increase performance or the page
-eviction scheme can be remove from the kernel completely and placed in
+eviction scheme can be removed from the kernel completely and placed in
 user space and make all tasks self paged.
 \subsection{Following the Hurd Philosophy}
@@ -167,10 +167,10 @@
 process or it can steal memory from the first process and send it to
 backing store.
-One way to solve these problems is to have the VMM allocate phsyical
+One way to solve these problems is to have the VMM allocate physical
 memory and make applications completely self-paged.  Thus, the burden
-of paging lies the application themselves.  When application request
-memory, they no longer request virutal memory but physical memory.
+of paging lies the applications themselves.  When application request
+memory, they no longer request virtual memory but physical memory.
 Once the application has exhausted its available frames, it is its
 responsibility to multiplex the available frames.  Thus, virtual
 memory is done in the application itself.  It is important to note
@@ -274,7 +274,7 @@
 over commit the number of frames, i.e. the total number of guaranteed
 frames must never exceed the number of frames avilable for allocation.
-Until the memory policy server makes the intial contact with the
+Until the memory policy server makes the initial contact with the
 physical memory server, memory will be allocated on a first come first
 serve basis.  The memory policy server shall use the following remote
 procedure call to contact the physical memory server:
@@ -506,7 +506,7 @@
 pages from other tasks' extra frame allocations.
 The physical memory server may unmap pages at any time.  This allows
-the physical memory server to fucntionally lock the contents of the
+the physical memory server to functionally lock the contents of the
 frame and move it to a new physical frame.  As such, tasks must be
 prepared to reestablish a mapping with the physical memory server at
 anytime.  The physical memory server is not a registry of mappings: it
@@ -706,7 +706,7 @@
 leads to a further problem: a frame is really not evicted from the
 system until it is purged from all caches.  Thus if the file system
 cache is smart and chooses the better frames to evict, the
-cooresponding physical frames will not really be freed until the
+corresponding physical frames will not really be freed until the
 device driver also drops its references to the frames.  Thus, the
 effectiveness of the smarter caching algorithm is impeded by the
 device driver's caching scheme.  Double caching must be avoided.
@@ -806,7 +806,7 @@
 count, out [] swap\_ids)
-The swap server resides in (or is proxied by) the phsyical memory
+The swap server resides in (or is proxied by) the physical memory
 server.  This allows the logical copies of frames to be preserved
 across the swapped out period (i.e. logical copies are not lost when a
 frame is sent to swap).  If this was not the case, then when a number


Thanks a lot,
