Re: some other memory considerations.


From: Marcus Brinkmann
Subject: Re: some other memory considerations.
Date: Tue, 26 Oct 2004 21:25:02 +0200
User-agent: Wanderlust/2.10.1 (Watching The Wheels) SEMI/1.14.6 (Maruoka) FLIM/1.14.6 (Marutamachi) APEL/10.6 Emacs/21.3 (i386-pc-linux-gnu) MULE/5.0 (SAKAKI)

At Tue, 26 Oct 2004 12:13:56 -0400,
Rian Hunter <address@hidden> wrote:
> Ah, I see. Even if it was desired, how does a L4 task handle faults on 
> pages that were mapped by another arbitrary task, when it faults to its 
> pager?

If pages are mapped, you don't fault, because there is a mapping.
You'd only fault if the mapping is removed (i.e., the mapper, or
whatever task the memory was mapped from in the first place, unmaps
it).  In that case, a normal page fault is generated and delivered to
the pager thread.

> I'm assuming that the pagers are notified (either by the kernel 
> or in some user protocol) whenever maps are granted to its client, or 
> something like that.

No.  If you need to manage the mapping in any way, that is entirely
up to you.  For example, your RPC stub generator could register the
mapping, if there is any use for that (i.e., if it can be
re-established in case it goes away).  There is no L4 way to inspect
the page tables of a task or to get notified of changes.
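As an illustration of the kind of bookkeeping such a stub generator
could do, here is a sketch of a client-side mapping registry.  All the
names (register_mapping, request_remap, and so on) are invented for the
example; L4 itself provides nothing like this.

/* Hypothetical client-side registry: record every mapping the stub
   accepts so it can be re-established after it is unmapped.  */

#include <stdlib.h>
#include <stddef.h>
#include <stdint.h>

struct mapping_record
{
  uintptr_t local_addr;           /* where the map was installed */
  size_t size;                    /* length in bytes */
  unsigned source_task;           /* task the memory came from */
  uintptr_t remote_offset;        /* offset within the source object */
  struct mapping_record *next;
};

static struct mapping_record *mapping_list;

/* Called by the stub right after a map item has been accepted.  */
void
register_mapping (uintptr_t local_addr, size_t size,
                  unsigned source_task, uintptr_t remote_offset)
{
  struct mapping_record *r = malloc (sizeof *r);
  if (!r)
    return;                       /* best-effort bookkeeping only */
  r->local_addr = local_addr;
  r->size = size;
  r->source_task = source_task;
  r->remote_offset = remote_offset;
  r->next = mapping_list;
  mapping_list = r;
}

/* Hypothetical RPC asking SOURCE_TASK to map the region again.  */
extern int request_remap (unsigned source_task, uintptr_t remote_offset,
                          uintptr_t local_addr, size_t size);

/* Called from the fault handler when a registered mapping went away.  */
int
reestablish_mapping (uintptr_t fault_addr)
{
  for (struct mapping_record *r = mapping_list; r; r = r->next)
    if (fault_addr >= r->local_addr
        && fault_addr < r->local_addr + r->size)
      return request_remap (r->source_task, r->remote_offset,
                            r->local_addr, r->size);
  return -1;                      /* not a registered mapping */
}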

In the Hurd, it will work the other way around: you install mappings by
registering them with the pager, and the pager will take care of
actually performing the mapping.  As memory is only ever mapped
through the pager, it always knows what is happening in the task,
mapping-wise.
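A rough sketch of what the pager side of that scheme could look like
follows.  The RPC name pager_install_mapping and the do_l4_map helper
are invented for illustration and are not the actual Hurd/physmem
interface; the point is only that the pager records the region before
performing the map, so its table stays a complete picture of the
client's address space.

/* Hypothetical pager-side handler: every mapping goes through here.  */

#include <stdlib.h>
#include <stddef.h>
#include <stdint.h>

struct region
{
  uintptr_t vaddr;        /* client virtual address */
  size_t size;            /* length in bytes */
  unsigned mem_object;    /* backing memory object */
  uintptr_t offset;       /* offset within that object */
  int writable;
  struct region *next;
};

/* Per-client mapping database kept by the pager.  */
static struct region *regions;

/* Hypothetical helper that performs the actual L4 map operation.  */
extern int do_l4_map (unsigned client, uintptr_t vaddr, size_t size,
                      unsigned mem_object, uintptr_t offset, int writable);

/* RPC handler: the client asks its pager to install a mapping.  */
int
pager_install_mapping (unsigned client, uintptr_t vaddr, size_t size,
                       unsigned mem_object, uintptr_t offset, int writable)
{
  struct region *r = malloc (sizeof *r);
  if (!r)
    return -1;

  /* Record the region first, so later faults on this range can be
     answered and the map re-established if it ever goes away.  */
  r->vaddr = vaddr;
  r->size = size;
  r->mem_object = mem_object;
  r->offset = offset;
  r->writable = writable;
  r->next = regions;
  regions = r;

  /* Then actually perform the map on behalf of the client.  */
  return do_l4_map (client, vaddr, size, mem_object, offset, writable);
}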

> Anyway, I guess I see how mapping operations will work: through physmem. 
> Which does make loads of sense. But the HURD is bound to see lots and 
> lots of page operations and optimizations, so physmem will have to be 
> fiercely multi-threaded or fiercely well coded. Although I think this is 
> a concern for when physmem is actually mature and supporting a full system.

Right on all counts.

Thanks,
Marcus
