Re: Resolved: Unexpected page fault from 0xdc003 at address 0x??


From: Marcus Brinkmann
Subject: Re: Resolved: Unexpected page fault from 0xdc003 at address 0x??
Date: Wed, 27 Oct 2004 14:27:09 +0200
User-agent: Wanderlust/2.10.1 (Watching The Wheels) SEMI/1.14.6 (Maruoka) FLIM/1.14.6 (Marutamachi) APEL/10.6 Emacs/21.3 (i386-pc-linux-gnu) MULE/5.0 (SAKAKI)

At Wed, 27 Oct 2004 13:21:52 +0200,
Espen Skoglund <address@hidden> wrote:
> 
> [Marcus Brinkmann]
> > Well, first: On ia32, beside superpages, only 4KB pages are
> > supported.  I actually expect that physmem and the user-space pager
> > will only deal with individual pages, not with arbitrary fpages.
> > That I use arbitrary fpages in the startup code is more or less a
> > result of me reusing the code I used to map the whole address space
> > to physmem.
> 
> I would strongly suggest to reconsider using arbitrary sized fpages.

I have never thought much about how to do paging with arbitrary page
sizes.  There are certain issues that arise, mainly that the pager
must then decide when to break up a page and when not to, for
example:

Consider a write fault on a 64 MB page that is mmap'ed privately from
a file (copy-on-write).  The pager now can:

1. Ask for a 64 MB copy that it can map with write access, or:

2. Break up the page and map only a part of it with write access, and map
   in the rest read-only at the next read fault.

If the pager chooses option 1, it will probably come under page
pressure quickly and will need to break up the 64 MB page anyway and
swap it out (at least partially).  It will not be able to tell which
parts of it have actually been modified (the dirty bit is kept per
page), so it can never go back to partially sharing the memory with
the filesystem.

Doing 2 raises the question of how to break it up.  I guess you could
always use the smallest possible page size, and just coalesce later
if more write faults occur on neighbouring pages.  In the worst case,
with complete fragmentation, you are down to dealing with individual
pages of the smallest possible size, but in the common case, you
would be able to use bigger page sizes.  This form of supporting
arbitrary page sizes seems easy enough to implement, and it is
certainly worthwhile.  It's almost a no-brainer, and I'd fully expect
us to do that from the start.
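
To make this concrete, here is a rough sketch in C of what option 2
could look like.  All the types and helpers (vm_region,
allocate_frame, copy_from_backing_store, map_page) are made up for
illustration; they are not the actual physmem or pager interfaces.

#include <stdint.h>
#include <stddef.h>

#define MIN_PAGE_SIZE 0x1000          /* 4 KB, smallest page on ia32.  */

struct vm_region
{
  uintptr_t start;                    /* Virtual start of the mapping.  */
  size_t length;                      /* Length in bytes.  */
  unsigned char *is_private;          /* One flag per MIN_PAGE_SIZE page:
                                         has it been copied already?  */
};

/* Hypothetical primitives provided by physmem and the mapping layer.  */
extern void *allocate_frame (size_t size);
extern void copy_from_backing_store (void *dst, uintptr_t vaddr, size_t size);
extern void map_page (uintptr_t vaddr, void *frame, size_t size, int writable);

/* Handle a write fault at FAULT_ADDR inside REGION: copy and map
   writably only the smallest page containing the fault; the rest of
   the region stays read-only and shared with the filesystem until it
   is written to as well.  */
void
handle_cow_write_fault (struct vm_region *region, uintptr_t fault_addr)
{
  uintptr_t page = fault_addr & ~(uintptr_t) (MIN_PAGE_SIZE - 1);
  size_t index = (page - region->start) / MIN_PAGE_SIZE;

  if (!region->is_private[index])
    {
      void *frame = allocate_frame (MIN_PAGE_SIZE);
      copy_from_backing_store (frame, page, MIN_PAGE_SIZE);
      map_page (page, frame, MIN_PAGE_SIZE, 1);
      region->is_private[index] = 1;
    }
}

A later pass could then scan the is_private flags for suitably
aligned runs of copied pages and remap them as one larger fpage.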

But actually, when I think about supporting arbitrary page sizes
intelligently, I would expect the pager to sometimes map in a larger
page writably, in particular if there is a hint that, for example, we
are going to write to the page sequentially.  Doing this seems to be
a more difficult problem IMO, but also potentially more rewarding.
Some heuristic is then needed to tell the pager down to which page
size it should go when splitting up a given larger page.
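
Such a heuristic could be as simple as the following sketch; the
access hint and the size constants are assumptions of mine, not
anything we have defined so far.

#include <stddef.h>

#define MIN_PAGE_SIZE  0x1000        /* 4 KB, smallest page on ia32.  */
#define MAX_SPLIT_SIZE 0x400000      /* Never split coarser than 4 MB.  */

enum access_hint { HINT_NONE, HINT_RANDOM, HINT_SEQUENTIAL };

/* Decide how large a piece to copy and map writably on a write fault.
   A sequential writer gets a larger piece (up to what is left of the
   region); random access falls back to the smallest page size.  */
static size_t
split_size_for_fault (enum access_hint hint, size_t remaining)
{
  size_t size = MIN_PAGE_SIZE;

  if (hint == HINT_SEQUENTIAL)
    while (size < MAX_SPLIT_SIZE && size * 2 <= remaining)
      size *= 2;

  return size;
}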

On the physical memory server side, questions arise as well, in
particular about the cases in which it is a good idea to reorganize,
coalesce and align the memory used by its users.  Only that makes
mappings with larger page sizes possible in the first place.
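
For instance, before physmem can hand out a larger mapping at all,
the user's frames have to be physically contiguous and aligned.  A
toy check could look like this (assuming, purely for illustration,
that physmem tracks a user's memory as an array of physical frame
addresses):

#include <stdint.h>
#include <stddef.h>

/* Return nonzero if the first SUPER_SIZE bytes of a user's memory,
   given as an array of FRAME_SIZE-sized physical frame addresses,
   can be mapped as one page of SUPER_SIZE bytes: the frames must be
   contiguous and the block must start on a SUPER_SIZE boundary.
   Otherwise physmem would first have to migrate the data into such a
   block.  */
static int
frames_allow_superpage (const uintptr_t *frames, size_t nframes,
                        size_t frame_size, size_t super_size)
{
  size_t i;

  if (nframes * frame_size < super_size || frames[0] % super_size != 0)
    return 0;

  for (i = 1; i < super_size / frame_size; i++)
    if (frames[i] != frames[0] + i * frame_size)
      return 0;

  return 1;
}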

> If your code is able to handle arbitrary page sizes (as with fpages)
> or have a configurable list of supported page sizes then you will be
> far better off.

It's not so much about being able to handle them at all - certainly
our interfaces will support that without restriction.  The question
for me is _how_ to handle them intelligently.  Some minimal support
seems easy enough, though (i.e., if things fall naturally into place
for a larger page size to be usable, then it can be used; otherwise
smaller page sizes will be used).  The tricky part is massaging
conditions so that you benefit from larger page sizes where they
don't occur by chance.
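
As an illustration of the "falls naturally into place" case, picking
the largest usable page size for a mapping can be as simple as this
(assuming power-of-two page sizes, which is what L4 fpages give us
anyway):

#include <stdint.h>
#include <stddef.h>

#define MIN_PAGE_SIZE 0x1000

/* Largest power-of-two page size for which the virtual address, the
   physical address and the remaining length all line up; anything
   bigger would need the kind of reorganization discussed above.  */
static size_t
natural_page_size (uintptr_t vaddr, uintptr_t paddr, size_t length)
{
  size_t size = MIN_PAGE_SIZE;

  while ((vaddr | paddr) % (size * 2) == 0 && size * 2 <= length)
    size *= 2;

  return size;
}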

But then, I don't even know what is already known in research and
application about memory management with arbitrary page sizes.  Maybe
some heuristics and strategies already exist.

Thanks,
Marcus




