
Re: Review of Thomas's >2GB ext2fs proposal


From: Neal H. Walfield
Subject: Re: Review of Thomas's >2GB ext2fs proposal
Date: Mon, 16 Aug 2004 16:08:23 -0400
User-agent: Wanderlust/2.8.1 (Something) SEMI/1.14.3 (Ushinoya) FLIM/1.14.3 (Unebigoryƍmae) APEL/10.6 Emacs/21.2 (i386-debian-linux-gnu) MULE/5.0 (SAKAKI)

After rereading the email another three times, I am fairly convinced
that you think my proposal somehow changes the kernel; this is simply
not the case.  (That should also answer questions 1 and 3.)

> 2) Mach won't use more memory under my proposal; you have simply moved
>    around where the memory is.  Instead of taking more memory to hold
>    a table of physical->virtual address mappings in the memory object,
>    you have a table of virtual->backing_store mappings in the pager;
>    but it is the same data and it has to be stored either way.

Well, you have X mappings in the address space, where X is a function
of the mapping cache size and the region size.  That is X
vm_map_entry_t's (if I have understood the internals correctly; I
have not really studied them thoroughly).  I have one vm_map_entry_t.
We both need two hashes in ext2fs itself to track the mappings and
the reverse mappings, so your proposal keeps redundant information in
the kernel.  My thought was that if we let the cache grow to about
2GB with vm_page_size regions, we end up with a lot of vm_map_entry_t
structures, which just sucks up kernel memory to no advantage.  Now,
it is true that this analysis relies on an understanding of the
microkernel; however, my proposal is completely portable within the
Mach 3 framework.
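For concreteness, here is a rough back-of-the-envelope sketch of that
overhead in C; the 4096-byte vm_page_size and the 64-byte per-entry
cost are assumptions for illustration, not figures taken from the
Mach sources:

    /* Back-of-the-envelope only: the page size and entry size below
       are illustrative assumptions, not values from the Mach tree.  */
    #include <stdio.h>

    int
    main (void)
    {
      const unsigned long cache_size = 2UL * 1024 * 1024 * 1024; /* 2GB cache */
      const unsigned long region_size = 4096;  /* one vm_page_size region */
      const unsigned long entry_size = 64;     /* assumed per-entry cost */

      unsigned long entries = cache_size / region_size;
      printf ("%lu map entries, ~%lu KB of kernel memory\n",
              entries, entries * entry_size / 1024);
      /* Prints: 524288 map entries, ~32768 KB of kernel memory.
         The single-mapping scheme needs just one entry.  */
      return 0;
    }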

> 4) And finally, what about data caching--which is vastly more
>    important than mapping caching?  My version has it that data is
>    cached as long as the kernel wants to keep it around, and in a
>    fashion decoupled from mappings.

I evict when the kernel evicts.  This keeps the accounting data
proportional to the data in core.
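A minimal sketch of that bookkeeping, under the assumption of a
hypothetical on_kernel_evict callback standing in for the pager's
data-return handler (the real external-pager routine takes more
arguments), with mapping_remove, reverse_mapping_remove, and
backing_store_write standing in for the two hashes mentioned above
plus the disk write:

    /* A minimal sketch, not the actual ext2fs or libpager code.  */
    #include <mach/mach_types.h>

    extern void backing_store_write (vm_offset_t offset,
                                     void *data, vm_size_t length);
    extern void mapping_remove (vm_offset_t offset);
    extern void reverse_mapping_remove (vm_offset_t offset);

    void
    on_kernel_evict (vm_offset_t offset, void *data, vm_size_t length)
    {
      /* Flush the dirty data, then drop both accounting entries so
         the tables only describe what is actually in core.  */
      backing_store_write (offset, data, length);
      mapping_remove (offset);
      reverse_mapping_remove (offset);
    }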

Thanks,
Neal



