
Re: [PATCH] ext2fs and large stores (> 1.5G)


From: Neal H. Walfield
Subject: Re: [PATCH] ext2fs and large stores (> 1.5G)
Date: 03 May 2003 16:23:38 -0400
User-agent: Gnus/5.0808 (Gnus v5.8.8) Emacs/21.2

> Using some kind of locking and unlocking (request/release) of floating
> metablocks (indirect blocks) is crucial.  This guarantees that a cached
> block will not be replaced by another one.

Of course, I never meant to imply anything else.

> Fortunately there are only
> three places where indirect blocks are ever referred to:
> getblk.c:block_getblk, getblk.c:ext2_alloc_block (to zero the indirect
> block) and truncate.c:trunc_indirect.  All other places can freely use
> b{ptr,offs}* because they deal with metablocks at fixed locations that
> won't suddenly be replaced with something else.
> 
> Here is the solution[1] to the "record_global_poke" problem: the
> .dirty field is replaced by a .dirty_count field.  Now a block in the
> cache is never reused while "use_count || dirty_count".  This forces
> us to use the same number of disk_cache_clear(block) calls (the old
> name was disk_image_clear) as disk_cache_release(block,1) calls.
> One problem arises: when an indirect block is made dirty many times,
> this will increment dirty_count many times, but _pokel_exec will
> decrement dirty_count only once.  To solve this, I made pokel_add
> return a boolean value that indicates whether the passed memory
> region is already in the pokel.  If it is, then disk_cache_clear is
> called.
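
If I follow, the bookkeeping you describe amounts to roughly the
following.  This is only a sketch of my reading, not your patch: the
entry layout and the record_poke/reusable_p helpers are made up for
illustration, while pokel_add (in your modified, value-returning
form), disk_cache_clear, global_pokel, use_count and dirty_count are
the names from your description.

    #include <mach.h>                     /* vm_size_t */

    typedef unsigned int block_t;         /* assumed */
    struct pokel;                         /* from ext2fs's pokel.h */
    extern struct pokel global_pokel;
    /* Modified pokel_add: returns nonzero if the region was already
       present in the pokel.  */
    extern int pokel_add (struct pokel *pokel, void *loc, vm_size_t len);
    extern void disk_cache_clear (block_t block);

    struct disk_cache_entry
    {
      block_t block;      /* disk block currently cached in this slot */
      int use_count;      /* references not yet disk_cache_release'd  */
      int dirty_count;    /* dirty marks not yet flushed by the pokel */
    };

    /* Roughly what a record_global_poke-style path does under this
       scheme: count the dirty mark and queue the region; if the region
       was already queued, undo the extra count, since _pokel_exec will
       decrement dirty_count only once per region.  */
    static void
    record_poke (struct disk_cache_entry *e, void *mapped, vm_size_t len)
    {
      e->dirty_count++;
      if (pokel_add (&global_pokel, mapped, len))
        disk_cache_clear (e->block);
    }

    /* A slot may be reused only when it is neither in use nor dirty.  */
    static int
    reusable_p (struct disk_cache_entry *e)
    {
      return e->use_count == 0 && e->dirty_count == 0;
    }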

I do not like your caching approach very much: it is relatively
heavyweight and is in tension with Mach.  Consider: Mach already implements
a paging scheme which worries about evicting pages: when there is
memory pressure, it chooses a page that it thinks will not be used in
the near future and returns it to the pager (eventually via
pager_write_page), which flushes it to backing store.  When Mach gives
us this hint, we should take as much advantage of it as possible,
e.g. by invalidating the mapping.  In your scheme, not only do we not
take advantage of it, but we are also working against Mach.
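
Concretely, I am imagining something along these lines in the disk
pager's pager_write_page.  Only a sketch: disk_cache_hint_unused is a
hypothetical helper, I am ignoring the file pagers (which the real
function also has to serve), store_write is the libstore call we
already use, and libpager expects us to deallocate BUF.

    /* Sketch; assumes <hurd/pager.h>, <hurd/store.h>, <sys/mman.h>
       and <errno.h>.  */
    extern struct store *store;         /* ext2fs's backing store */
    /* Hypothetical: drop our mapping-area window for PAGE, if there
       is one and it is not currently in use.  */
    extern void disk_cache_hint_unused (vm_offset_t page);

    error_t
    pager_write_page (struct user_pager_info *pager,
                      vm_offset_t page, vm_address_t buf)
    {
      size_t amount;
      error_t err;

      /* Flush the page to backing store, as we do today.  */
      err = store_write (store, page >> store->log2_block_size,
                         (void *) buf, vm_page_size, &amount);
      if (!err && amount < vm_page_size)
        err = EIO;

      /* Mach has just told us it does not expect to need this page in
         the near future; take the hint so the window becomes free.  */
      if (!err)
        disk_cache_hint_unused (page);

      munmap ((void *) buf, vm_page_size);
      return err;
    }

The point is simply that the invalidation happens where Mach's hint
arrives, rather than our guessing elsewhere which pages to drop.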

By using Mach's eviction of a page as an indication that the page will
not be used in the near future, we eliminate most of the current
accounting machinery: Mach's cache of our meta data should nearly
always be smaller than the size of our mapping area for it.  In the
rare case when the entire meta data paging area is full, we can evict
pages on our own using a relatively inefficient scheme to make room.
This case really should be very rare, arising only when Mach has a
huge physical cache of the meta data, and should therefore not affect
performance in a negative way!
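
The fallback itself can stay simple, e.g. a linear scan over the
mapping table for a slot that is neither referenced nor dirty.  A
sketch, building on the disk_cache_entry layout from the sketch
above; disk_cache, disk_cache_size and disk_cache_unmap are made-up
names, while pager_flush_some, diskfs_disk_pager, block_size and
log2_block_size are the usual ones:

    /* struct disk_cache_entry as in the first sketch.  */
    extern struct disk_cache_entry disk_cache[];  /* hypothetical table */
    extern unsigned disk_cache_size;              /* hypothetical       */
    extern void disk_cache_unmap (struct disk_cache_entry *e);

    /* Rare path: every window in the mapping area is occupied, so
       reclaim one ourselves.  A linear scan is fine here precisely
       because this should almost never run.  */
    static struct disk_cache_entry *
    disk_cache_evict_one (void)
    {
      unsigned i;
      for (i = 0; i < disk_cache_size; i++)
        {
          struct disk_cache_entry *e = &disk_cache[i];
          if (e->use_count == 0 && e->dirty_count == 0)
            {
              /* The block is clean, so it is safe to drop Mach's copy
                 of the page as well as our mapping of it.  */
              pager_flush_some (diskfs_disk_pager,
                                (vm_offset_t) e->block << log2_block_size,
                                block_size, 1);
              disk_cache_unmap (e);
              return e;
            }
        }
      return 0;   /* nothing evictable right now; caller must wait */
    }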

Another advantage of exploiting Mach's eviction scheme is that we are
no longer in tension with it: in your current solution, we force Mach
to remove a page from its physical cache when we eliminate a mapping
(using pager_flush_some).  By doing this, we assert that we know
better than Mach which pages will and will not be used in the near
future (i.e. we are taking the responsibility of page eviction away
from Mach).  Do we really know that much better?  I am dubious.

It seems a bit strange to leave mappings in place when Mach evicts a
page: relative to flushing a block to disk or faulting it back into
memory, removing and establishing a mapping is very cheap, so why
leave it taking up memory?  By keeping the cache in sync with the
number of active mappings (which should generally be relatively
small), we can more easily migrate towards a hash based solution in
lieu of the current mapping scheme, i.e. huge arrays: the density of
the mapping table will be much smaller over time, as its size only
reflects the number of pages in Mach's physical cache.
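
For instance, the lookup could become a small chained hash table with
one entry per block that is currently mapped, rather than a slot for
every possible block.  A sketch (block_t as above; everything else is
a made-up name):

    #define DC_HASH_SIZE 512

    struct dc_entry
    {
      struct dc_entry *next;    /* hash chain */
      block_t block;            /* which disk block this window holds */
      void *mapped;             /* where it is mapped in our image    */
      int use_count, dirty_count;
    };

    static struct dc_entry *dc_hash[DC_HASH_SIZE];

    static struct dc_entry *
    dc_find (block_t block)
    {
      struct dc_entry *e;
      for (e = dc_hash[block % DC_HASH_SIZE]; e; e = e->next)
        if (e->block == block)
          return e;
      return 0;
    }

    static void
    dc_insert (struct dc_entry *e)
    {
      struct dc_entry **head = &dc_hash[e->block % DC_HASH_SIZE];
      e->next = *head;
      *head = e;
    }

Entries would be inserted only when a block is actually touched and
removed again when the page is evicted, so the table never holds much
more than what Mach is caching for us.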

A side note: the entire mapping area can now be floating--not just the
indirect area (except for the super block and the group_desc_image;
each of these, however, should not be more than a block or two).

In conclusion: let's use Mach's eviction scheme as much as possible.
This means only setting up mappings on demand (i.e. nothing up front
except for the super block and the group descriptor image); less
accounting on our part; and invalidating mappings when pages are
flushed to backing store in pager_write_page.




