Re: libpager

From: Justus Winter
Subject: Re: libpager
Date: Sun, 06 Aug 2017 18:40:02 +0200

"Brent W. Baccala" <cosine@freesoft.org> writes:

> Hi -
> I've learned some interesting things about libpager that I'd like to share
> with the list.

Cool :)

> I've found two bugs in the existing code that were "fixed" by the new
> demuxer that sequentially services requests on each pager object.
> For example, the data return code sets a flag PAGINGOUT in the pagemap
> before it starts calling pager_write_page (which could be slow; writing to
> a hard drive, say).  Future data returns check the PAGINGOUT flag and wait
> on a condition variable if it's set.  The problem is that if multiple
> threads start waiting on that, pthreads doesn't guarantee what order they
> will run in when the conditional variable is signaled, so the data writes
> can get reordered.  If three data returns come in 1, 2, 3, (maybe because
> pager_sync is called three times), number 1 starts writing, but if it
> doesn't finish quick enough, 2 and 3 can get reordered.
> Except that they can't.  The new demux code queues the second and third
> writes.  They don't process until the first one is done.  The pager object
> is essentially locked until the pager_write_page() completes.
> I went so far as to write a test case to exercise the bug!  Just good
> coding practice - develop tests for your known bugs first.  Then I ran it,
> and it couldn't reproduce the bug!  Only after thinking about the code more
> did I understand why.
> I know the demuxer code was rewritten to avoid thread storms,
> [...]
>  Does anybody remember what characterized the thread
> storms?  What conditions triggered them?  What kind of pager operations
> were being done?

Yes.  I rewrote the demuxing part of libpager.  There were some issues
besides the thread storms (see below); I don't remember all the details,
but the old libpager was multithreaded and ordered the requests using
sequence numbers, which added huge overhead (sleeping threads).

The thread storm issue was specific to ext2fs.  The issue here was that
there are two kinds of pagers, the disk pager and the file pager.  Now
one paging request to the file pager resulted in at least one request to
the disk pager, sometimes more (e.g. for superblock updates).  Combined
with threads sleeping because of the seqno ordering, this led to a huge
number of threads being created, and if one tried to use a bounded
number of threads, it led to deadlocks.  The key insight was to use
separate thread pools for each kind of pager, currently with one worker
thread each, but that is configurable.

> but it's obviously got some issues

What kind of issues?

> and could become a performance bottleneck at some point.  There's no
> good reason to block all access to page 100 while a disk operation
> completes on page 1.

Let's wait until it actually becomes a bottleneck...

> I'm not looking to re-write it right now, but I'm curious.

Are you aware of the pager rework in [0]?  I'm considering reviewing and
merging those changes.  It may make sense to try to come up with a common
API.  What is the state of your library?  Is it a drop-in replacement?

0: https://git.sceen.net/hurd/hurd.git/log/?h=mplaneta/gsoc12/review_v1


