[Gcl-devel] Re: sgc
Cc: address@hidden, address@hidden
From: Camm Maguire <address@hidden>
Date: 01 Oct 2004 10:53:07 -0400
Matt Kaufmann <address@hidden> writes:
> Bob, Camm --
> This is just an FYI, in case you're interested. A while back, the
> idea seems to have arisen to leave SGC off in some cases. (I include
> excerpts from some emails below.) I just thought I'd mention that an
> ACL2 experiment suggests it may be best to leave SGC on for ACL2. I
> think the source code was _very_ similar for the two regression runs,
> and it's been a long time since I've seen a total user time over
> 7900, but I'll rerun with identical source code if you care enough.
> /projects/acl2/v2-9/logs/make-regression-gcl-no-sgc-sept18.log:
> 8145.710u 178.200s 2:24:34.97 95.9% 0+0k 0+0io 13199596pf+0w
> /projects/acl2/v2-9/logs/make-regression-gcl-normal-sept13.log:
> 7850.740u 174.860s 2:32:52.67 87.4% 0+0k 0+0io 12412712pf+0w
Thank you for this. Is the first timing roughly equal to the older
gcl timings just before the significant performance gain with recent
gcl versions?
> Here are the relevant excerpts from Bob's July 17 email.
> > SGC is better when a large product is 'finalized' for use by the
> > user, who will make minimal allocations by comparison to the system
> > as a whole.
As we can see, the issue is predicting what the likely use will be in
the future, i.e. whether the statement above will hold or not.
Before *optimize-maximum-pages*, there was no attempt in GCL to make
such predictions on the basis of run-time gathered statistics. As you
already know, SGC tries to limit the size of the 'effective heap',
making each gc call faster, by marking large sections of memory
read-only and static. Some pages of each type are allocated
read-write for the working set (subject to GC). The user can set
these amounts with 'allocate-sgc, but the defaults are (by now old)
hardwired constants, presumably tuned by hand to an earlier version of
acl2 by Dr. Schelter. *optimize-maximum-pages* will now adjust these
defaults too after a little runtime (which in experience seems to
settle rather quickly), attempting to minimize (the number of gc
calls) times (the heap size), i.e. roughly the total gc time. These
two items appear to be working well together at present (i.e. they
are no longer interfering pathologically).
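For concreteness, here is a minimal sketch of how these knobs are
exposed at the GCL prompt. The exact argument list of
si::allocate-sgc has varied between versions, so treat the argument
order and numbers below as illustrative only:

  ;; Let GCL retune maximum page allocations at run time to
  ;; minimize estimated total gc cost, as described above.
  (setq si::*optimize-maximum-pages* t)

  ;; Turn SGC on: stable sections of the heap become read-only and
  ;; static, and each gc sweeps only the read-write working set.
  (si::sgc-on t)

  ;; Override the hardwired working-set defaults for one type
  ;; (illustrative arguments: type, min pages, max pages, percent free).
  (si::allocate-sgc 'cons 100 2000 20)

  ;; Turn SGC back off, e.g. before a phase that writes to the
  ;; whole heap.
  (si::sgc-on nil)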
So the crux of the sgc/no-sgc issue now appears to be the tradeoff
between a smaller effective heap and the overhead sgc incurs when the
hole is overrun. If one never overruns the hole, sgc should always be
a win, though a small one if one is writing to the whole heap anyway.
If one overruns the hole repeatedly, sgc will always lose, at least
slightly.
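As a back-of-envelope illustration of the tradeoff (all numbers
purely hypothetical):

  ;; Hypothetical cost model: total gc time ~ (gc calls) * (pages
  ;; swept per call) * (cost per page), plus a fixed penalty for
  ;; each hole overrun while sgc is on.
  (defun estimated-gc-time (gc-calls swept-pages cost-per-page
                            overruns overrun-penalty)
    (+ (* gc-calls swept-pages cost-per-page)
       (* overruns overrun-penalty)))

  ;; sgc: sweeps a small working set, pays for two overruns -- a win.
  (estimated-gc-time 100  500 0.001 2 5.0)   ; => 60.0
  ;; no-sgc: every gc sweeps the full heap, but no overrun penalty.
  (estimated-gc-time 100 5000 0.001 0 0.0)   ; => 500.0
  ;; sgc with frequent overruns: the penalty dominates -- a loss.
  (estimated-gc-time 100  500 0.001 200 5.0) ; => 1050.0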
Needless to say, this situation can be improved in the future, time
permitting, in at least two ways -- one can collect statistics on the
hole overrun overhead and disable/enable sgc as deemed optimal. More
significantly, per-page write statistics could be collected to detect
the completion of major heap operations, and to turn on sgc using
exactly those pages still being actively written. All of this is akin
to making GCL's gc a bit more generational than it already is.
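Nothing like the second idea exists yet; purely as a hypothetical
sketch, with every name below invented:

  ;; Hypothetical: count recent writes per page, then enable sgc
  ;; with only the still-hot pages left read-write.
  (defvar *page-write-counts* (make-hash-table :test #'eql))

  (defun note-page-write (page)        ; called from a write barrier
    (incf (gethash page *page-write-counts* 0)))

  (defun hot-pages (threshold)
    "Pages written more than THRESHOLD times since the last reset."
    (let (hot)
      (maphash #'(lambda (page count)
                   (when (> count threshold) (push page hot)))
               *page-write-counts*)
      hot))

  ;; When the overall write rate collapses -- i.e. a major heap
  ;; operation has completed -- mark everything else read-only and
  ;; turn sgc on for just these pages:
  ;;   (enable-sgc-for-pages (hot-pages 10)) ; invented entry point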
In general, there is a notable advantage to being compact -- Matt
found as much when he decided to turn off *optimize-maximum-pages*
until the final save. But it is typically a much smaller gain in
today's memory environment than the penalty incurred in spinning gc
operations in a small space.
I'd appreciate any insights any of you may have, as your work
proceeds, on strategies for GCL improvements.
> I agree. And there are also cases when SGC is better for a finalized
> system in which the user will make huge allocations. The Nqthm or
> ACL2 systems are general in a way that Lisp itself is: the user is
> able to make an unlimited number of definitions and prove an
> unlimited number of theorems about the concepts defined; those proofs
> may involve huge allocations. Sgc used to be of enormous value for
> the Nqthm user, because sgc so reduced paging when available ram was
> 1mb or so. sgc's value today, at least on the Nqthm examples we have,
> seems greatly reduced, even nonexistent, since the examples can be
> done entirely in ram on a typical workstation. Today I'd recommend
> that someone using Nqthm or ACL2 leave sgc off unless one starts to
> see serious swapping, given the very fine advance that
> optimize-maximum-pages provides over the miserly gc allocation
> strategy GCL used to have. The value of sgc may well return for Nqthm
> or ACL2 or similar systems with the advent of 64 bit machines and
> Lisp images considerably in excess of the available ram. For example,
> the fm9001 microprocessor, verified in the Nqthm example
> fm9001-replay.events, is less than 1% the size of a contemporary
> commercial microprocessor. A similar proof for a contemporary
> processor might involve dozens of gigabytes of conses.
> -- Matt
Camm Maguire address@hidden
"The earth is but one country, and mankind its citizens." -- Baha'u'llah