Re: [Gcl-devel] address@hidden: Re: -mcmodel=large

From: Matt Kaufmann
Subject: Re: [Gcl-devel] address@hidden: Re: -mcmodel=large
Date: Mon, 23 Sep 2013 10:08:07 -0500

Hi, Camm --

Well done!  I ran several ACL2 regressions, as shown below, and they
all passed, including one for ACL2(h)!  I'm really happy about this;
thank you so much for staying with it.

Now that everything is working, perhaps you'd be interested in the
following times reported for various regressions (all successful).
Comments are below.  You might find some of these to be slightly
disappointing, but at this point I'm just thrilled that the
regressions completed without error.

ACL2 built on non-ANSI GCL 2.6.10pre:
40678.982u 1013.967s 1:45:06.91 661.0%  0+0k 153400+6615176io 9pf+0w

ACL2 built on ANSI GCL 2.6.10pre:
67950.466u 1050.753s 2:44:58.65 697.0%  0+0k 327160+6955712io 7pf+0w

ACL2 built on Clozure Common Lisp (CCL):
27651.872u 495.818s 1:13:38.33 637.0%   0+0k 20552+1892520io 0pf+0w

ACL2 built on non-ANSI GCL 2.6.8 (as of 9/11/2013):
33235.713u 1275.991s 1:29:23.91 643.4%  0+0k 156624+4638504io 25pf+0w

ACL2(h) built on ANSI GCL 2.6.10pre:
66102.819u 2103.539s 2:45:35.36 686.5%  0+0k 430272+30767880io 1325pf+0w
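
For a quick side-by-side reading of those numbers, the wall-clock times
reduce to rough ratios.  Here is a back-of-envelope sketch in Common
Lisp, with the times hand-copied from the stats lines above:

```lisp
;; Back-of-envelope ratios from the wall-clock times quoted above.
(defun hms->seconds (h m s)
  "Convert an H:MM:SS.ss wall-clock reading to total seconds."
  (+ (* 3600 h) (* 60 m) s))

(let ((non-ansi-2.6.10pre (hms->seconds 1 45 6.91))   ; 1:45:06.91
      (ansi-2.6.10pre     (hms->seconds 2 44 58.65))  ; 2:44:58.65
      (ccl                (hms->seconds 1 13 38.33))  ; 1:13:38.33
      (non-ansi-2.6.8     (hms->seconds 1 29 23.91))) ; 1:29:23.91
  (format t "ANSI / non-ANSI 2.6.10pre:    ~,2f~%"
          (/ ansi-2.6.10pre non-ansi-2.6.10pre))      ; roughly 1.57
  (format t "2.6.10pre / 2.6.8 (non-ANSI): ~,2f~%"
          (/ non-ansi-2.6.10pre non-ansi-2.6.8))      ; roughly 1.18
  (format t "non-ANSI 2.6.8 / CCL:         ~,2f~%"
          (/ non-ansi-2.6.8 ccl)))                    ; roughly 1.21
```

By wall-clock time, then, the ANSI build runs at roughly 1.6x the
non-ANSI build, and the non-ANSI 2.6.8-vs-CCL gap comes out near 21%.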

These all processed the same input files (in ACL2 lingo: certified the
same books), except that the last processed extra input files.  (Full
disclosure: the files processed by the first of these were not
identical to the others, but I'm confident that the differences were
insignificant.)

I don't have a clue why the ANSI version of 2.6.10pre is slower than
the non-ANSI version (and I realize that you might).  Also, it's a bit
surprising that the times for ANSI GCL 2.6.10pre -- ACL2 and ACL2(h)
-- are so close; since the ACL2(h) regression processes several extra
input files not in the ACL2 regression, I'd normally expect the
ACL2(h) regression to take somewhat longer.  For example, the
following web page (which is for ACL2 6.2 and hence is a few months
old) shows that for CCL there was an increase from about 57 minutes
for ACL2 to about 67 minutes for ACL2(h).


(In fact I wondered if the first ANSI time stats line above came from
an ACL2(h) run by mistake, but I checked carefully.)  The page above
also shows that GCL 2.6.8pre was a bit slower than CCL (perhaps about
8%); the gap has widened somewhat (see stats above), but GCL 2.6.8 is
still competitive with CCL.  (I don't know if the widening is due to
improvements in CCL or slowdown in GCL 2.6.8.)

Clearly 2.6.8 (non-ANSI) is faster than 2.6.10pre (non-ANSI).  You had
wondered about this, I think, but till now I wasn't able to complete a
recent 2.6.8 regression.

Thanks again; this is really great.

-- Matt
   From: Camm Maguire <address@hidden>
   Date: Sun, 22 Sep 2013 09:37:26 -0400


   OK, current Version_2_6_10pre does the regression-fresh without error
   when preceded by

   (si::allocate 'contiguous 15000 t)
   (si::allocate-sgc 'contiguous 15000 100000 10)

   If you have a git repository checked out to this version, you can cd
   there and do a 'git pull' instead of having to clone the repository all
   over again.  'git status' will tell you what version is checked out.

   I am suggesting leaving the hole size and log-maxpage-bound alone.

   The default acl2 build without the extra contiguous allocation uses
   about 12000 pages (apparently all that compiled code in TMP1).  This is
   sufficient for all books except our friend elementary-bounders, where
   the last load occurs over the 2GB limit.  One can probably do with a
   smaller allocation (but greater than the default number as reported by
   room), but 15000 works.

   Please let me know what you think, or if this requirement makes things
   'too fiddly'.  As I said earlier, appending "-mcmodel=large" to
   compiler::*cc* with no allocation also works at UT, but has the problem
   that it does not yet compile gcl itself.
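
   For concreteness, the append Camm describes could look like this in a
   GCL session (a sketch only; compiler::*cc* is GCL's C-compiler command
   string, and the exact flags already in it vary by platform):

   ```lisp
   ;; Sketch: append -mcmodel=large to GCL's C compiler command line.
   ;; compiler::*cc* holds the cc invocation GCL uses to compile the
   ;; C code it generates for Lisp functions.
   (setq compiler::*cc*
         (concatenate 'string compiler::*cc* " -mcmodel=large"))
   ```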

   Meanwhile, all autobuilds except ia64 look fine at the moment, so we're
   getting close to release, hopefully within a few days.

   Take care,

   Matt Kaufmann <address@hidden> writes:

   > Hi, Camm --
   > I'm happy to "hold off" -- but I'm not sure what activity I'm
   > postponing.  Are you referring to my offer to run a new regression,
   > made within the last hour (below)?  If so, there were a few questions
   > in there to which I'd first like the answer.
   >   Hi, Camm --
   >   Thanks for the progress.
   >   I think you're suggesting that I keep the changes in
   >   /projects/acl2/camm-09-19/from-matt-09-19/acl2-init.lisp (as compared
   >   to /projects/acl2/camm-09-19/acl2-init.lisp), so that ACL2 no longer
   >   messes with the hole size, but that I do NOT keep the change in
   >   /projects/acl2/camm-09-19/from-matt-09-19/acl2.lisp (as compared to
   >   /projects/acl2/camm-09-19/acl2.lisp), which was calling
   >   si::set-log-maxpage-bound.  Do I have that right?  Have you run a
   >   successful regression at UT CS with those changes?  If not, I don't
   >   mind doing so for you, using an updated GCL.  Should I obtain/install
   >   GCL as you suggested before, like this?
   >   git clone git://git.sv.gnu.org/gcl.git
   >   cd gcl
   >   git checkout Version_2_6_10pre
   >   cd gcl
   >   ./configure && make 
   >   Thanks --
   >   -- Matt
   > Thanks --
   > -- Matt
   >    From: Camm Maguire <address@hidden>
   >    Date: Sat, 21 Sep 2013 08:59:28 -0400
   >    Greetings!  Just a quick note that while my regression built atop gcl
   >    with --enable-debug completed without failure, an issue with C
   >    optimization turned on has just presented itself, so if you want to hold
   >    off for a little, I'll let you know when this is resolved.
   >    Take care,
   >    -- 
   >    Camm Maguire                                        address@hidden
   >    "The earth is but one country, and mankind its citizens."  --  Baha'u'llah

   Camm Maguire                                     address@hidden
   "The earth is but one country, and mankind its citizens."  --  Baha'u'llah
