
[Gcl-devel] Re: Large GCL Configurations


From: Camm Maguire
Subject: [Gcl-devel] Re: Large GCL Configurations
Date: 11 Apr 2004 08:47:09 -0400
User-agent: Gnus/5.09 (Gnus v5.9.0) Emacs/21.2

Greetings!  

"Warren A. Hunt Jr." <address@hidden> writes:

> Hi Camm,
> 
> Again, thanks for all of your hard work on GCL.  Its constant utility
> to me is hard for me to even measure.
> 

Thanks for this feedback.  

> Bob Boyer and I are working with large hash tables and very large tree
> structures.  I am interested in a (GCL) Lisp image that would permit
> 1,000,000,000 or more CONS cells and a number of hash tables each with
> millions of entries.  For a 32-bit implementation, we can only get
> about 75,000,000 CONS cells before we can't allocate (see below).  I
> understand that a CONS will require 20 or 24 bytes, so I will have
> to buy a machine with 24+ GBytes of memory.  This fall we are
> expecting delivery of an 8-way Itanium box with 32 GBytes of memory.
> We also can get access to an IBM Power 4 Regatta system with 200+
> GBytes of memory.

Sounds like an interesting project.  GCL is ready to go for Itanium,
and already supports Maxima, ACL2 and Axiom there for Debian.  In
fact, I've just tested a few large-memory options on the Itanium box
to which I have access, and discovered that a few variables internal
to GCL need redefining from int to long for a case this big.
Everything else works as expected up to the point where we try to
allocate the initial allotment, which is ~ 1/10 of MAXPAGES, and which
fails on this machine as it has too little physical memory.  In short,
aside from a few minor adjustments like these, I can see nothing that
would limit the memory addressable in the GCL heap other than the
physical memory limits of the machine.

Please know that on Itanium, GCL builds its images a bit differently
-- the function (compiler::link) is used where one would
traditionally load a compiled object and dump the image with
save-system.  While fully functional, we would like to have the
load/save of compiled Lisp objects supported universally at some
point, given its simplicity.

I've never had access to a Power, but I believe it is similar to the
PowerPC, in which case GCL should be ready with at most a very few
minor modifications.

I'd also like to mention one option, in case your problem is
parallelizable at a coarse grain: GCL can interface with MPI
implementations to run across a cluster, expanding the available
memory that way as well.

> 
> Here are my questions.  Do you know what the largest memory image that
> GCL can manage?  Do you know the configuration?  Could you put me in
> touch with someone you know that uses large configurations?

I'll post your request to the list, but at present I know of no one
using GCL in this very large size range.

A few other comments below.


> 
> Thanks,
> 
> Warren
> ++++++
> 
> I executed the following function
> 
>   (defun count-cons (inc cnt)
>     (let ((lst (make-list inc))
>           (cnt (1+ cnt)))
>       (format t "Number of distinct CONS elements:  ~12D." (* inc cnt))
>       (terpri)
>       (cons lst (count-cons inc cnt))))
> 
> with the call
> 
>   (count-cons 1000000 0)
> 
> on two Lisp implementations.  Using the biggest version of GCL that I
> can build, I got:
> 
> 
>   ...
>   Number of distinct CONS elements:      76000000.
>   Number of distinct CONS elements:      77000000.
> 
>   Unrecoverable error: Can't allocate.  Good-bye!.
>   Abort
> 
> And, using OpenMCL I got:
> 
>   Number of distinct CONS elements:     120000000.
>   Number of distinct CONS elements:     121000000.
>   Bug in MCL-PPC system code:
>   Error reporting error
>   Continue/Debugger/eXit <enter>?
>   X
>   [Warren-Hunts-Computer:~/f/acl2/ccl] warren% 
> 
> It appears that both implementations "crash" at about the same place,
> and that is consistent with the memory limitations imposed by the
> Linux and MacOS X operating systems.  Certainly in Linux (and I think
> similarly in MacOS X), the heap is allowed to grow upward until
> addresses reach one GByte.  At that point, a call to sbrk to raise
> the break will fail.  The reason that OpenMCL can allocate more CONS
> elements is that an OpenMCL CONS element only requires 8 bytes,
> whereas a GCL CONS element requires 12 bytes.
> 

Yes, of course for your job a 32-bit machine is completely impossible
on either system.  (BTW, you can address up to 3, or maybe 4, GB on a
32-bit Linux box, but you need at least 8.)

GCL uses 3 machine words per cons at present, meaning 12 bytes on
32-bit and 24 bytes on 64-bit.  We could reduce this to 2 words at
some point, at the cost of considerable complexity in the code, but we
would still need extra space at GC time to build a mark table for the
cons space -- it need not be as large as one word per cons, but it
will be something.  The way I can envisage this happening is to offset
cons pointers by one bit and then replace every object indirection
with a correcting '&= ~0x1'.  Needless to say, this is quite complex
to get right, and the benefit would be something less than a 50% gain
in cons space.  It would still likely be useful, and should be done at
some point, but might not be the highest priority, especially given
the ANSI compliance level of GCL at present.

Take care,

-- 
Camm Maguire                                            address@hidden
==========================================================================
"The earth is but one country, and mankind its citizens."  --  Baha'u'llah



