
Re: increase stacksize above 64MB?

From: Bowie Owens
Subject: Re: increase stacksize above 64MB?
Date: Wed, 23 Jan 2002 08:47:32 +1100
User-agent: Mutt/1.2.5i

On Tue, Jan 22, 2002 at 04:04:45PM +0100, Johannes Middeke wrote:
> Hi,
> I am using an FD program to calculate some really large amounts of data, but
> I always get a stack overflow (even though cstr and trail are both at 64 MB).
> And when I try to allocate more memory (the machine I'm using has about
> 500 MB + swap), I get a memory allocation fault :(

> Is there any way to get a larger stack size? Or can a CLP program be
> optimized in a simple(!) way? (I'm new to this style of programming.)

Check the limits on your process's data size. If there is a soft limit on
the data size, gprolog will be unable to allocate more memory than that,
even if it is available. You can adjust the limits from your shell: use the
ulimit builtin under bash, or limit/unlimit under tcsh.
It is possible to work with stacks much larger than 64 MB.
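As a sketch of what that looks like under bash (the 256 MB figure is just an illustrative value; GNU Prolog also reads stack sizes from environment variables such as TRAILSZ and CSTRSZ, given in kilobytes, so check the manual for your version):

```shell
# Show the current soft and hard limits on the data segment (in KB).
ulimit -S -d
ulimit -H -d

# Raise the soft data-segment limit for this shell and its children,
# e.g. to 256 MB (the value is in kilobytes); it cannot exceed the
# hard limit.
ulimit -S -d 262144

# Optionally enlarge the gprolog stacks (sizes in KB), then start
# gprolog from the same shell so it inherits the new limit.
TRAILSZ=131072 CSTRSZ=131072 gprolog
```

Note that ulimit only affects the current shell and processes started from it, so run gprolog from the same session (or put the ulimit call in your shell startup file).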

As for reducing memory consumption, the first thing to look for is
unwanted non-determinism. Eliminate any unnecessary choice-points;
setting up your problem and its constraints should normally be purely
deterministic. The other thing to remember is that constraint
representation matters a great deal. Constraints have complexity in
much the same way that algorithms have time and space complexity:
there is often more than one way to express a constraint, and
different representations will consume different amounts of memory
and propagate more or less effectively.
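To illustrate the determinism point, here is a minimal sketch (setup/1 is a hypothetical predicate standing in for your own problem-construction code; fd_labeling/1 is the GNU Prolog built-in):

```
% Sketch: keep the setup phase deterministic.  Wrapping setup/1 in
% once/1 discards any choice-points it leaves behind, so backtracking
% happens only during labeling, not while re-posting constraints.
solve(Vars) :-
        once(setup(Vars)),   % setup should succeed exactly one way
        fd_labeling(Vars).   % only the search itself backtracks
```

If setup/1 itself contains clauses that can match more than one way, adding cuts there (or rewriting it so only one clause applies) has the same effect and frees the associated trail and choice-point space.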

-- Bowie Owens

CSIRO Mathematical & Information Sciences
phone : +61 3 9545 8055
email : address@hidden
