
Re: VM crashes on big INBOX file ==> problem found!!!

From: Baoqiu Cui
Subject: Re: VM crashes on big INBOX file ==> problem found!!!
Date: Thu, 20 Dec 2007 14:21:04 -0800

Sorry for the late reply.  I wanted to look deeper into the VM code but
could not find the time to do so.  It might be easier for the new
maintainer of VM, Robert Widhopf-Fenk, to take a look at this problem
and see whether anything can be changed in VM.  I am copying this
email to Robert...

Thanks for looking into this problem, Richard!

- Baoqiu

Richard Stallman writes:
 >     garbage collection is triggered and function mark_object() (in alloc.c)
 >     is recursively called about 29,885 times (see the backtrace info
 >     below)!!!  So many levels of mark_object() calls make the stack
 >     overflow, causing a segmentation fault.
 >
 > I wonder if this indicates a bad choice of data structures.
 > mark_object calls itself recursively for many kinds of pointers,
 > but it is supposed to loop rather than recurse for cdr pointers.
 > This is so that long lists do not cause recursion.
 >
 > Most data structures don't have a tremendous amount of nesting
 > in the car direction.  I wonder what sort of data structures made
 > 29,885 recursive calls necessary.  Perhaps we should change those
 > data structures or else change the garbage collector so it recurses
 > less.
 >
 > For instance, maybe a lot of this recursion goes thru symbols.  If so,
 > here is an idea.  Suppose that when mark_object finds a symbol which
 > is not yet marked, and is interned in the main obarray, it sets a bit
 > "needs to be marked" in the symbol.  Then increment a counter
 > which records how many symbols are in this state.
 >
 > Then gc_sweep could end by scanning the main obarray over and over,
 > marking those symbols, until the counter goes to zero.
 >
 > Depending on the nature of the problem, this might or might not
 > help much.  If the main data types involved are others,
 > a different solution might help.
