
From: Mike Thomas
Subject: RE: [Gcl-devel] Re: gcl_signal
Date: Tue, 29 Jun 2004 11:05:39 +1000

Hi Camm.

| > In HEAD I increased the declaration size to 32 to cover the max
| number of signals.  I also added a definition of
| > INSTALL_SEGMENTATION_CATCHER to mingw.h to address the problem
| of an immediate exit from GCL whenever a seg fault
| > occurred on Windows.  This allows examination of the Lisp
| environment from the Lisp debugger after a seg fault, so for
| > example, the gcc 3.3.3 crash at specfn.o load in the Maxima
| build is reported as appended below.  The down side is that
| > it stops the problem from dumping out to the C debugger - ie
| for that you need to rebuild without the handler.
| >
| Is this perhaps due to the way mingw 'fakes' unix signals?

For clarity - it is actually the Microsoft Visual C runtime library
(MSVCRT.DLL, part of all modern Windows systems) which does this.  MinGW32
just hooks up to that library with a thin covering layer and a few minor
additions - hence its name, "Minimalist GNU for Windows 32".  Whether that
layer affects the discussion below I don't yet know.

Unfortunately I don't know how the MSVCRT implements signals - they are not
part of the OS as such.  If we were doing the job properly, we would at
least consider replacing the entire GCL signal system on Windows but I
haven't thought of a good way to do this as I don't really understand the
signal system to start with.  As an example of the kind of issues involved,
there is a list of the functions which are safe to call from a signal
handler - it seems not very many at all.

Add to this the fact that setjmp/longjmp doesn't preserve register
variables (I believe that most modern compilers ignore the register
modifier anyway) and it is hard to work out what is best.
| My gdb
| will first stop at the sigsegv, then proceed to the handler when I
| continue.  In any case, I think you can break at error and still get a
| stack trace across the signal handler to the location of the fault.

I'll try this at some stage.  I was referring to running GCL without the
debugger.  When GCL intercepts the segmentation "signal", it does not fall
through to the OS default debugger.

| Just wondering if this is helping in any of the known problem areas.

No, yes and no.

I went through the signal code in the hope of finding a miracle cure and
failed.  On the other hand, it has provided a neater way to get at the
Lisp-side point of failure than print debugging.  But no, it has not yet
helped my attempts to sort out an Axiom problem of a similar kind.

I am chasing the gcc 3.3.3/3.4.0 Maxima specfn crash, which I have so far
traced from the C side as far as the execution of _init_specfn in
fasload() (and, on a much earlier occasion, into the array initialisation
code), and from the Lisp side using both the above-mentioned Lisp debugger
backtrace and Lisp-style print debugging.  The question is why the array
initialisation causes a segfault.

My next steps will be to look at the C code being generated, and possibly
to produce a minimal Lisp file which can be loaded without pulling in the
entire Maxima infrastructure - I got stuck over the weekend on how to get
*ARRAY defined.

I also intend to trace further with the debugger through the array
"cursor" initialisation, which I have found particularly difficult to
understand in previous attempts.


Mike Thomas.
