
Re: [bug-gawk] Memory leak


From: Andrew J. Schorr
Subject: Re: [bug-gawk] Memory leak
Date: Wed, 29 Mar 2017 14:42:33 -0400
User-agent: Mutt/1.5.21 (2010-09-15)

Hi,

On Wed, Mar 29, 2017 at 04:35:34PM +0000, Stephane Delsert wrote:
> The two other reports are the results of processing 1MM and 2MM records
> with the additional messages.

Hmmm. This is strange. If you load over a million records, how many are going
into the tab_store array? The files you attached show that only 800 NODEs are
created, yet there should be a minimum of 1 NODE for each record loaded. If you
ran this properly, it seems to indicate that the input file contains mostly
duplicate records that are getting filtered out; is that correct?  If that's
the case, then it would suggest that we are not leaking NODE or BUCKET items
but something else.

I do see that the number of blocks allocated went up linearly (from 59,366 to
118,397), as did the number in use at exit, so perhaps we are losing some other
type of malloc'ed object. In that case, it seems clear that valgrind believes
the leaked blocks are still reachable, so the valgrind output is not super
helpful in that regard (the "possibly lost" value for the big case shows only
16 blocks lost). When you ran on 20MM records, did it still allocate only 800
NODE objects?
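
For what it's worth, here is a minimal sketch of the loading pattern I am
imagining (the choice of $1 as the key is my assumption; only the tab_store
name comes from your messages):

    # Hypothetical loader: gawk keeps one array element per distinct key,
    # so a duplicate key overwrites the existing element instead of
    # growing the array.
    { tab_store[$1] = $0 }
    END { print length(tab_store), "distinct keys stored" }

With input like that, length(tab_store) stays small no matter how many lines
are read, which would be consistent with seeing only ~800 NODEs after loading
millions of records.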

The code in array.c:assoc_list seems to take care to free the instructions
that it allocates to run the user's comparison function, but I suppose the
leak could be elsewhere.
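
A minimal script that goes down that assoc_list path looks something like
this (the function name and sort order are illustrative, not taken from your
script):

    # Traverse an array with a user-defined comparison function; gawk's
    # assoc_list allocates instructions to call cmp_str for each comparison.
    function cmp_str(i1, v1, i2, v2) {
        return (i1 < i2) ? -1 : (i1 != i2)   # sort by index, ascending
    }
    END {
        PROCINFO["sorted_in"] = "cmp_str"
        for (k in tab_store)
            print k
    }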

You might need to run valgrind with --leak-check=full --show-reachable=yes to
get to the bottom of this. I don't see any obvious leaks when I run that on the
344-record file that you sent.
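
In case it helps, the full invocation I have in mind looks something like this
(the script and data file names are placeholders):

    valgrind --leak-check=full --show-reachable=yes \
        gawk -f yourscript.awk yourdata.txt 2> valgrind.log

That should list every block still allocated at exit, with a stack trace for
each allocation site, rather than just the summary counts.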

Regards,
Andy


