
Re: Huge execution time in 4.2, WAS: for; do; done regression ?

From: Chet Ramey
Subject: Re: Huge execution time in 4.2, WAS: for; do; done regression ?
Date: Fri, 07 Jan 2011 09:48:24 -0500
User-agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en-US; rv: Gecko/20101207 Thunderbird/3.1.7

On 1/7/11 5:09 AM, Jan Schampera wrote:
> Alexander Tiurin wrote:
>> ~$ time for i in `seq 0 10000` ; do echo /o/23/4 | cut -d'/' -f2 ; done > /dev/null
> To track this a bit, I ran the exact command several times in a Bash 3.2,
> seeing increasing execution times (40s up to ~2min), as reported.
> I knew there were several bugs about filedescriptors and leaks fixed since
> then, so I tested it in 4.2 beta. The first run took about 27 minutes(!),
> the second run still goes on.
> I can't imagine this is just some debugging code still active (it's a beta).

Imagine.  Anything that doesn't have a version tag of `release' has DEBUG
enabled for the preprocessor, which enables MALLOC_DEBUG.  If you're using
the bash malloc, MALLOC_DEBUG turns on extensive memory checking and
allocation tracing.  All active allocations are kept in a hash table with
8K entries, and when you fill up that hash table, each new allocation has
to search the entire table before throwing away an old entry.  That quickly
degenerates.  This can be fixed, but it hasn't become a priority yet.


``The lyf so short, the craft so long to lerne.'' - Chaucer
                 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, ITS, CWRU    address@hidden    http://cnswww.cns.cwru.edu/~chet/
