compgen is slow for large numbers of options
From: Richard Neill
Subject: compgen is slow for large numbers of options
Date: Wed, 14 Mar 2012 17:44:32 +0000
User-agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.2.27) Gecko/20120216 Lightning/1.0b2 Thunderbird/3.1.19
Dear All,
I don't know for certain whether this is a bug per se, but I think
"compgen -W" is much slower than it "should" be when given a large
number (10000+) of options.
For example (on a fast i7 2700 CPU), I measure:
compgen -W "`seq 1 50000`" 1794 #3.83 s
compgen -W "`seq 1 50000 | grep 1794`" #0.019 s
In these examples, I'm using `seq` as a trivial way to generate some
data, and picking 1794 as a totally arbitrary match.
In the first example, compgen does the filtering over the whole list,
whereas in the second, grep does the filtering first and I obtain the
same result very much faster.
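For reference, here is a minimal sketch of how the two timings can be
reproduced with bash's `time` keyword (the exact figures will of course
vary by machine):

    words=$(seq 1 50000)

    # compgen filters the full 50000-word list itself.
    time compgen -W "$words" 1794 > /dev/null

    # grep shrinks the list first, so compgen has almost nothing left to do.
    time compgen -W "$(grep 1794 <<< "$words")" > /dev/null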
If I increase the upper number by a factor of 10, to 500000, these times
become 436 s (yes, really, 7 minutes!) and 0.20 s respectively. A
tenfold increase in the word list makes compgen roughly 100x slower, but
the grep pipeline only about 10x slower, which suggests that the
algorithm used by compgen is O(n^2) in the number of words, whereas the
grep approach is O(n).
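One quick (and admittedly rough) way to check that scaling is to time
compgen for a few list sizes and watch how the runtime grows; if the
cost is O(n^2), each doubling of n should roughly quadruple the time:

    for n in 12500 25000 50000; do
        words=$(seq 1 "$n")
        echo "n=$n"
        { time compgen -W "$words" 1794 > /dev/null ; } 2>&1 | grep real
    done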
For a real-world example, see:
https://bugs.mageia.org/show_bug.cgi?id=373#c8
in which we are using completion for package management. In that case,
the number of candidate options is 43031.
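As a sketch of the kind of workaround involved there (the function name
and the package-list source below are made up for illustration, not the
actual Mageia completion code), the grep pre-filter can live inside the
completion function so that compgen only ever sees a handful of
candidates:

    # Hypothetical completion function for a large package list.
    _bigpkg_complete() {
        local cur=${COMP_WORDS[COMP_CWORD]}
        # Placeholder source of candidate names, one per line; in practice
        # this would come from the package manager's own query command.
        local pkgs
        pkgs=$(grep -F -- "$cur" /var/cache/pkg-names.txt 2>/dev/null)
        # grep -F does a literal substring match, which safely over-matches;
        # compgen then applies the exact prefix filter to the reduced list.
        COMPREPLY=( $(compgen -W "$pkgs" -- "$cur") )
    }
    complete -F _bigpkg_complete bigpkg

Note that if the current word is empty, grep matches everything and the
original slowdown returns, so the pre-filter only helps once the user
has typed at least one character.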
I hope this is helpful.
Richard