Re: processing a large buffer contents to a hash table
From: Xah Lee
Subject: Re: processing a large buffer contents to a hash table
Date: Fri, 9 Jan 2009 09:47:06 -0800 (PST)
User-agent: G2/1.0
On Jan 9, 8:56 am, Seweryn Kokot <sewko...@gmail.com> wrote:
> Hello,
>
> I'm trying to write a function in elisp which returns the word
> frequency of a buffer's content. It works, but for a large file
> (around 250,000 words) it takes 15 seconds, while a similar function
> in Python takes 4 seconds.
>
> Here is the function, which processes the buffer word by word and
> writes word frequencies to a hash table.
>
> (defun word-frequency-process-buffer ()
>   (interactive)
>   (let ((buffer (current-buffer)) bounds beg end word)
>     (save-excursion
>       (goto-char (point-min))
>       (while (re-search-forward "\\<[[:word:]]+\\>" nil t)
>         ;; (while (forward-word 1)
>         (setq bounds (bounds-of-thing-at-point 'word))
>         (setq beg (car bounds))
>         (setq end (cdr bounds))
>         (setq word (downcase (buffer-substring-no-properties beg end)))
>         (word-frequency-incr word)))))
>
> The main function is word-frequency which operates on the current buffer
> and gives *frequencies* with word statistics.
>
> Any idea how to optimize `word-frequency-process-buffer' function?
>
You're doing many unnecessary things.

You don't need save-excursion, and you don't need to compute the word
boundaries with bounds-of-thing-at-point at all. Since
re-search-forward already sets the match data, you can just use
match-string to capture the word and feed it to word-frequency-incr.
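For illustration, a minimal sketch of the tightened loop, assuming the
`word-frequency-incr' helper from the original post is defined:

    (defun word-frequency-process-buffer ()
      (interactive)
      (goto-char (point-min))
      (while (re-search-forward "\\<[[:word:]]+\\>" nil t)
        ;; the match data from re-search-forward gives us the word
        ;; directly; no need for bounds-of-thing-at-point
        (word-frequency-incr
         (downcase (match-string-no-properties 0)))))

Using match-string-no-properties also replaces the separate
buffer-substring-no-properties call, since it returns the matched text
stripped of text properties in one step.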
...
Xah
∑ http://xahlee.org/
☄