Re: [PATCH] Elpa: Pinpoint semantics of `seq-subseq' for streams

From: Clément Pit--Claudel
Subject: Re: [PATCH] Elpa: Pinpoint semantics of `seq-subseq' for streams
Date: Wed, 14 Sep 2016 22:00:19 -0400
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Thunderbird/45.2.0

On 2016-09-14 20:51, John Mastro wrote:
> Clément Pit--Claudel <address@hidden> wrote:
>>> I think this is actually a very good example why it is good to
>>> forbid negative indexes.  If you are interested in the last n lines
>>> of a file, why would you dissect the complete file (or buffer) into
>>> lines and throw away nearly all of the result?
>> Because it's much more memory-efficient, as long as the file's lines
>> are short :) Note that I was careful to say file, not buffer: I don't
>> need to load a full file in memory before I start processing its
>> lines. Same for the output of a running process: if I just want the
>> last n lines, then accumulating all of the output before going to the
>> end and looking backwards is extremely inefficient, memory-wise.
>> Dissecting the output (splitting it on newlines) and using a ring
>> buffer to keep only the last `n` ones is much better.
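(As an aside, the ring-buffer idea is a one-liner in Python: `collections.deque` with a `maxlen` discards old items automatically as new ones arrive, so only the last `n` lines are ever held in memory.  A sketch -- the `last_n_lines` name is just illustrative:)

```python
from collections import deque
import io

def last_n_lines(lines, n):
    # deque(maxlen=n) keeps only the most recent n items; earlier lines
    # become garbage as soon as they are pushed out of the deque.
    return list(deque(lines, maxlen=n))

# Works on any iterable of lines -- a file object, process output, etc.
demo = io.StringIO("one\ntwo\nthree\nfour\n")
print(last_n_lines(demo, 2))  # ['three\n', 'four\n']
```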
> (Asking for my own edification)

:) Keep in mind that I could be making a mistake, too :)

> Wouldn't finding the last N elements require forcing every thunk in the
> stream (to find its end), thus using more memory than a linked list with
> the same contents?

That's correct, if by more memory you mean "more memory allocated over the 
lifetime of the process".  The key is that we don't need to allocate it all at 
once. In the simple case where we want, for example, just the last element, we 
only need to hold on to one value at a time.

> As long as you don't "hang on to the head" of the
> stream, earlier elements could be reclaimed by GC,

Exactly :)

> but the same applies to a list.

Not exactly: the list needs to be fully built before you iterate over it, and 
that's when the memory problems occur.
So yes, in both cases you can discard the cons cells you've already seen during 
iteration, but in the list case all of those cells need to be constructed and 
kept in memory beforehand.

> In short, I find this conversation interesting, but don't quite
> understand where the memory savings come in :)

Let me try to summarize it in a different way.  In the stream case, you build 
one cons cell at a time, and every time you build a new cons cell the previous 
one becomes available for garbage collection.  With a good GC, there are only a 
few cells physically present in memory at any time (plus the memory it takes to 
keep the last "n" elements, if your desired output is the n-element tail of 
the stream).

In the list case, on the other hand, the full list exists in memory before you 
iterate on it.  Sure, after you iterate on it, the list can be garbage 
collected; but before you iterate on it, all the cons cells need to exist at 
the same time.

Here's a concrete bit of code to demo this (I tried to write this in Emacs 
Lisp, but Emacs kept segfaulting on me, so I gave up and wrote it in Python):

    import sys

    def mkstream(n):
        for k in range(n):
            yield "a" * (k % 25) * 10

    def mklist(n):
        return ["a" * (k % 25) * 10 for k in range(n)]

    def last(seq):
        lastx = None
        for x in seq:
            lastx = x
        return lastx

    def test_list(n):
        # builds the whole list up front, then walks it
        print(last(mklist(n)))

    def test_stream(n):
        # pulls one element at a time from the generator
        print(last(mkstream(n)))

    tests = {"stream": test_stream, "list": test_list}

    # n is just "large"; the exact value only scales the list's footprint
    tests[sys.argv[1]](100 * 1000 * 1000)


When I run this on my machine as “python stream.py stream”, the total memory 
usage doesn't noticeably change.  When I run it as “python stream.py list”, 
python allocates about 25GB of RAM.  This is obviously not exactly the same as 
what would happen on the Emacs Lisp side, but hopefully it's close enough :)
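(For what it's worth, the same contrast can also be measured in-process with 
Python's tracemalloc module, instead of watching the OS's memory counters.  A 
sketch reusing the string shapes from the demo above; the specific n is 
arbitrary, just big enough for the gap to show:)

```python
import tracemalloc

def build_stream(n):
    # a generator: elements are produced lazily, one at a time
    for k in range(n):
        yield "a" * (k % 25) * 10

def build_list(n):
    # a list: every element exists before iteration even starts
    return ["a" * (k % 25) * 10 for k in range(n)]

def peak_kib(make_seq, n):
    # peak Python-heap allocation observed while consuming the sequence
    tracemalloc.start()
    last = None
    for x in make_seq(n):
        last = x
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return peak // 1024

n = 200_000
stream_peak = peak_kib(build_stream, n)
list_peak = peak_kib(build_list, n)
print(f"stream peak: {stream_peak} KiB, list peak: {list_peak} KiB")
```

The stream's peak stays a few KiB no matter how large n gets, while the list's 
peak grows linearly with n.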


Attachment: signature.asc
Description: OpenPGP digital signature
