
Re: How to make GNU Guile more successful

From: Freja Nordsiek
Subject: Re: How to make GNU Guile more successful
Date: Sun, 16 Jul 2017 10:11:57 +0000

I checked the implementation of bytevectors and SRFI-4 vectors in Guile and they are
definitely not scanned for pointers. But I would say hacking them is not a good
general solution for this problem. They are good and natural data structures
for large arrays of numerical data: standard signed/unsigned integers of
various fixed sizes, IEEE floating-point numbers, or structures/unions of
these types. Using them for anything other than that, or for strings/byte
arrays, could be error-prone, messy, and poor-performing.
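As a minimal sketch of what I mean by "good and natural" use: SRFI-4 homogeneous vectors and bytevectors store raw numeric data contiguously, with no embedded Scheme pointers for the GC to trace. (The sizes and values below are just illustrative.)

```scheme
;; Sketch: pointerless numeric storage in Guile.
(use-modules (srfi srfi-4)
             (rnrs bytevectors))

;; A million IEEE doubles in one flat, pointer-free allocation.
(define samples (make-f64vector 1000000 0.0))
(f64vector-set! samples 0 3.14)

;; The same kind of storage viewed as raw bytes.
(define buf (make-bytevector 16 0))
(bytevector-ieee-double-native-set! buf 0 2.5)
(display (bytevector-ieee-double-native-ref buf 0)) ; 2.5
(newline)
```

Since the GC never scans the contents of `samples` or `buf`, growing them to gigabytes adds allocation cost but not per-collection scanning cost.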

Freja Nordsiek

On July 16, 2017 11:18:18 AM GMT+02:00, Marko Rauhamaa <address@hidden> wrote:
>Freja Nordsiek <address@hidden>:
>> If I was to hazard a reason for why Guile gets very slow when loading
>> 20 GB or more (may or may not be related to it being buggy and
>> crashy), my guesses would be a lot of the data when loaded into Guile
>> was allocated such that the GC scans it for pointers (using
>> scm_gc_malloc instead of scm_gc_malloc_pointerless) which would
>> increase the amount of memory the GC needs to scan every time it
>Good point!
>If you didn't do any C programming, what kind of native Guile data
>structures are good for such large random-access storage? At least
>arrays haven't specifically been documented for such GC optimization:
>Maybe bytevectors would do: <URL:
>Of course constantly encoding to and decoding from a bytevector using
>scheme code might be very slow without the help of some binary bulk
>formatting facilities for the data records.
