Re: [bug-gawk] Make awk data structure reside in memory
From: david kerns
Subject: Re: [bug-gawk] Make awk data structure reside in memory
Date: Fri, 12 Apr 2019 08:07:59 -0700
If you're running on Linux (and you have the memory available), you might
consider using /dev/shm (or another RAM disk):

awk 'BEGIN{f="/dev/shm/fred";for(i=0;i<100000;i++)print i, rand() > f;close(f)}'   # store your DB/dataset

time awk 'BEGIN{f="/dev/shm/fred";while((getline l < f) > 0){split(l,a);db[a[1]]=a[2]};close(f)} # other processing here'
Re-reading from a RAM disk should be significantly faster than from a normal
disk (assuming you have the RAM and don't start swapping).
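The idea above can be sketched end to end as a small script: build the dataset
once in a RAM-backed file, then let each awk invocation load it from there.
The filename and the record count are illustrative; the fallback to /tmp and
the use of gawk's length(array) are my assumptions, not part of the original
suggestion.

```shell
#!/bin/sh
# Sketch: share one dataset across awk runs via a RAM-backed file.
# Assumes /dev/shm exists (Linux tmpfs); falls back to /tmp otherwise.
shm=/dev/shm
[ -d "$shm" ] || shm=/tmp
db="$shm/fred.$$"          # hypothetical filename, per-process suffix

# Build the dataset once: 100000 "key value" records.
awk 'BEGIN { srand(42); for (i = 0; i < 100000; i++) print i, rand() }' > "$db"

# Each subsequent awk process re-loads it from RAM rather than disk.
# length(db) on an array is a gawk extension (also in mawk).
result=$(awk -v f="$db" 'BEGIN {
    while ((getline line < f) > 0) { split(line, a); db[a[1]] = a[2] }
    close(f)
    print "loaded", length(db), "records"
}')
echo "$result"

rm -f "$db"
```

Each run still pays the cost of parsing the file into an awk array; what the
RAM disk removes is the disk I/O, which is usually the dominant cost for a
large dataset read repeatedly.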
On Fri, Apr 12, 2019 at 7:20 AM Peng Yu <address@hidden> wrote:
> Hi,
>
> I need to load a large dataset into memory before I can process some
> other small data. This is inefficient if I need to reload the large
> data every time I process a different small dataset. Is there a way to
> keep the large data in memory so that different awk processes can read
> it without having to reload it? Thanks.
>
> --
> Regards,
> Peng
>
>