help-octave

Re: memory exhausted when reading 129M file


From: Przemek Klosowski
Subject: Re: memory exhausted when reading 129M file
Date: Mon, 13 Aug 2012 17:45:30 -0400
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:13.0) Gecko/20120615 Thunderbird/13.0.1

On 08/13/2012 04:25 PM, Zheng, Xin (NIH) [C] wrote:
> fid=fopen('filename')
> data=textscan(fid, '%s%f%s%f%s%f%s%f%s%f%s%f%s%f%s%f%s%f%s%f%s%f');

I haven't checked your example against the larger data, but here's a rough estimate: since your file is about 120 MB and the sample data you provided runs about 190 characters per line, you should have roughly 630,000 lines of 11 numbers each, i.e. about 7 million doubles, which at 8 bytes apiece should take about 55 MB of memory. Is that about right?
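In Octave terms, that back-of-envelope check (assuming ~190 characters and 11 numeric fields per line, as in your sample) is just:

nlines = 120e6 / 190          % about 630,000 lines
nvals  = nlines * 11          % about 7 million doubles
mbytes = nvals * 8 / 1e6      % about 55 MB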

If so, then it's the textscan implementation that is somehow using more memory than it needs to. Could you try a simpler format, e.g.

data=textscan(fid, "%s%f")
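
(If you reuse the same fid for a second scan, rewind it first with

frewind(fid)

so textscan starts from the beginning of the file again.)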

and try preallocating the output cell array (note that the %s column should be a cell column rather than a numeric one):

data={cell(7e6,1),zeros(7e6,1)}
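
If that doesn't help, reading the file in chunks keeps textscan's peak memory down. Here is a rough sketch, assuming the two-column '%s %f' format above, the ~7e6-row estimate, and textscan's optional repeat-count argument; 'filename' is a placeholder:

n     = 7e6;                  % estimated total rows
chunk = 1e5;                  % rows per textscan call
strs  = cell (n, 1);          % preallocated %s column
nums  = zeros (n, 1);         % preallocated %f column
fid   = fopen ('filename', 'r');
row   = 0;
while (! feof (fid))
  c = textscan (fid, '%s %f', chunk);  % read at most 'chunk' rows
  k = numel (c{2});
  if (k == 0)
    break;                    % format stopped matching; avoid looping forever
  endif
  strs(row+1 : row+k) = c{1};
  nums(row+1 : row+k) = c{2};
  row += k;
endwhile
fclose (fid);
strs = strs(1:row);           % trim if the estimate was high
nums = nums(1:row);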

