
[Octave-bug-tracker] [bug #30468] dlmread - performance problems


From: Alexander Renz
Subject: [Octave-bug-tracker] [bug #30468] dlmread - performance problems
Date: Sat, 17 Jul 2010 09:53:50 +0000
User-agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; de; rv:1.9.2.6) Gecko/20100625 Firefox/3.6.6

URL:
  <http://savannah.gnu.org/bugs/?30468>

                 Summary: dlmread - performance problems 
                 Project: GNU Octave
            Submitted by: veloci
            Submitted on: Sat 17 Jul 2010 09:53:49 GMT
                Category: None
                Severity: 3 - Normal
              Item Group: None
                  Status: None
             Assigned to: None
         Originator Name: 
        Originator Email: 
             Open/Closed: Open
         Discussion Lock: Any
                 Release: 3.2.4
        Operating System: Microsoft Windows

    _______________________________________________________

Details:

The dlmread function seems to have a performance problem when used for
analysing large data files (number of rows > 100000). The processing time
appears to increase exponentially compared to the time needed for smaller
data sets.
Comparison: 
In my case I've used dlmread for data files with a fixed number of header
rows followed by >300000 rows / 5 columns separated by tabs. From these I've
read only columns 2-5, starting after the header rows and continuing to the
end of the file. For that I calculate a range vector in advance and allocate
memory in a zero matrix (data = zeros(datarange)). The process is started
with

data = dlmread(filename,'\t',r); 
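For reference, the range vector r used above might be constructed as follows.
This is only a sketch of the setup described in the report; the header length,
row count, and file name are assumptions, not values from the actual files:

```octave
filename   = "bigdata.txt";   # assumed file name
headerrows = 3;               # assumed number of header lines
nrows      = 300000;          # assumed number of data rows

## dlmread ranges are zero-based: [R0 C0 R1 C1].
## Columns 2-5 are indices 1..4; rows start after the header.
r = [headerrows, 1, headerrows + nrows - 1, 4];

data = zeros(nrows, 4);       # preallocated zero matrix, as described
data = dlmread(filename, "\t", r);
```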

I have tried this approach in Octave and Matlab. In Matlab I get results for
the whole data set after a couple of moments. In Octave the process starts
and doesn't seem to end.
I suspected a problem with dlmread, so I tried changing the parameters
(separator, range vector). The result was that Octave's dlmread works fast
for smaller data sets.
For now I'm using a workaround to be able to read the big data sets: I
process the data in smaller parts and combine them piece by piece in the
preallocated data matrix.
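The chunked workaround described above could be sketched roughly as follows.
The chunk size, header length, row count, and file name are assumptions for
illustration only:

```octave
filename   = "bigdata.txt";   # assumed file name
headerrows = 3;               # assumed number of header lines
totalrows  = 300000;          # assumed number of data rows
chunksize  = 10000;           # rows read per dlmread call

data = zeros(totalrows, 4);   # preallocate for columns 2-5

for r0 = 0:chunksize:(totalrows - 1)
  r1 = min(r0 + chunksize - 1, totalrows - 1);
  ## dlmread ranges are zero-based [R0 C0 R1 C1];
  ## offset the row indices by the header length.
  range = [headerrows + r0, 1, headerrows + r1, 4];
  data(r0+1 : r1+1, :) = dlmread(filename, "\t", range);
endfor
```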

(reference:
http://octave.1599824.n4.nabble.com/reading-data-from-ascii-files-td2291167.html#a2291167)





    _______________________________________________________

Reply to this item at:

  <http://savannah.gnu.org/bugs/?30468>

_______________________________________________
  Message sent via Savannah
  http://savannah.gnu.org/



