While writing a function that uses lib_file_io to load the content of an entire file into an array of strings, I ran into a severe performance problem that I am having trouble narrowing down.
The first version did not do blocked reads and resized the array after each row was read. That was terribly slow, so now I preallocate a block of 1000 elements and resize only every 1000 lines, giving the version you can see linked above.
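For reference, here is a rough C sketch of the growth strategy I mean. Since lib_file_io's API isn't shown here, stdio's `fgets` stands in for the real read call, and `BLOCK`, `load_lines`, and the fixed-size line buffer are assumptions of the sketch, not my actual code:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define BLOCK 1000  /* grow the array in blocks of 1000 rows */

/* Load every line of a file into a heap-allocated array of strings.
   Stores the line count in *count; returns NULL if the file cannot be opened.
   Lines longer than the buffer would be split in this sketch. */
char **load_lines(const char *path, size_t *count)
{
    FILE *fp = fopen(path, "r");
    if (!fp) return NULL;

    size_t cap = BLOCK, n = 0;
    char **rows = malloc(cap * sizeof *rows);
    char buf[4096];

    while (rows && fgets(buf, sizeof buf, fp)) {
        if (n == cap) {                      /* resize only every BLOCK rows */
            cap += BLOCK;
            rows = realloc(rows, cap * sizeof *rows);
            if (!rows) break;                /* out of memory: bail out */
        }
        buf[strcspn(buf, "\n")] = '\0';      /* strip trailing newline */
        size_t len = strlen(buf);
        char *copy = malloc(len + 1);        /* own a copy of the row */
        memcpy(copy, buf, len + 1);
        rows[n++] = copy;
    }
    fclose(fp);
    *count = n;
    return rows;
}
```

With this pattern a resize happens only once per 1000 rows, so the reallocation cost should be negligible for a 14000-row file.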
I was testing with a text file containing almost 14000 rows, and on my laptop it takes many minutes to load. One would expect a file that small to load in no noticeable time at all.
One interesting aspect is that each row takes longer and longer to load as the loading proceeds. I have no explanation for that. It is not the resizing that takes the time: I measured the time taken to load each block of rows, excluding the array resize.
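For what it's worth, this is roughly how I timed each block, so the per-block trend is visible separately from the resize. The helper below is a simplified stand-in for my actual measurement code, using the standard `clock()`:

```c
#include <time.h>

/* Time a single region of work with clock(); anything outside the
   callback (such as the array resize) is excluded from the measurement. */
double time_block(void (*work)(void))
{
    clock_t t0 = clock();
    work();
    return (double)(clock() - t0) / CLOCKS_PER_SEC;
}
```

Calling this once per 1000-row block and printing the results is what shows the steadily increasing per-block times.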
Any ideas?
Regards,
Elias