[Octave-bug-tracker] [bug #53899] [octave forge] (io) Octave crashes when writing large dbf files with dbfwrite
Sun, 28 Oct 2018 10:03:46 -0400 (EDT)
Update of bug #53899 (project octave):
Priority: 3 - Low => 4
Follow-up Comment #8:
I've pushed your patch, after a few edits, here:
<http://hg.code.sf.net/p/octave/io/rev/19ede9372f7b>, crediting you and adding
you to the copyright.
As announced, I've amended the code that writes the data blob so that it is
written in ~100 MB chunks. I did this after finding that writing a
heterogeneous 10,000,000x10 cell array (code to create it is attached as
make_big_aa.m) produced dbf files missing the last 6 % or so of the data, at
least on my Core i5 system with 8 GB RAM and an SSD.
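For illustration, chunked writing of a large byte blob could look roughly like
the sketch below. This is a hypothetical minimal example, not the actual
dbfwrite code; "data" (a uint8 vector holding the serialized records) and
"fid" (an open file handle) are assumed names:

```octave
## Assumed: data is a uint8 vector, fid an open writable file handle
chunk_size = 100 * 1024 * 1024;          # ~100 MB per fwrite call
nbytes = numel (data);
pos = 1;
while (pos <= nbytes)
  last = min (pos + chunk_size - 1, nbytes);
  count = fwrite (fid, data(pos:last), "uint8");
  ## Guard against silent short writes, the symptom seen with one big fwrite
  if (count != last - pos + 1)
    error ("chunked write: short write at byte %d", pos);
  endif
  pos = last + 1;
endwhile
```

Writing in bounded chunks keeps each fwrite call's buffer small, which avoids
the truncation seen when handing the whole multi-GB blob to one call.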
Thanks to your vectorized code (somewhat amended by me), the writing time
(tic...toc) for that big array went down from 4182.68 seconds to 532.211
seconds, not bad :-)
After that I've made various other improvements in the code.
Among other things, I enclosed most of the writing code in an unwind_protect
block so that the file can be properly closed in case of errors. I suppose the
motive behind your adding an "fclose (all)" statement was that you were
plagued by zombie file handles that could no longer be accessed after dbfwrite
"crashed", let alone having any option to delete those files until after
restarting Octave.
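The pattern is roughly as sketched below; this is a minimal illustration of
unwind_protect, not the actual io package code, and "fname" / "data" are
assumed placeholder names:

```octave
## Assumed: fname is the output file name, data a uint8 vector to write
fid = fopen (fname, "w");
unwind_protect
  ## ... write header and data records here; any error raised in this
  ## section still runs the cleanup block below
  fwrite (fid, data, "uint8");
unwind_protect_cleanup
  ## Always executed, so no zombie file handle survives an error
  fclose (fid);
end_unwind_protect
```

This way the handle is released even on error, so there is no need for a
blanket "fclose (all)" that would also close unrelated files.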
Again thanks very much for your contribution.
Additional Item Attachment:
File name: make_big_aa.m  Size: 0 KB
Reply to this item at:
Message sent via Savannah
- [Octave-bug-tracker] [bug #53899] [octave forge] (io) Octave crashes when writing large dbf files with dbfwrite, Philip Nienhuis <=