From: Dan Sebald
Subject: [Octave-bug-tracker] [bug #50603] save -mat fails for large (# of bytes) variables
Date: Wed, 22 Mar 2017 03:31:11 -0400 (EDT)
User-agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:52.0) Gecko/20100101 Firefox/52.0

Follow-up Comment #3, bug #50603 (project octave):

Having just tried the -v7 option on a machine with 8 GB of memory, I suggest
being very careful to have no other applications running, so that almost all
memory is free.  Otherwise, the system will get caught in a swap-memory trap
and have a really difficult time getting out of it.

First, let me say that the compression option should in theory be faster,
not slower.  The reason is that, as I pointed out in a previous post, barely
any CPU is used for the -mat option and nearly all the time is spent with
the disk churning away.  Compression should use the full bandwidth of the
CPU(s) to produce fewer bytes that have to be written to disk, and hence
less disk activity.

However, my guess is that the -v7 compression may not be done quite right in
Octave, or at least it isn't done in a memory-efficient way.  And it should
be memory-efficient, because the large data elements are exactly what we'd
like compressed.
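
To illustrate what I mean, here is a purely hypothetical sketch using zlib
directly (not something taken from Octave's sources): compressing the whole
variable in a single call needs a destination buffer roughly the size of the
variable itself, so a 4 GB variable means on the order of 8 GB live at once.

// Hypothetical illustration only (not Octave's actual code path).
// Compressing the whole variable in one shot needs a second buffer of
// roughly the same size as the input.
#include <zlib.h>
#include <cstddef>
#include <vector>

std::vector<char>
compress_all_at_once (const char *data, std::size_t len)
{
  uLongf out_len = compressBound (len);   // ~len plus a small overhead
  std::vector<char> out (out_len);        // the big extra allocation

  compress2 (reinterpret_cast<Bytef *> (out.data ()), &out_len,
             reinterpret_cast<const Bytef *> (data), len,
             Z_DEFAULT_COMPRESSION);

  out.resize (out_len);
  return out;
}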

When I ran with the -v7 option, because the x variable was 4 GB, it seemed
like Octave wanted to do the compression with some large data buffer to work
with, so it needed far more memory than my machine had available.  And I
think because the needed chunk of memory was so big, it wasn't a little bit
of memory that went into swap, it was 50%.  This brought the system to a
standstill.  Using Ctrl-C on Octave was problematic as well, because Octave
then attempts to store the variables to octave-workspace, which of course is
big:

-rw-r--r--. 1 sebald sebald 3115411537 Mar 22 01:46 octave-workspace

Killing the process did clear big swaths of memory, but everything else was
still stuck in swap memory and remained slow.

It may be worth looking at the -v7 compression and seeing how
memory-intensive it is.  If it is using a lot of memory (on the order of the
elements being saved), then it would definitely be worth attempting an
algorithm that works on smaller chunks of data at a time while streaming to
disk, thereby keeping the extra memory required for compression small.
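
As a rough sketch of what I have in mind (again hand-written zlib, meant as
an illustration rather than a patch, with made-up names), the streaming
deflate interface keeps the extra memory at a fixed, small working-buffer
size no matter how large the variable is:

// Minimal streaming sketch (assumed names, not Octave's actual save code):
// compress a large in-memory buffer to disk chunk by chunk, so the only
// extra memory is the fixed-size working buffer.
#include <zlib.h>
#include <algorithm>
#include <cstddef>
#include <cstdio>
#include <vector>

bool
save_compressed (const char *fname, const char *data, std::size_t len)
{
  const std::size_t CHUNK = 256 * 1024;   // 256 kB scratch buffer
  std::vector<char> out (CHUNK);

  std::FILE *fp = std::fopen (fname, "wb");
  if (! fp)
    return false;

  z_stream strm = {};                     // zalloc/zfree/opaque all Z_NULL
  if (deflateInit (&strm, Z_DEFAULT_COMPRESSION) != Z_OK)
    {
      std::fclose (fp);
      return false;
    }

  std::size_t pos = 0;
  int flush;
  do
    {
      // Feed the next input chunk; ask for Z_FINISH on the last one.
      std::size_t n = std::min (CHUNK, len - pos);
      strm.next_in = reinterpret_cast<Bytef *> (const_cast<char *> (data + pos));
      strm.avail_in = static_cast<uInt> (n);
      pos += n;
      flush = (pos == len) ? Z_FINISH : Z_NO_FLUSH;

      // Drain the compressed output to disk as the scratch buffer fills.
      do
        {
          strm.next_out = reinterpret_cast<Bytef *> (out.data ());
          strm.avail_out = static_cast<uInt> (CHUNK);
          deflate (&strm, flush);
          std::fwrite (out.data (), 1, CHUNK - strm.avail_out, fp);
        }
      while (strm.avail_out == 0);
    }
  while (flush != Z_FINISH);

  deflateEnd (&strm);
  return std::fclose (fp) == 0;
}

With something like that, the extra memory is just the scratch buffers, and
the disk only ever sees the compressed bytes, so it should be both smaller
in memory and faster on a disk-bound save.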

    _______________________________________________________

Reply to this item at:

  <http://savannah.gnu.org/bugs/?50603>

_______________________________________________
  Message sent via/by Savannah
  http://savannah.gnu.org/



