From: Carlo de Falco
Subject: [Octave-bug-tracker] [bug #47415] out of memory negating a permutation matrix
Date: Sun, 27 Mar 2016 20:56:58 +0000
User-agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.9; rv:45.0) Gecko/20100101 Firefox/45.0

Follow-up Comment #26, bug #47415 (project octave):

Lachlan, 

You are right that diagonal matrices are not
a specialization of sparse matrices; sorry for the confusion.

As for removing permutation matrices, I really hope we
don't choose to go that way!

The reason for adding that specialization was, I guess,
exactly the use case that led me to discover this bug,
i.e., storing the factorization (with reordering)
of a sparse matrix for later use:


[L, U, P, Q, R] = lu (A);              % factorize once, outside the loop
for ii = 1:large_number
  res = compute_res (ii);              % new right-hand side at each step
  x = Q * (U \ (L \ (P * (R \ res)))); % reuse the stored factors
end


which is quite common when solving nonlinear or time-dependent PDEs.

Actually, I think this (factorization algorithms returning a reordering)
is almost the only situation in which the average user will encounter
a permutation matrix.
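
For instance, a minimal sketch of that situation, assuming P and Q come
back as Octave's permutation-matrix type (which is what this bug is
about); the speye term is only there to make the hypothetical test
matrix nonsingular, and typeinfo reports the internal type:

A = sprandn (100, 100, 0.02) + speye (100); % hypothetical sparse test matrix
[L, U, P, Q, R] = lu (A);
typeinfo (P)  % expected: "permutation matrix" in Octave, not "sparse matrix"
typeinfo (Q)  % likewise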

Running the same factorization in Matlab, I get the following:


>> A = sprandn (100, 100, .02);
>> [L, U, P, Q, R] = lu (A);
>> issparse (P)
ans =
     1
>> issparse (Q)
ans =
     1
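
And negation would be expected to stay sparse there as well (this check
is not from the original session, but follows from Matlab's sparse
semantics: zeros are implicit, so the zeros of -P are +0):

>> issparse (-P)
ans =
     1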


So Matlab directly produces sparse matrices for the reorderings.
For strict compatibility, therefore, -P and -Q should be sparse
rather than full (and their zeros should be +0 rather than -0).
No one would be really surprised if Octave silently converted the
permutation matrices to sparse, whereas some code may be negatively
impacted if they are converted to full and trigger an out-of-memory
error.
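
Here is a minimal sketch of that failure mode and a possible workaround
(hypothetical size n; indexing an identity matrix is one way Octave
builds a permutation matrix, and converting with sparse () before
negating keeps O(n) storage):

n = 1e5;
P = eye (n)(randperm (n), :);  % permutation matrix, O(n) storage
% -P                           % would convert to a full n-by-n matrix:
%                              % ~8*n^2 bytes = 80 GB, hence out of memory
Pn = -sparse (P);              % workaround: convert to sparse first, still O(n)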




    _______________________________________________________

Reply to this item at:

  <http://savannah.gnu.org/bugs/?47415>
