

From: David Spies
Subject: Re: Fwd: Considering adding a "dispatch" function for compile-time polymorphism
Date: Tue, 5 Aug 2014 23:50:07 -0600

NDArray is not a valid substitute for Sparse, DiagArray2, or PermMatrix
(as all of these are "sparse" in the sense that they are mostly zeros
and so it's necessary to use the proper nz-iterator types).  As soon as
these matrices exceed the bounds of octave_idx_type, they can no longer
be converted to an NDArray, but long before that point, converting can
result in hideously inefficient behavior and can cause Octave to consume
all of a machine's memory and crash.
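A back-of-the-envelope sketch of the blow-up involved (pure Python, purely illustrative; the 8-byte-per-double figure is an assumption about a typical build):

```python
n = 100_000

# A DiagArray2 or PermMatrix of order n stores O(n) data:
sparse_bytes = n * 8                 # one double per nonzero entry
# Converting to a full NDArray stores all n*n entries, zeros included:
dense_bytes = n * n * 8

print(sparse_bytes)                  # 800 KB -- harmless
print(dense_bytes)                   # 80 GB -- enough to exhaust most machines
print(dense_bytes // sparse_bytes)   # blow-up factor of n
```

The factor of n is the point: the conversion itself is what exhausts memory, long before octave_idx_type overflows.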

> I think the same could be said of the Range type.

Perhaps, and for that reason I considered adding Range to dispatch when I found out about it; I talked to Jordi about it as well. But ultimately I didn't bother, because I can't think of any application in which a converted Range constitutes the bulk of the memory consumption. Since a range is one-dimensional, anything that interacts with it generally must be at least as large.
Conversely, suppose I have an nx1 vector "u" and an nx3 matrix "w", where n is very large.  If I want to multiply every (3-entry) row of w by the corresponding entry of u, I might write
diag(u) * w
to accomplish that.  In doing so, I'm assuming that diag(u) has an efficient (diagonal) implementation rather than being expanded to a full nxn matrix.
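Sketching the two costs in plain Python (an analogy, not Octave code; the small n is just to keep it runnable):

```python
n = 5  # small for illustration; imagine n in the millions
u = [float(i + 1) for i in range(n)]        # nx1 vector
w = [[1.0, 2.0, 3.0] for _ in range(n)]     # nx3 matrix

# Efficient: scale row i of w by u[i] directly -- O(n) work and memory,
# which is what an efficient diag(u) * w should reduce to.
scaled = [[u[i] * x for x in row] for i, row in enumerate(w)]

# Naive: materialize diag(u) as a full nxn matrix first -- O(n^2)
# memory spent on zeros before the multiply even starts.
diag_u = [[u[i] if i == j else 0.0 for j in range(n)] for i in range(n)]
naive = [[sum(diag_u[i][k] * w[k][j] for k in range(n)) for j in range(3)]
         for i in range(n)]

print(scaled == naive)  # True -- same result, wildly different cost
```

Both paths produce identical results; only the dense detour pays the O(n^2) price, which is why dispatching on the diagonal type matters.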

But I can't think of any instance where someone would write 1:n with n large enough that [1:n] consumes a significant amount of memory, and not ultimately have to use at least that much memory anyway.
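Python's range type makes the same trade-off as Octave's Range, so it illustrates the argument (an analogy, not Octave internals):

```python
import sys

n = 10_000_000
r = range(1, n + 1)      # stored as (start, stop, step): O(1) memory
print(sys.getsizeof(r))  # a few dozen bytes, independent of n

# Materializing it as list(r) would cost O(n) memory -- but any
# computation that actually touches all n elements does O(n) work
# regardless, so the conversion is rarely the dominant cost:
total = sum(r)           # iterates all n elements either way
print(total == n * (n + 1) // 2)  # True
```

The unconverted form is tiny, but the moment you consume the range you pay O(n) anyway, which is why converting a Range is merely wasteful rather than catastrophic.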
