Re: Critical matrix sparsity

From: siko1056
Subject: Re: Critical matrix sparsity
Date: Thu, 8 Jun 2017 14:36:21 -0700 (PDT)

dariodematties wrote
> Hello People,
> I have a two-dimensional matrix with double elements.
> Is there a sparsity threshold that tells me whether it is better to save
> such a matrix as sparse or as dense?
> Is there a general formula for making this decision?
> Thanks

For me there is no "golden rule" for using sparse matrices. If you know your
matrices are really sparse, you should definitely work with them. Otherwise I
would consider the following:

1. Do I face memory limitations?
a) A dense m*n matrix requires about m*n*sizeof(double) bytes of memory [1]
b) A sparse m*n matrix (stored in compressed-column form) requires about
[(n+1) + nnz(matrix)]*sizeof(int) + nnz(matrix)*sizeof(double) bytes of
memory [2,3]

=> So the nonzero entries should make up less than about 50% of the matrix
(less than that if sizeof(int) is only 4 bytes, where the break-even moves
toward 2/3). If your matrix is really sparse, the number of columns n should
also be smaller than the number of rows m (transpose if not); otherwise you
waste space on column pointers for empty columns.
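The break-even point implied by the two byte counts above can be checked with a quick sketch (Python here purely for illustration; `SIZEOF_INT = 8` assumes a 64-bit index type, which is an assumption about your build, not a measured value):

```python
# Byte-count formulas from the text; sizes are assumptions, not measurements.
SIZEOF_DOUBLE = 8
SIZEOF_INT = 8  # assume 8-byte indices; some builds use 4-byte indices


def dense_bytes(m, n):
    """Memory for a dense m*n matrix of doubles."""
    return m * n * SIZEOF_DOUBLE


def sparse_bytes(n, nnz):
    """Memory for a compressed-column sparse matrix with n columns."""
    return (n + 1 + nnz) * SIZEOF_INT + nnz * SIZEOF_DOUBLE


m = n = 1000
# Solve sparse_bytes(n, k) < dense_bytes(m, n) for the nonzero count k:
# 8*(n + 1 + k) + 8*k < 8*m*n  =>  k < (m*n - n - 1) / 2
break_even = (m * n - n - 1) // 2
print(break_even / (m * n))  # → 0.499499, i.e. just under 50% density
```

With 4-byte indices the same inequality gives k < (2*m*n - n - 1)/3, i.e. a break-even near 2/3 density, so "below 50%" is the safe rule of thumb either way.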

2. Do I often extract parts of the matrix or insert elements?
Randomly extracting or inserting parts of a sparse matrix can be a real
performance killer! If you only intend to call system functions on the
sparse matrices (like factorizations or solvers for systems of equations,
which are optimized for those data structures), you will see the desired
effect. Otherwise you would be wise to profile your code to detect
operations that slow down the calculation through improper memory access.
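To see why random insertion hurts, here is a hypothetical toy compressed-column container (Python, all names invented for illustration): every single-element insert shifts the tails of the index and value arrays and updates the whole column-pointer array.

```python
class TinyCSC:
    """Toy compressed-column (CSC) sparse matrix, for illustration only."""

    def __init__(self, m, n):
        self.m, self.n = m, n
        self.colptr = [0] * (n + 1)  # start offset of each column, length n+1
        self.rowidx = []             # row index of each stored value
        self.data = []               # the stored nonzero values

    def insert(self, i, j, v):
        """Insert value v at (i, j): O(nnz) array shifting in the worst case."""
        pos = self.colptr[j + 1]          # append at the end of column j
        self.rowidx.insert(pos, i)        # shifts every later row index
        self.data.insert(pos, v)          # shifts every later value
        for k in range(j + 1, self.n + 1):
            self.colptr[k] += 1           # all later column pointers move

    def nnz(self):
        return len(self.data)
```

Octave's `sparse (i, j, v, m, n)` sidesteps this cost by building the whole structure from triplets in one pass, which is why batch construction is preferred over element-wise assignment into an existing sparse matrix.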


[1]: sizeof(double) is usually 8 bytes
[2]: nnz(matrix) is the number of nonzero matrix elements; sizeof(int) is
usually 4 or 8 bytes
[3]:

Sent from the Octave - General mailing list archive.