Re: About diagonal matrices
From: Jaroslav Hajek
Subject: Re: About diagonal matrices
Date: Mon, 2 Mar 2009 10:19:04 +0100
On Mon, Mar 2, 2009 at 9:43 AM, Daniel J Sebald <address@hidden> wrote:
> Jaroslav Hajek wrote:
>
>>>> Yes. That's the simple case. Now please describe how to multiply two
>>>> sparse matrices.
>>>> Or, to get you something simple, how you multiply a full * sparse
>>>> matrix.
>>>
>>> These are all judgements and why I said "no matter how one defines",
>>> but I'd probably rule out the mixture of the two and think of it as
>>> either full is converted to sparse or vice versa. Say S is a sparse
>>> matrix and F is a full matrix. I'm gathering that you feel
>>>
>>> sparse(F) * S
>>>
>>> and
>>>
>>> F * full(S)
>>>
>>> don't necessarily yield the same result. I can see that being useful.
>>>
>>
>>
>> If they don't, then probably the whole effort is just pointless - what
>> problem does it solve?
>
> Guess I'm not following anymore either. Didn't the thread start with John
> noticing that the same matrix, represented as sparse or full, produced two
> different results? Then you argued they should be left as is?
>
Sorry, maybe I confused things a little more. John noticed that a
diagonal matrix can produce slightly different results than an
equivalent full matrix in the same place, and requested that the issue
be fixed. I objected that the behavior is mathematically proper and is
common with sparse matrices as well. The discussion then turned to
whether that behavior is actually correct.
If you agree that a sparse (or diagonal) matrix *can* produce
different results, then I think you have no problem to solve, because
that's the current situation.
If you don't, then I misunderstood the sentence "I can see that being useful."
>
>> But let's say it's OK. I was just asking how
>> do you imagine the implementation of sparse (with your general default
>> sparse value) * full matrix product.
>
> The full matrix is first converted to sparse, as I explained last
> time.
Well, that only makes things worse, doesn't it? See below. Besides,
that would be so terribly inefficient that I would strongly oppose it
just on the basis of performance.
>
>> Or sparse * sparse, with possibly two different default values.
>
> The sparse value for matrix M1 is, say, 0. The sparse value for matrix M2
> is Inf. The sparse value for M1.*M2 is 0 * Inf = NaN, just as with regular
> Octave math; the sparse set is S1 & S2, where & is intersection. For M1*M2
> the sparse values don't contribute to the inner product sums, similar to the
> current behavior with zeros.
>
What? Why do you think they don't contribute? They can be *any*
numbers, if I understood correctly, and even if they are limited
to 0, +/- Inf, and NaN, they still *do* contribute if you want results
equivalent to those with full matrices.
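To spell out why nonzero default values cannot simply be dropped from inner products: with a hypothetical default value d, a row that stores no explicit entries still represents a full row of d's, and those d's multiply real numbers in the other factor. A minimal sketch (the representation is made up for illustration, not Octave's):

```python
def dense_row_dot(row, col):
    # Reference computation over the fully expanded row.
    return sum(a * b for a, b in zip(row, col))

# A "sparse" row with default value 2.0 and no stored entries
# actually represents [2.0, 2.0, 2.0].
default = 2.0
n = 3
implicit_row = [default] * n
col = [1.0, 10.0, 100.0]

# Skipping the defaults (as one does with zeros) would give 0.0,
# but the mathematically correct answer includes every default:
print(dense_row_dot(implicit_row, col))  # 222.0
print(default * sum(col))                # 222.0: each default contributes
```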
>
>> Try to
>> lay it down on a paper, I think you'll see it's really not that easy.
>> Especially if you want to catch all the possible cases with NaNs and
>> Infs.
>
> I'm trying to resolve what seemed like a conflict with discussion about NaNs
> "disappearing", as David described it.
>
Yes, OK, I see that. I'm just trying to point out that if you want to
solve the problem of "disappearing NaNs" completely, i.e. that sparse
matrices behave identically to full matrices w.r.t. NaNs and Infs, it
will get terribly complex.
>
>> Yes, in this simple case, it looks nice, but it really gets more
>> complicated when you dive in. Besides, none of our external software
>> (UMFPACK) can work with such matrices.
>
> It's more about presenting the results to the user than a complete rewrite
> of the software. The underlying set of NNZ (NNS) indices would remain the
> same.
>
Huh? Suppose you have a normal sparse matrix, A, with default sparse
value = 0, and A2, with the same sparsity pattern but default sparse
value = 2. Then of course A2 \ b is very different from A \ b!
This is what I've been referring to. UMFPACK cannot solve the A2
problem directly, so you'd need to somehow convert it to a standard
one. Here, the SMW formula will save you, but you will have problems
with getting the LU factorization, for instance.
Of course you can convert A2 to the default sparse form (i.e. with
default sparse value = 0), but then you may as well do that always.
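The SMW (Sherman-Morrison-Woodbury) remark can be spelled out: a matrix with default sparse value d is A2 = A + d*e*e' with e = ones(n,1), a rank-one update of the standard sparse matrix A, so A2 \ b can be computed from solves with A alone. A small pure-Python check; the tiny Cramer's-rule solver stands in for UMFPACK, and the matrices and d are made-up data:

```python
def solve2(A, b):
    # Direct 2x2 solve via Cramer's rule (stand-in for a real solver).
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - b[1] * A[0][1]) / det,
            (b[1] * A[0][0] - b[0] * A[1][0]) / det]

d = 2.0                       # hypothetical default sparse value
A = [[4.0, 0.0], [1.0, 3.0]]  # the stored, default-0 part
b = [8.0, 8.0]

# A2 = A + d * e * e', a rank-one update (e = column of ones).
A2 = [[A[i][j] + d for j in range(2)] for i in range(2)]

# Sherman-Morrison: A2 \ b = y - z * (e' * y) / (1 + e' * z),
# where y = A \ b and z = A \ (d * e).
y = solve2(A, b)
z = solve2(A, [d, d])
scale = sum(y) / (1.0 + sum(z))
x_smw = [yi - zi * scale for yi, zi in zip(y, z)]

print(x_smw)          # [1.0, 1.0]
print(solve2(A2, b))  # direct solve on A2 agrees
```

This rescues the linear solve, but as noted it does not give you an LU factorization of A2 itself.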
>
>>>>>>> The value of the sparse matrix is when it comes time to use it in
>>>>>>> operations
>>>>>>> where a full matrix would consume the CPU. So it does make sense to
>>>>>>> keep
>>>>>>> track of the sparse set.
>>>>>>>
>>>>>>
>>>>>>
>>>>>> Huh? I don't understand.
>>>>>
>>>>> What is the purpose of sparse sets? Represent large matrices that are
>>>>> mostly zero (or some default value, I argue). Also, when solving
>>>>> matrix
>>>>> operations or systems of equations that are sparse, much of the
>>>>> computations
>>>>> can be discarded.
>>>>>
>>>>
>>>>
>>>> OK, that's the general idea of sparse matrices. How does it relate to
>>>> our specific problem?
>>>
>>> Just trying to agree with you, i.e., we should keep track of the sparse
>>> elements of the matrix.
>>>
>>
>> But we do. Or, to prevent more confusion, what are "sparse elements"?
>
> If it is a big matrix with a bunch of zeros, the zeros are the sparse
> elements of the matrix. If said matrix is divided by scalar 0, the bunch of
> zeros turn into a bunch of NaNs, and now the NaNs are the sparse elements.
>
> The original comment was in reply to David, and I was re-iterating your point
> about not wanting the matrix memory to blow up by converting from sparse to
> full. Rather than convert to a full matrix, keep track of the index set
> (i.e., the set of indices where the matrix elements are NNZ, or what I
> suggested should be NNS). We're not all arguing against all the points
> you've made.
>
OK, I see. The problem with such a solution, which I've been trying to
point out, is that operations with matrices that have the default
sparse value other than zero become very complicated.
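One way to see the complication (the decomposition M = S + d*J is my own notation, with S the stored part and J the all-ones matrix): a product of two such matrices expands to S1*S2 + d1*J*S2 + d2*S1*J + d1*d2*n*J, so every operation has to carry dense rank-one correction terms around. A quick numeric check of that expansion on made-up 2x2 data:

```python
n = 2
d1, d2 = 2.0, 3.0                # hypothetical default sparse values
S1 = [[1.0, 0.0], [0.0, 4.0]]    # stored (non-default) parts
S2 = [[5.0, 0.0], [0.0, 6.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def add(*Ms):
    return [[sum(M[i][j] for M in Ms) for j in range(n)] for i in range(n)]

def scale(c, M):
    return [[c * M[i][j] for j in range(n)] for i in range(n)]

J = [[1.0] * n for _ in range(n)]
M1 = add(S1, scale(d1, J))       # the matrices the defaults represent
M2 = add(S2, scale(d2, J))

direct = matmul(M1, M2)
# Expansion: S1*S2 + d1*J*S2 + d2*S1*J + d1*d2*n*J
expanded = add(matmul(S1, S2),
               scale(d1, matmul(J, S2)),
               scale(d2, matmul(S1, J)),
               scale(d1 * d2 * n, J))
print(direct)
print(expanded)  # identical: the corrections are dense rank-one pieces
```

The correction terms are generally dense, which is why a nonzero default value leaks denseness into every operation.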
> Dan
>
--
RNDr. Jaroslav Hajek
computing expert
Aeronautical Research and Test Institute (VZLU)
Prague, Czech Republic
url: www.highegg.matfyz.cz