Re: [Help-gnucap] Time Complexity of dc analysis

From: Al Davis
Subject: Re: [Help-gnucap] Time Complexity of dc analysis
Date: Mon, 6 Oct 2003 02:22:29 -0400
User-agent: KMail/1.5.3

Sorry about the delay.  I am trying to get settled in a "new?" 
house, and it has been quite a distraction.  Moving is always 
stressful.  I have moved several times, and this one seems 
like the most stressful yet.

On Sunday 07 September 2003 09:03 pm, Matt wrote:
> I was hoping to get some information on the time complexity
> of this dc analysis. Simple empirical measurements using a
> function profiler were inconclusive, and I haven't located
> the regions in the source code which perform the steps of the
> dc analysis -- in particular the Kirchoff nodal voltage bit.
> I would love to find out how the time performance of this
> analysis varies with circuit complexity, in relation to the
> number of resistors in the circuit I guess, or the number of
> nodes and edges in graph theory terms...

The time complexity of the DC analysis cannot be determined in a 
simple way.  In theory, in the worst case, it is cubic, as it 
is for Spice.

In practice, heuristics such as sparse matrix techniques and 
exploitation of special properties of parts of the circuit make 
it considerably faster than that.

I say it is cubic in the worst case, because it uses LU 
decomposition, which is in theory a cubic time algorithm.  That 
theory applies to a full matrix.  Circuit matrices are sparse, 
so heuristics are applied (aka "sparse matrix techniques") to 
speed it up for matrices that have a lot of zeros.  Most of the 
entries are zero.  I have seen large systems where 99.9% of the 
entries are zero, or that have a "density" of less than .1%.  
The status command will give you this number.  The matrix 
solver exploits this, by not operating on or storing most of 
the zeros, speeding up this step considerably.  Spice does 
this, too.  Gnucap does store some of the zeros, because they 
get filled in by the LU decomposition.  A good approximation of 
this in Gnucap is n^3 * density^2, where density is a decimal: 
if it is printed as "1%", it means .01.  Usually density 
decreases as matrix size increases.  Usually the "bandwidth" of 
the matrix is relatively constant.  It is approximately twice 
the average number of connections to a node.  With this in 
mind, the effective time complexity to do LU decomposition 
seems to be approximately linear.
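
To make that arithmetic concrete, here is a rough flop-count 
sketch of the estimates above.  The node count, density, and 
bandwidth are hypothetical, chosen for illustration, not 
measured from any particular circuit:

```python
# Rough flop-count estimates for LU decomposition, following
# the n^3 * density^2 approximation described above.

def dense_lu_flops(n):
    # Classic full-matrix LU decomposition is O(n^3).
    return n ** 3

def sparse_lu_flops(n, density):
    # Approximation from the text: n^3 * density^2,
    # with density written as a decimal (1% -> 0.01).
    return n ** 3 * density ** 2

def banded_lu_flops(n, bandwidth):
    # If the bandwidth stays roughly constant as the circuit
    # grows, the work is about n * bandwidth^2 -- effectively
    # linear in n, matching the observation above.
    return n * bandwidth ** 2

n = 10_000          # hypothetical node count
density = 0.001     # 0.1% nonzero, as in the large systems above
bandwidth = 100     # hypothetical: ~2x average connections/node

print(dense_lu_flops(n))              # full-matrix cost: n^3
print(sparse_lu_flops(n, density))    # smaller by density^2
print(banded_lu_flops(n, bandwidth))  # linear in n
```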

A further acceleration in Gnucap is that it keeps track of 
changes to the matrix, so on subsequent iterations it only does 
a partial solution.  This can speed up a transient analysis 
significantly.  I have seen 100:1 speedups.
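
As a rough illustration of the idea (a toy sketch, not 
Gnucap's actual data structures), tracking which matrix rows 
changed between iterations might look like this, with the 
solver only doing work proportional to what changed:

```python
# Toy sketch of change tracking for partial re-solves.
# Real solvers must also re-factor rows affected by fill-in;
# this sketch only counts the directly changed rows.

class IncrementalSolver:
    def __init__(self, n):
        self.n = n
        self.dirty = set(range(n))  # everything needs work at first

    def load(self, row):
        # Called when a device reloads its stamp into this row.
        self.dirty.add(row)

    def solve(self):
        # Do work only for changed rows; reuse the rest.
        work = len(self.dirty)
        self.dirty.clear()
        return work

s = IncrementalSolver(1000)
print(s.solve())   # first solve touches all 1000 rows
s.load(42)
print(s.solve())   # next solve touches only the changed row
```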

Another time consumer is equation ordering.  Gnucap's ordering 
is very primitive, and takes linear time.  It is block oriented 
and gives priority to preserving a "bordered-block-diagonal" 
form for the matrix.  This makes the effective matrix density a 
little higher than the classic Markowitz ordering, which slows 
down the first iteration, but opens opportunities for other 
enhancements later.  Markowitz ordering (used in older versions 
of Spice, never in Gnucap) takes quadratic time.  In those 
older versions of Spice it really does, but for the small 
circuits most people simulated then, device evaluation 
dominated, leading to empirical claims of something like "order 
1.4".  Since Gnucap's ordering is linear and its coefficient is 
low, the ordering time in Gnucap is insignificant.
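
For reference, the classic Markowitz criterion picks, at each 
elimination step, the pivot minimizing (nonzeros in its row - 1) 
times (nonzeros in its column - 1), an estimate of the fill-in 
that pivot would create; scanning every candidate at every step 
is what makes it quadratic.  A toy sketch of one selection step 
(the sparsity pattern here is made up):

```python
# One step of Markowitz pivot selection on a sparsity pattern.

def markowitz_pivot(rows, cols, remaining):
    # rows[i]: set of column indices with nonzeros in row i.
    # cols[j]: set of row indices with nonzeros in column j.
    # remaining: diagonal indices not yet eliminated.
    return min(remaining,
               key=lambda k: (len(rows[k]) - 1) * (len(cols[k]) - 1))

# Hypothetical 3x3 pattern: node 2 has the fewest connections,
# so eliminating it first creates the least fill-in.
rows = {0: {0, 1, 2}, 1: {0, 1, 2}, 2: {2}}
cols = {0: {0, 1}, 1: {0, 1}, 2: {0, 1, 2}}
print(markowitz_pivot(rows, cols, {0, 1, 2}))  # 2
```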

The next area where time is spent is model evaluation.  Usually 
this dominates.  In the simple case, the time is proportional 
to the number of models.  (of course...)  In practice, more 
complex circuits usually increase the iteration count, leading 
to some increase here.  Also, Gnucap applies heuristics to try 
to avoid model evaluations.  This can speed things up 
considerably.  Gnucap's algorithms here are much more robust, 
and much more effective, than those in Spice.  Spice uses a 
simple bypass with inadequate checking, so it often introduces 
errors.  Gnucap checks more carefully, avoiding the increase in 
error, but applies it more aggressively, even to resistors, 
resulting in a considerable speedup.  When Gnucap skips a model 
evaluation, it also skips matrix loading, and usually also 
skips solving that part of the matrix.  This combination 
results in a speedup much greater than would be expected from the 
sum of the tricks alone.  In effect, the time complexity of 
this stage seems to be sub-linear.
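
A minimal sketch of the bypass idea (a toy, not Gnucap's actual 
checks, which are more careful than this): re-evaluate a device 
only when its inputs have moved more than a tolerance since the 
last evaluation:

```python
# Toy model-evaluation bypass: skip the evaluation (and, as
# described above, the matrix load and solve for that part)
# when the input voltage has barely changed.

class Device:
    def __init__(self, reltol=1e-3):
        self.reltol = reltol
        self.last_v = None
        self.evals = 0

    def converged_input(self, v):
        # Bypass only if we have a previous evaluation and the
        # input moved less than the tolerance.
        return (self.last_v is not None
                and abs(v - self.last_v)
                    <= self.reltol * max(abs(v), 1.0))

    def evaluate(self, v):
        if self.converged_input(v):
            return False           # bypassed
        self.last_v = v
        self.evals += 1
        return True                # actually evaluated

d = Device()
d.evaluate(1.0)        # first call: must evaluate
d.evaluate(1.0000001)  # change below tolerance: bypassed
d.evaluate(2.0)        # large change: evaluated again
print(d.evals)  # 2
```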

Another place Gnucap spends time is in a "review" pass, where it 
checks for errors.  It is sort of like a model evaluation, and can 
be thought of as one when you are estimating time.

Yet another place to spend time is in the "setup" pass.  This is 
done only once in a run, but for some components (resistors) 
this is the dominant time consumer.  It makes a pass over all 
components, so that is linear time.

There are also queues.  The load queue is a simple queue, so 
it takes linear time.  The event queue is a priority queue, so 
each insertion or removal takes log n time, where n is the 
number of events currently in the queue.  This number is 
usually small.
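
Using Python's standard library to illustrate the two queue 
types (a sketch of the data structures, not of Gnucap's actual 
implementation; the event names and times are made up):

```python
# Load queue: a plain FIFO, O(1) per entry, linear overall.
# Event queue: a binary heap, O(log n) per push/pop.

import heapq
from collections import deque

load_queue = deque()
for device in ["R1", "C1", "Q1"]:
    load_queue.append(device)      # O(1) per entry
while load_queue:
    load_queue.popleft()           # O(1) per entry

event_queue = []                   # heap of (time, description)
heapq.heappush(event_queue, (3.0e-9, "Q1 switches"))
heapq.heappush(event_queue, (1.0e-9, "clock edge"))
heapq.heappush(event_queue, (2.0e-9, "R1 review"))
t, ev = heapq.heappop(event_queue)  # earliest event comes first
print(t)   # 1e-09
```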

That is a start.  I hope it helps.

