[Gnucap-devel] Interesting paper in IEEE Transactions on CAD


From: al davis
Subject: [Gnucap-devel] Interesting paper in IEEE Transactions on CAD
Date: Thu, 20 Jul 2006 22:55:20 -0400
User-agent: KMail/1.9.1

There is an interesting paper ...
"SILCA: SPICE-Accurate Iterative Linear-Centric Analysis for 
Efficient Time-Domain Simulation of VLSI Circuits With Strong 
Parasitic Couplings" by Zhai Li and Richard Shi.

It describes a "new" method based on low rank matrix updates, 
and playing tricks with the derivative, and claims some 
impressive results.

What is especially interesting is that Gnucap already does that.  
Well .. it does something close to what the article says, with 
similar results.   He is using a method that I looked into and 
rejected.  The gnucap method is more general.  His method 
apparently only applies to a special type of circuit.  The 
gnucap method is as general as Spice.  He doesn't reference 
gnucap at all, in spite of the similarities.

The basis is that LU decomposition is a big time consumer for 
large, mostly linear circuits, and that by doing low rank 
updates and partial solutions that time can be cut 
dramatically.  That's true.  Gnucap already does that, and 
always has; so did its predecessor (ACS).  You can turn off the 
partial solutions with the option "nolubypass", and the low 
rank updates with the option "noincmode".

The key idea presented is to keep the matrix as constant as 
possible.  They use two tricks to get there.  One is a "FLC" 
(fixed leading coefficient) differentiation method.  The other 
is a "SVC" (successive variable chord) alternative to Newton's 
method.  The "FLC" method is one that I experimented with and 
rejected, over 15 years ago.  The alternative to Newton's 
method is something I have experimented with from time to time, 
but had not reached any conclusions.  In both cases, Gnucap 
does something related to what they discuss.

First, let's discuss the differentiation method.  (Not in the 
paper..)  Gnucap gives a choice between "trapezoidal" (also 
known as "Heun's method") and the "backward Euler" method.  
Spice also offers the "Gear" method.  I had been planning to 
add this one to gnucap, but there have always been higher 
priorities.  If you could intelligently choose between backward 
Euler and trapezoidal, one of these would always be better than 
Gear, but Gear is usually better than the wrong choice of these 
two.
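
For reference, the standard formulas (my summary, not from the 
paper) for x' = f(t,x) with step size h:

```latex
\begin{align*}
\text{backward Euler:} \quad & x_{n+1} = x_n + h\, f(t_{n+1}, x_{n+1}) \\
\text{trapezoidal:} \quad & x_{n+1} = x_n
    + \tfrac{h}{2}\bigl[\, f(t_n, x_n) + f(t_{n+1}, x_{n+1}) \,\bigr] \\
\text{Gear, order 2, fixed } h: \quad & x_{n+1}
    = \tfrac{4}{3} x_n - \tfrac{1}{3} x_{n-1}
    + \tfrac{2}{3} h\, f(t_{n+1}, x_{n+1})
\end{align*}
```

Backward Euler is heavily damped, which is what you want for 
stability; trapezoidal is second order, which is what you want 
for accuracy.  That is the trade-off behind the claim above.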

(Still not in the paper..)  All of these methods are derived 
from polynomial interpolation.  You fit an "interpolating 
polynomial" to known values and derivatives, and use that to 
predict the next point.  There are many strategies for choosing 
the points and their weights, with some rules such as 
stability, consistency and convergence.  The paper does not 
reference this general knowledge.  Some numerical analysis texts 
don't reference this general knowledge.  It makes me wonder if 
they know it.

Now, to the paper...
The "fixed leading coefficient" strategy, discussed in the 
paper,  requires one coefficient (that multiplies the most 
recent previous value) to be held constant regardless of step 
size.  Then you tweak the others to maintain consistency and 
convergence.  Keeping the leading coefficient constant is 
beneficial because this is the number that is stamped into the 
matrix.  Keeping it constant allows the matrix solution to be 
bypassed.  If most of them can be kept constant, the matrix 
update is "low rank", and a considerable speedup can be 
achieved.
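
To make the leading coefficient concrete, here is the standard 
companion model for a capacitor i = C dv/dt (textbook material, 
not taken from the paper):

```latex
\begin{align*}
\text{backward Euler:} \quad & i_{n+1} = \frac{C}{h}\, v_{n+1}
    - \frac{C}{h}\, v_n \\
\text{trapezoidal:} \quad & i_{n+1} = \frac{2C}{h}\, v_{n+1}
    - \frac{2C}{h}\, v_n - i_n
\end{align*}
```

Only the coefficient of v_{n+1} (C/h or 2C/h) is stamped into 
the matrix; the rest becomes an equivalent source on the 
right-hand side.  Since that coefficient depends on h, a step 
size change normally forces a matrix update, which is exactly 
what FLC tries to avoid.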

The problem is that the stability and truncation error 
properties are inferior to the simpler methods.  So, he tosses 
this one and uses an iterative technique.  This is not Newton's 
method, but more like a relaxation technique.  The iterative 
technique is still not as good as the traditional method, in 
terms of stability and accuracy.  There is a speed cost due to 
iteration.

Now, I ask the question:  "Why would one change the step size 
anyway?"  Answer (not given in the paper):  usually for 
stability or truncation error.  We make the steps small enough 
to keep truncation error less than a tolerance, or to avoid 
stability issues.  There is no discussion of step size control, 
which Spice does poorly.
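
For context, the usual truncation-error-based controller (also 
not in the paper):  for a method of order p the local error 
scales as h^(p+1), so

```latex
h_{\text{new}} = h \left( \frac{\varepsilon_{\text{tol}}}
    {\varepsilon_{\text{est}}} \right)^{1/(p+1)},
\qquad \text{trapezoidal } (p = 2):\quad
\varepsilon_{\text{est}} \approx \frac{h^{3}}{12}\,\bigl|x'''\bigr|
```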

The paper claims that a range of .625 to 2.5 works well.  That 
is not much, a 4:1 range.  Problem not mentioned in the paper: 
the degradation of truncation error at the extremes of the 
range is worse than 4:1, leading to the need for a step size 
1/4 of what would be used with the trapezoidal method in cases 
where you need high accuracy.  Where you just need stability, 
in cases where backward Euler is the preferred method, the 
stability of the FLC method is poor.

My response: So how about using the common methods and using a 
step size 1/4 of what truncation error says to do?  This simple 
kluge is more accurate and faster.  This leads to what gnucap 
does:  Allow the step size to be smaller than optimal to keep 
it constant.

What would it take to try the paper's method in gnucap?  We can 
take advantage of the "FPOLY" - "CPOLY" duality.  It is simple.  
Just hold the value of "_i0.f1" constant, within limits.  In a 
little more detail, check the new value of _i0.f1 against the 
old value.  If it is within range, jam in the old value.  The 
automatic conversion to "CPOLY" form will correct the current 
source to compensate.

But there is a problem.  It is now inconsistent and must 
iterate, even on a linear circuit.  The paper admits this and 
claims that roughly 3 times the number of iterations are 
required, relative to Spice.  So, linear circuits now need 6 
iterations, on the average, to converge, sometimes more than 10. 

In contrast, keeping the step size smaller than it needs to be 
may reduce the iteration count (per step) and increase accuracy.  
Reducing the iteration count counters the speed penalty of 
using more steps.

So, the gnucap method provides comparable speedup, while 
maintaining Spice accuracy, even for tricky analog circuits 
like oscillators.  Even a Fourier analysis shows that there is 
no loss.

To credit the paper ..  I did not do such a detailed analysis.  
I did just enough to tell me to move on.

Now, let's look at the "SVC" method alternative to Newton's 
method.  In this case, I have not come up with anything truly 
satisfactory, and the paper does give me more insight.

The idea (in the paper) is that you don't need to use the exact 
value of the derivative.  Now, background info not in the 
paper ...   Recall  ---   x1 = x0 - (f(x0)/f'(x0)) ..  a 
basic "fixed point" method.  Repeat until f(x) == 0.  The fixed 
point is the same regardless of what you use in place of f'(x).  
Some modified Newton methods use something other than f'(x) to 
get around problems such as when f'(x) is 0.

The idea here (in the paper) is to hold f'(x) constant.  It 
works over a range.  There is an optimum.  Go too far in one 
direction (too small) and it diverges or goes into a limit 
cycle.  Go too far in the other direction (too big) and it 
converges very slowly, but it is possible to achieve 
convergence when the real Newton method fails.

When the solution is far away, the held value still has to 
change a few times to get close.  When it is close, convergence 
will still happen, but in more iterations.  How many more?  It 
is linear convergence instead of quadratic.  When the result is 
close, quadratic convergence doubles the number of significant 
digits each iteration.  Linear convergence adds digits by some 
constant factor.  That is what theory alone suggests.  In 
practice it is better than this.  It is the same kind of 
"better" that you get with the Regula-Falsi method instead of 
bisection.  Regula-Falsi is theoretically linear, but by 
observation it often looks almost quadratic.

Having said this, it is an area I want to study more.  It is 
simple to implement.  With the values in FPOLY form, just 
substitute the old value of f1 when the new value is close 
enough.  The intelligent choice of f1 is still somewhat of a 
mystery.  The paper doesn't clarify how to choose effectively, 
but recommends breaking up the curve into segments.  Within a 
segment, use one value for the derivative.  Does this mean that 
every one of the models must be recoded manually to use the 
table?  I suppose this is fine if you really don't need Spice 
accuracy, or don't need to propagate the values to another kind 
of analysis (AC).

In spite of the similarities, there is no reference to gnucap.  
Gnucap provides similar benefits for the type of circuits being 
discussed, and also provides full Spice accuracy when you need 
it.  The gnucap methods do not increase iteration count, and 
still give the correct derivatives that are needed for AC 
analysis.  The gnucap methods, due to careful choice of time 
stepping based on several issues, actually give better accuracy 
than Spice for Fourier analysis.  There is also no reference to 
CaZM or SAMSON, older academic simulators that provide key 
background.

Although I may sound critical here, I really did enjoy reading 
the paper.  It provides a perspective on some issues that I 
didn't see before.  It gives me some ideas for improvement in 
gnucap.  It provides a formal analysis in some cases where I 
just looked quickly and moved on.




