Re: [Gnucap-devel] New version 0.34 and "kneechord" strategy
Fri, 30 Apr 2004 23:48:56 +1000
On Sun, Mar 14, 2004 at 03:19:14PM -0500, Al Davis wrote:
> copy of mail sent to the list..
> Sorry about the slow response. The list server has been very slow
Faster than my email reaction time (which isn't saying much :-)
> I started on it, then had other things get in the way of finishing it.
> By including it this way, people wanting to get involved can try work
> in progress, but people who just want to use it are not exposed to the
> extra risk of experimental code.
Probably a sensible approach. Would there be an easy way to make the
configure script support options for experimental code? I'm thinking
that some users may want to test experimental algorithms from the
perspective of how they work in their circuits, even when they don't
know much about how to write programs or Makefiles.
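A minimal sketch of how such a switch could gate experimental code,
assuming a hypothetical --enable-experimental configure option that
just adds -DHAVE_EXPERIMENTAL to the compile flags (gnucap's real
build system may want something finer-grained):

```cpp
// Hypothetical sketch: an --enable-experimental configure option
// could simply define HAVE_EXPERIMENTAL, so ordinary builds never
// compile the risky code paths at all.
#include <cassert>
#include <string>

#ifdef HAVE_EXPERIMENTAL
std::string default_strategy() { return "kneechord (experimental)"; }
#else
std::string default_strategy() { return "newton (stable)"; }
#endif
```

Users who just want a working simulator build with the default, while
testers reconfigure with the option and get the experimental paths
without editing any Makefiles.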
> Some other experimental code is
> compiled in and available through undocumented options.
And there probably aren't many people who know it's there or how to
use it.
> It is not called for purely linear components. Actually, even "do_tr"
> is not called for purely linear components, except in rare
> circumstances. Because of these rare circumstances, it must work.
Well it does work (in as much as it gives an answer) but it wastes a
lot of cycles achieving nothing (and gives a warning).
> Maybe a fixed number of iterations here is not a good idea, but these
> things happen in experimental code.
Is there a better way to experimentally decide that a function is
not reversible? I mean, for every method you can find a pathological
case (like a discontinuous function) but is there some shortcut that
makes realistic decisions about material behaviour?
> A similar situation could exist for a nonlinear component operating in a
> linear region, where no iterations take it outside.
Currently the kneechord handles that pretty well provided it can find
an inverse of the function. Since it starts searching for the inverse
at the current (best guess) operating point and since linear functions
are easy to invert it will rapidly converge within a few cycles -- not
much wastage. The uninvertible function is the one that really stumps it
and a constant is probably the most common case.
> > My suggestion is that has_tr_eval() should not return boolean but
> > return an unsigned 32 bit number that contains useful flags like:
> > * Does it depend on time ?
> > * Does it depend on local element x ?
> > * Is it non-linear ?
> That violates encapsulation, so it will not change.
Well, in principle a bitfield does not violate encapsulation, but each
object has to fetch back the bitfield from its parent and then merge
in its own bits. If this were Java, making use of parent functions
would be easy; in C++ I believe it is more painful. At any rate,
having a meta-data system of any structure is better than none at all.
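To make that concrete, here is a minimal sketch of the bitfield idea
(the class names and bit assignments are invented for illustration);
the awkward part is exactly the Parent::caps() fetch-and-merge that
every override has to remember to do:

```cpp
// Hypothetical sketch of the bitfield idea: each class ORs its own
// capability bits into whatever its parent reports.
#include <cassert>
#include <cstdint>

enum Caps : uint32_t {
  DEPENDS_ON_TIME = 1u << 0,
  DEPENDS_ON_X    = 1u << 1,
  NONLINEAR       = 1u << 2,
};

struct Element {
  virtual ~Element() {}
  virtual uint32_t caps() const { return 0; }  // base: nothing special
};

struct Resistor : Element {
  // linear, time-invariant: inherit the parent's bits unchanged
  uint32_t caps() const override { return Element::caps(); }
};

struct Diode : Element {
  // the "fetch back and merge" step the text describes, written as
  // Parent::caps() | own_bits -- easy to get wrong in a deep hierarchy
  uint32_t caps() const override {
    return Element::caps() | DEPENDS_ON_X | NONLINEAR;
  }
};
```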
> "has_tr_eval" tells
> you whether the element has a tr_eval. Nothing else. There are other
> query methods that give different information. The way to handle this
> is to add other query methods "depends_on_time()", "depends_on_x()",
> "is_nonlinear()", etc. as needed. These would all be inline const
> methods returning bool.
That's also a good way to do it, especially if all of the boolean
results are completely independent. Note that they probably have to be
virtual methods, so I doubt they can be inlined by the compiler (you
could get a pointer to an element, have no idea what the real class of
the element is, and still want to call the method, so it must go
through a function call). Whether or not these functions are inlined
is not a big concern, because they will be used by the more complex
solver algorithms to make strategic decisions, and such decisions will
save much more time than the function jump-table overhead costs.
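As a rough sketch of that scheme (the method names follow the mail;
the classes and the consumer function are invented for illustration):

```cpp
#include <cassert>
#include <cstring>

// Sketch of the separate query methods described above.  The base
// class supplies safe defaults; derived elements override only what
// differs for them.
struct Element {
  virtual ~Element() {}
  virtual bool has_tr_eval() const     { return false; }
  virtual bool depends_on_time() const { return false; }
  virtual bool depends_on_x() const    { return false; }
  virtual bool is_nonlinear() const    { return false; }
};

struct Diode : Element {
  bool has_tr_eval() const override  { return true; }
  bool depends_on_x() const override { return true; }
  bool is_nonlinear() const override { return true; }
};

// A solver making a strategic decision through the base pointer: this
// is the virtual call discussed in the text -- one jump-table lookup,
// cheap next to the work the decision saves.
const char* pick_strategy(const Element& e) {
  return e.is_nonlinear() ? "kneechord" : "direct";
}
```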
> > And probably a few other things like whether the first derivative
> > is available (current presumption is yes but that may not always
> > be true).
> Not only that... Even if the answer is "no", it is the responsibility
> of tr_eval to supply one. This strategy is used throughout. a few
> examples ... Even if there is no tr_eval(), it is ok to call
> element->tr_eval(), and the fake must give the illusion of the real
Now that I think about it a bit more, the idea of a conductance
matrix does depend on a first derivative being available, so I'll
agree that all elements must be able to calculate this or fake it.
> The output of a logic device is a logic state. The voltage
> doesn't exist. Perhaps the node doesn't exist. If there are analog
> components hooked there, it will fake it.
Yeah, the whole idea of the analog simulation will break down without
that... ok, the first derivative must always be available. I must
admit that I'm still not up to speed on how gnucap handles interfaces
between pure logic devices as distinct from analog devices. I have
been wanting to build a dlopen() style interface for connecting pure
logic devices into the simulation at runtime but found it too
difficult.
The dlopen() call and friends are easy enough but my idea was that
the "plug-in" library would work in just logic states 0, 1, X, Z and
gnucap would handle the interfaces. The theory being that it seems a lot
more programmers can handle working with logic states (and could write
plug-in library code for their own projects or for distribution) than
are confident working with analog functions. At the same time,
probably the weakest point in gnucap is the logic simulation, so this
would allow gnucap to expand and also get more programmers interested
in the project.
The interface is the hard bit...
> > Possibly also some indication of how expensive a call to
> > tr_eval() is likely to be.
> That is easy to do. Add a method "tr_eval_cost()", that returns some
> constant. How about the number of lines of code in tr_eval()? That
> will be needed as part of cached model evaluation, to make the decision
> of how much to cache. It would belong to the common. Calling
> common->tr_eval() would get the real function. Calling
> element->tr_eval() might fake it by giving you a cached value.
Number of lines is a rough estimate but not an unreasonable one.
I'm not exactly sure what it would be used for, but I had some vague
idea that the simulation might make decisions to spend more processing
"effort" when it detects itself getting into a difficult patch.
Also, the user might want some feedback to let them know what the
various strategy options are costing them.
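A sketch of how the cost query could feed such a decision (the classes
and the threshold are hypothetical; the mail only specifies that
tr_eval_cost() returns some constant and belongs to the common):

```cpp
#include <cassert>

// Hypothetical cost query on the common, per the suggestion above.
// Lines of code in tr_eval() was the proposed constant; the figures
// below are made up.
struct Common {
  virtual ~Common() {}
  virtual int tr_eval_cost() const { return 1; }    // trivial model
};

struct BJTCommon : Common {
  int tr_eval_cost() const override { return 400; } // made-up figure
};

// One possible consumer: a caching policy that only caches models
// whose evaluation is expensive enough to be worth the bookkeeping.
bool worth_caching(const Common& c, int threshold = 50) {
  return c.tr_eval_cost() > threshold;
}
```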
> > If it is necessary to keep has_tr_eval() boolean then some other
> > inheritable meta-data mechanism would be useful.
> You need to stop thinking C. :-)
Wait a while, the shine of C++ wears off...
> > I'm not sure how
> > many other non-linear coping strategies are likely to be employed;
> > the secant strategy is probably safe to employ on everything,
> > more complex algorithms (like kneechord) are only beneficial on
> > some element functions.
> secant, false position, relaxation, bisection, various homotopy methods,
> steepest descent minimization, harmonic balance, AWE, krylov subspace
> methods, ......
Secant and false position are easy enough to code up and work in
any number of dimensions (I believe).
Relaxation covers an awful lot of stuff, hmmm.
The kneechord uses bisection to find the inverse of the nonlinear
function (in one dimension) but can this work in many dimensions?
I mean, in two dimensions it would have four regions to search,
in three dimensions, eight, etc. Is there a different way to think
about this? The other problem is that bisection requires that you
know that your answer is between some bounds, and I don't think that
gnucap ever works by establishing such bounds (maybe it should?).
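For reference, a minimal 1-D bisection showing the bracket requirement
just mentioned; nothing here comes from gnucap's source, it is just
the textbook method:

```cpp
#include <cassert>
#include <cmath>
#include <functional>

// Minimal 1-D bisection.  The precondition is the whole point: f(lo)
// and f(hi) must straddle the target before we can start at all.
double bisect(const std::function<double(double)>& f,
              double target, double lo, double hi, double tol = 1e-9) {
  // precondition: (f(lo) - target) and (f(hi) - target) differ in sign
  for (int i = 0; i < 200 && hi - lo > tol; ++i) {
    double mid = 0.5 * (lo + hi);
    if ((f(mid) - target) * (f(lo) - target) <= 0.0)
      hi = mid;   // crossing is in the lower half
    else
      lo = mid;   // crossing is in the upper half
  }
  return 0.5 * (lo + hi);
}
```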
I'll skip homotopy, mostly because I'm only interested in time
domain simulation and because I don't really understand it (especially
in the nonlinear case, e.g. simulating an RLC with a saturating core
in the L).
There is an interesting approach possible with steepest descent,
and I'm going to go out on a limb here a bit. In R (search Google for
CRAN) there is a nonlinear curve fitter called "nls" (nonlinear least
squares, how surprising). If you look under the hood of that bit of
code, you see that it builds a matrix out of all the errors, applies
a small delta to each of the unknowns to get a local partial
derivative estimate of each error against each unknown, then uses a
QR factorisation of that matrix, resulting in finding the ideal step
(both magnitude and direction) for the minimum error.
Could gnucap use a similar technique?
For curve fitting you usually have a lot more errors than unknowns,
so you get a tall rectangular matrix that isn't too wide. Gnucap
probably has as many errors as unknowns, which would give a square
matrix, maybe too hard to factorise. Then again, maybe a lot of
the partial derivatives would be zero... Anyhow, the "nls" package
in R converges in not very many steps (but it does a reasonably
large amount of work each step).
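A toy sketch of that idea, shrunk to a square 2x2 system so that plain
Cramer's rule can stand in for the QR factorisation (the example
function and all the names are invented; nls itself solves a tall
least-squares system):

```cpp
#include <cassert>
#include <cmath>

struct Vec2 { double a, b; };

// residuals of a toy nonlinear system: both are zero at a=2, b=1
Vec2 resid(Vec2 x) {
  return { x.a * x.a - 4.0 * x.b,   // a^2 - 4b = 0
           x.a + x.b - 3.0 };       // a + b  = 3
}

// One step of the nls-style scheme: estimate the Jacobian of the
// residuals by a small finite difference on each unknown, then solve
// for the step that drives the residuals toward zero.
Vec2 gauss_newton_step(Vec2 x) {
  const double h = 1e-6;            // finite-difference delta
  Vec2 r0 = resid(x);
  Vec2 ra = resid({x.a + h, x.b});  // perturb each unknown in turn
  Vec2 rb = resid({x.a, x.b + h});
  double J11 = (ra.a - r0.a) / h, J12 = (rb.a - r0.a) / h;
  double J21 = (ra.b - r0.b) / h, J22 = (rb.b - r0.b) / h;
  // solve J * dx = -r0 by Cramer's rule (2x2 only)
  double det = J11 * J22 - J12 * J21;
  double dxa = ((-r0.a) * J22 - J12 * (-r0.b)) / det;
  double dxb = (J11 * (-r0.b) - (-r0.a) * J21) / det;
  return { x.a + dxa, x.b + dxb };
}
```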
The harmonic stuff is way beyond me, if it handles nonlinear time
domain problems that's probably as far as I'll ever push it.
> > I'm also curious if anyone has tried kneechord on saturating
> > semiconductor models. It should be good for that but I haven't
> > tried it myself, if anyone is having convergence problems with
> > such models then it might be worth a try.
> The current implementation doesn't work on semiconductor models, because
> they are multi-variable. Calling it from ELEMENT::tr_eval only works
> for elements that have a simple "x". The multi-variable elements are
> where the help is needed most.
Hmmm, well at least there is no fundamental reason why the kneechord
idea wouldn't work with higher dimensional functions. It comes down
to being able to invert the nonlinear function -- in this case y is a
vector of 2 values and x is a vector of 2 values so you want to vary
x (i.e. walk around on a plane) until you get the desired pair of values
for y. Presuming such inversion is possible... we can find OP1, OP2 and
OP3 pretty easily so we have all the basic bits.
In a single dimension we draw a line between the points OP2 and OP3
which is essentially finding the DELTA = (OP3 - OP2) and then finding
the slope of that (DELTA.y / DELTA.x). In higher dimensions we can
still subtract the point OP2 from OP3, and the simple y / x becomes a
square 2x2 matrix of both y values over both x values. Maybe it even
extends to 3 dimensions. It's hard to say how it would converge; I
can't visualise the steps, but in theory it would work much the same.
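One standard way to make that precise (not from this mail; this is
Broyden's method) is to note that a single delta pair only partly
constrains the 2x2 chord matrix, and to fix the rest by making the
smallest rank-one change to the previous matrix:

```cpp
#include <cassert>
#include <cmath>

// Broyden-style rank-one update (illustrative, not gnucap code).
// Given a previous chord matrix B and the latest pair dx = OP3 - OP2,
// dy = f(OP3) - f(OP2), adjust B minimally so that B * dx == dy.
struct V2 { double x0, x1; };
struct M2 { double m00, m01, m10, m11; };

M2 broyden_update(M2 B, V2 dx, V2 dy) {
  // residual of the secant condition: dy - B*dx
  V2 r = { dy.x0 - (B.m00 * dx.x0 + B.m01 * dx.x1),
           dy.x1 - (B.m10 * dx.x0 + B.m11 * dx.x1) };
  double n = dx.x0 * dx.x0 + dx.x1 * dx.x1;  // |dx|^2, assumed nonzero
  // B += (r * dx^T) / |dx|^2 -- afterwards B*dx equals dy exactly
  B.m00 += r.x0 * dx.x0 / n;  B.m01 += r.x0 * dx.x1 / n;
  B.m10 += r.x1 * dx.x0 / n;  B.m11 += r.x1 * dx.x1 / n;
  return B;
}
```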
OK, I'd need to think about this more. Can you come up with a simple
example problem with a 2 input function that gives 2 outputs and which
might be approximately related to the semiconductor model?
From: Telford Tendys