Re: [Swarm-Modelling] comparing models


From: Andy Cleary
Subject: Re: [Swarm-Modelling] comparing models
Date: Tue, 02 Sep 2003 12:47:19 -0700

At 09:24 AM 9/2/2003 -0700, you wrote:

I wanted to comment on this before, but I haven't had much time.

The _best_ way to go about comparing models is pretty simple, actually.

First off, completely avoid drawing conclusions about what the model(s)
_mean_.  Don't run off saying that the results of a model imply we
should, say, "change economic policy", "kill all the mosquitos in a
5-mile radius of Cleveland", "invest a bunch of money in cloning",
etc.  This point may be obvious, but sometimes it bears repeating.

"Modeling" is _not_ science and it's not engineering.  As such, it
doesn't _produce_ anything useful in and of itself.

This holds true for "verification" and "validation", as well.  When
you go about "validating" your model by comparing its outputs to
either the output of a real system or the outputs of another model,
all you're doing is measuring the differences between two sets of
data.  That data says, literally, nothing about where it came from.
You could have gotten either data set by decrypting messages from the
Dog Star.
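
(To make that concrete: "validation" of this sort boils down to a few
lines of code measuring a difference between two arrays of numbers.
The sketch below is purely illustrative; the file names and the choice
of metric are assumptions, not anything from a particular model.)

    # Measuring the difference between two data sets -- which is all that
    # this kind of "validation" actually does.  File names and the metric
    # are illustrative assumptions; both series are assumed the same length.
    import numpy as np

    model_output = np.loadtxt("model_run.csv")   # hypothetical model series
    reference = np.loadtxt("field_data.csv")     # hypothetical observed series

    # Root-mean-square difference: one of many possible discrepancy measures.
    rmse = np.sqrt(np.mean((model_output - reference) ** 2))
    print("RMS difference between the two data sets: %g" % rmse)

Nothing in that number, by itself, tells you where either data set came
from or what it means.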

Second, when presenting results from a model, simply present the
motivation for the model, the process by which you created the model,
the model itself, the process by which measurements are taken from the
model, and the measurements themselves.  (Present the same collection
when you present other models or the real system.)

When presenting the "validation", or comparisons and contrasts with other
models or a real system, simply present the two sets of data, the
motivation for how you compare them, the method of comparison, and the
result of the comparison.

There will always be legitimate reasons to question any one part of
this collection, including the motivation for doing the work, the
processes for creating the models, taking measurements from the real
system, etc.  Even if you're "Bob-the-God-of-this-domain", there will
always be valid objections to any given part of what you've done.
The reason this is true (and will always be true) is that modeling
is not an automatable process.

So, as to your questions about which techniques are best, just pick a
few, do the work, write down the results.  Pick a few more, do the
work, write down the results.  Etc.  If a sizable sampling of
techniques (e.g. 3 statistical, 2 from feature extraction, 1 from
state-space reconstruction, 2 from signal analysis) all give you a
certain result (e.g. model 1 and model 2 lead to the same
conclusions), then it may be worth pointing that out to some audience.
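
(A minimal sketch of what "pick a few techniques, do the work, write
down the results" might look like in practice.  The series names and
the particular tests are illustrative assumptions only.)

    # Apply a small battery of comparison techniques to two model outputs
    # and record what each one says.  Assumes both series are the same length.
    import numpy as np
    from scipy import stats

    model_1 = np.loadtxt("model1_output.csv")   # hypothetical output series
    model_2 = np.loadtxt("model2_output.csv")   # hypothetical output series

    results = {}

    # Statistical: do the two samples have the same mean?
    results["t-test p-value"] = stats.ttest_ind(model_1, model_2).pvalue

    # Statistical: do the two samples come from the same distribution?
    results["KS-test p-value"] = stats.ks_2samp(model_1, model_2).pvalue

    # Signal analysis: how correlated are the two series?
    r, _ = stats.pearsonr(model_1, model_2)
    results["Pearson correlation"] = r

    for name, value in results.items():
        print("%s: %.4f" % (name, value))

Each entry is just one more recorded result; whether the collection adds
up to anything is for the audience to judge.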

I don't disagree with you, but if you tried selling this as "validation" to people used to *physics*, you would not get very far.

Or to make it more concrete, *I* have not gotten very far in the same circumstances...

Cheers,
Andy


