
Re: [Swarm-Modelling] Announce: metaABM 1.0.0


From: Miles T. Parker
Subject: Re: [Swarm-Modelling] Announce: metaABM 1.0.0
Date: Fri, 23 Nov 2007 11:38:01 -0800



Interesting issues. But first, I should point out that you seem to be directing some of your concerns at a different issue than the one I've been addressing. My target is most definitely not a General Purpose Language, and I really don't have much of an opinion about that! Somewhere along the way you got the impression that I was positing some kind of be-all-and-end-all -- my apologies if I gave any such impression. The following should give a decent overview of the motivating ideas..

DSL / MDSD background:


Original conception for meta-model stuff:


On Nov 22, 2007, at 6:00 PM, Marcus G. Daniels wrote:

Miles T. Parker wrote:
The economic drivers were all about who could take the basic representation and stretch it as far as it could go. That in turn meant that companies began to do all kinds of crazy things..
But it's not actually possible to write general purpose programs in SQL.   It's a language that addresses query.

Right -- that is really the point. (And yes, I am aware that SQL is not general purpose. ;)) I also need to say that I bring up SQL as an interesting example of the power of the DSL approach -- not an exemplar of the approach.

Worse, the interface to languages that _are_ general purpose is not standard -- client libraries vary from vendor to vendor.  For that matter SQL _itself_ varies a lot across vendors.  Most of the time if you just take one non-trivial SQL-based application and plug in another RDBMS, the application will not work, unless an intermediate library (e.g. JDBC) is known to support both.

Yes, and these intermediate libraries like JDBC do a pretty poor job of even that! (I spent many months back in '97 building an abstraction layer on top of JDBC -- Oracle seemed to be the worst offender, but I guess it depends on your starting POV.)


 In practice, a lot of typeless strings get passed around and SQL syntax goes unchecked, even though the programs embedding those strings may be otherwise correct.  Further, SQL fails to specify (or infer) expected access patterns.  The same SQL could have terrible performance on a row-oriented database system and fantastic performance on a column-oriented database intended for data mining.    
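The "typeless strings" failure mode is easy to demonstrate: the host language happily accepts a program whose embedded SQL is nonsense, and the error only surfaces when the statement finally reaches the engine. A minimal sketch using Python and sqlite3 (standing in for any host language and RDBMS; table and column names are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE agents (id INTEGER PRIMARY KEY, wealth REAL)")
conn.execute("INSERT INTO agents (wealth) VALUES (?)", (10.0,))

# To the host language this is just a string; the typo ("SELEC")
# is invisible until the statement actually reaches the engine.
bad_query = "SELEC wealth FROM agents"

caught = None
try:
    conn.execute(bad_query)
except sqlite3.OperationalError as exc:
    # The otherwise-correct program fails only here, at run time.
    caught = exc
    print("caught at run time only:", exc)
```

A statically checked query layer would have rejected `bad_query` at compile time; an embedded string cannot be.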

Again, let's not set SQL up as a straw man. But are you sure about the "expected access" part? SQL certainly does specify enough to make quite sophisticated inferences about usage -- not much, I'll grant, in its interactive or GUI-oriented modes, but certainly in batch mode. For example, if I know that the final statement in a batch process is an UPDATE affecting some two columns in some sub-region of a database, I can make a lot of optimizing assumptions about a query -- such as ignoring some SELECT statements altogether! Perhaps I'm not clear on the precise meaning of "access patterns" here. I also can't reconcile that observation with the statement that the same SQL could have terrible performance on a row-oriented system and fantastic performance on a column-oriented one. Do you mean "the SQL language" for the first part and "SQL implementations" for the second?

One area where SQL implementations historically sucked was GUI. As you observe, since SQL has no facility for saying what we expect to do with some slice of data *a priori*, GUI frontends often just grab entire tables -- or even virtual tables spanning every possible presentation -- and then squeeze them over a pipe. A little-known mid-90s database system for the Mac (and later Windows) -- 4th Dimension -- tightly integrated the presentation ("expected access") part with the query part and got phenomenal performance for things like presenting a selection of records to a user. OTOH, this was then a single-tier system, with all of the maintenance and scalability issues inherent in that design.
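The frontend point can be made concrete. Nothing in the query itself tells the engine "the user will only ever look at one screenful," so a naive frontend pulls everything; a frontend that states its access intent pulls only the window it needs. A hedged sqlite3 sketch (the table and the 20-row "screenful" are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany("INSERT INTO records (payload) VALUES (?)",
                 [("row %d" % i,) for i in range(10_000)])

# Naive frontend: grab the whole table, then display 20 rows of it.
everything = conn.execute("SELECT id, payload FROM records").fetchall()

# Frontend that declares its expected access up front: one screenful.
page = conn.execute(
    "SELECT id, payload FROM records ORDER BY id LIMIT 20 OFFSET 0"
).fetchall()

# 10000 vs. 20 rows squeezed over the "pipe" for the same user-visible result.
print(len(everything), len(page))
```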

I guess I'd say that ultimately hints about performance really don't belong in the core behavior specification -- but that is not to say that one couldn't provide facilities for 'hint' annotations that allowed engines to incorporate usage knowledge in their optimizations. (OTOH, you are asking a lot of the typical user to say anything useful (or merely *not counter-productive*) about that.) But there is at least one other way to do this -- i.e. through run-time inference. What a lot of people aren't aware of is that some RDBMSs (in the 80s) were actually doing runtime monitoring of queries and using that information to generate optimized implementations, also at runtime.
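The run-time inference idea can be sketched in a few lines: watch which predicates recur, and adapt the representation once a column has been queried often enough. This is a toy sketch only -- not what any particular 80s RDBMS actually did -- and the counter, threshold, and index name are all invented:

```python
import sqlite3
from collections import Counter

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE agents (id INTEGER PRIMARY KEY, region TEXT, wealth REAL)")
conn.executemany(
    "INSERT INTO agents (region, wealth) VALUES (?, ?)",
    [("north" if i % 2 else "south", float(i)) for i in range(1000)])

hits = Counter()      # how often each column appears in a WHERE clause
INDEX_THRESHOLD = 3   # invented tuning knob

def monitored_query(column, value):
    """Run a query, and opportunistically index hot columns."""
    hits[column] += 1
    if hits[column] == INDEX_THRESHOLD:
        # Enough evidence: morph the representation to fit the queries.
        conn.execute("CREATE INDEX idx_%s ON agents (%s)" % (column, column))
    return conn.execute(
        "SELECT id FROM agents WHERE %s = ?" % column, (value,)).fetchall()

for _ in range(4):
    monitored_query("region", "north")

# Once the threshold is crossed, the plan should mention idx_region.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT id FROM agents WHERE region = ?",
    ("north",)).fetchall()
print(plan)
```

The interesting design point is that the model (the queries) never changed; only the engine's private representation did.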

RDBMSs don't go far to morph representations to fit queries; they fit queries to representations.

Yeah. If I understand you correctly, that is the whole point of a meta-model, really. You take the context / constraints of the domain and use its (heretofore implicit) assumptions to rule out what you can't do so that you can leverage what you can. And the only way to leave open the universe of possible implementations is to not specify implementation details.. as soon as you do, you freeze out the possibility of any other, more efficient approach.

[And, analogously, have we found the right representations for agent models?]

That is the most challenging and interesting question, I think. First, I'm sure we can agree that in the most general sense there is no such thing. The useful part of the question is, "has the practice of ABM evolved to the extent that we can make useful generalizations about typical ABMs, and represent them in a consistent way?" I'd say clearly yes, for some subset of ABMs.

Anyway, this is not to argue that inventing or evolving declarative languages isn't a useful goal.  I just think that "..a common broker that opens up all sorts of possibilities for collaboration on either end" does not by itself mean that scientific progress will occur.

Right, in fact I think I remember you presenting a lot on XML based declarative approaches, so you know of what you speak.. 

Finally I should be clear that I am not actually even speaking of some kind of pure declarative approach. I think you will see in the metaABM representation that there are a lot of aspects that are really not declarative, and in fact have a pretty strong imperative flavor. Instead, what we've tried to do is to abstract out all of those imperatives that don't affect final outcomes and make implicit those that do. I know that sounds a bit contradictory and perhaps I should flesh out a bit more about what I mean, but this is probably enough for now.. 
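One way to picture "abstracting out the imperatives that don't affect final outcomes": in the sketch below, the iteration order over agents is an imperative detail an engine may choose freely, while the rule itself is the specification. All names here are invented for illustration -- this is not metaABM's actual representation:

```python
# A rule declared as data: what happens, not how the loop runs.
rule = {"when": lambda a: a["wealth"] > 0,
        "do":   lambda a: {**a, "wealth": a["wealth"] - 1}}

agents = [{"id": i, "wealth": w} for i, w in enumerate([0.0, 2.0, 5.0])]

def step(agents, rule, order=None):
    """Apply the rule once to every agent.

    'order' is an implementation detail chosen by the engine,
    not part of the model specification."""
    indices = order if order is not None else range(len(agents))
    out = list(agents)
    for i in indices:
        if rule["when"](out[i]):
            out[i] = rule["do"](out[i])
    return out

# Two "engines" with different imperative iteration orders...
forward  = step(agents, rule)
backward = step(agents, rule, order=range(len(agents) - 1, -1, -1))

# ...produce the same final outcome for this rule, so the
# specification can stay silent about ordering and let the
# engine pick whatever is most efficient.
print(forward == backward)
```

(For rules where agents interact, ordering *would* affect outcomes -- exactly the imperatives a representation would have to keep, rather than abstract away.)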

  With SQL, what it meant was that Oracle occurred.

:D see above..

After using db4o (http://www.db4o.com) for a while, or LINQ, I kind of wish SQL wasn't the popular way to do query!

No doubt. As my other response argues, let's just not make the perfect the enemy of the good. We won't know until we make a real try of it..

Miles T. Parker wrote:
For example, if someone has a great idea for a little ABM language, they don't have to worry that their target platform is going to become irrelevant and they'll have to do all of the heavy lifting all over again.
There will always be a need for some code base that takes an abstract form and acts on it.   That code base, like any code base, could go unmaintained.  

Yet another reason you want to remove implementation from the specification.

And while compression is a useful thing, it is a different thing from invention.   I can't help but wonder if the process of compression (e.g. finding high-level interfaces that can support various toolkit backends -- toolkits that are themselves mostly just simplifying projections of larger software APIs) interferes with the process of invention (e.g. finding ways to represent models that actually help give insight).

That is a deep point -- but one, I must say, that is part of the maturation of all disciplines -- and I do mean discipline at least partially in the sense of Foucault. The tragedy of scientific process being that as methodologies become more ubiquitous and efficient, their potential for generating truly unique and surprising hypotheses diminishes.. But this is no more true of meta-models than of APIs and frameworks -- the virtue of meta-models being that they help to define precisely where the contextual boundary lies.

anyway, blah blah blah...




