
Re: [Swarm-Modelling] Announce: metaABM 1.0.0


From: Miles T. Parker
Subject: Re: [Swarm-Modelling] Announce: metaABM 1.0.0
Date: Mon, 26 Nov 2007 11:34:56 -0800


On Nov 25, 2007, at 10:23 PM, Marcus G. Daniels wrote:

> Hi Miles,
>> There is, I think, a hidden assumption here that there is something preferable about having a single syntactic and semantic structure for everything (*the* programming language), and I am beginning to find *that* assumption limiting. I appreciate the elegance of a top-to-bottom syntactic structure, and I would love to see that whole approach explored and implemented.
> Well, I think there are some CS issues. There is an increasing body of programs that are useful and address hard problems, but that need maintenance to keep them understood and useful. I'd be happy if I could run the same compiler that makes my object code to make flavors of embeddable meta-model, by dumping abstract syntax trees through some sequence of appropriate checks and algebra, but it's the exception rather than the rule when this is possible.

btw, the oAW checks language (it's actually unified with the extend transformation language now) is designed to do precisely that, including automatically generating markers in the target editor(s).
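
To make that concrete, here's a rough sketch in Java of the kind of check I mean -- the types are hypothetical stand-ins, not the actual oAW or metaABM API: validate a model element and hand back diagnostics that an editor could surface as markers.

import java.util.ArrayList;
import java.util.List;

public class AgentCheck {
    // Hypothetical stand-in for a meta-model agent description.
    record AgentType(String name, List<String> attributes) {}

    // A diagnostic an editor could render as a marker.
    record Marker(String severity, String message) {}

    static List<Marker> check(AgentType agent) {
        List<Marker> markers = new ArrayList<>();
        if (!Character.isUpperCase(agent.name().charAt(0))) {
            markers.add(new Marker("ERROR",
                "agent type names should be capitalized: " + agent.name()));
        }
        if (agent.attributes().isEmpty()) {
            markers.add(new Marker("WARNING",
                "agent " + agent.name() + " declares no attributes"));
        }
        return markers;
    }

    public static void main(String[] args) {
        AgentType a = new AgentType("citizen", List.of());
        check(a).forEach(m -> System.out.println(m.severity() + ": " + m.message()));
    }
}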

> Compiler flags could enforce certain usage to make this easier (much like C99 compliance checks). One could imagine different kinds of high-level virtual machines that could run nearby language lineages as one, and static analysis tools that would give guidance on the nature of incompatibilities between more distant lineages.

Yep. "Warning: if you do this, you will no longer be able to represent it as an XYZSim".
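
Something like the following toy sketch, say -- the feature list and the "XYZSim" support table are made up for illustration:

import java.util.EnumSet;
import java.util.Set;

public class TargetCheck {
    // Hypothetical model features and a per-platform support table.
    enum Feature { DISCRETE_SCHEDULE, CONTINUOUS_SPACE, NETWORK_TOPOLOGY }

    static final Set<Feature> XYZSIM_SUPPORTED =
        EnumSet.of(Feature.DISCRETE_SCHEDULE, Feature.NETWORK_TOPOLOGY);

    static void warnIfUnrepresentable(Set<Feature> used) {
        for (Feature f : used) {
            if (!XYZSIM_SUPPORTED.contains(f)) {
                System.out.println("Warning: if you use " + f
                    + ", you will no longer be able to represent this model as an XYZSim.");
            }
        }
    }

    public static void main(String[] args) {
        warnIfUnrepresentable(EnumSet.of(Feature.DISCRETE_SCHEDULE, Feature.CONTINUOUS_SPACE));
    }
}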

> In contrast, with Swarm, Java and XPCOM are supported using an abstract notion of target objects and messages, as realized by C data structures for a lowest common denominator. It introduces target/message abstractions which don't capture the richness of what any given language can do (e.g. multiple inheritance in C++), and in other cases it requires adding stuff to the client environment (e.g. Selectors for non-Objective-C languages). That approach facilitates integration with just about any popular programming language environment, but it doesn't actually describe what Swarm does when it sends messages (e.g. scheduling semantics).

Right -- the thing to do is to avoid using an LCD foreign construct and instead create good GCDs (greatest common denominators), which means thinking carefully about how to distribute the entire set of representations into discrete meta-models.
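
For a miniature of the LCD problem: a target/selector style dispatch boiled down to Java reflection -- a simplification for illustration, not Swarm's actual API. Everything richer than a name plus argument types is invisible at this level:

import java.lang.reflect.Method;

public class MessageDemo {
    // A "selector" reduced to a method name plus argument types: the lowest
    // common denominator that a cross-language layer can agree on.
    record Selector(String name, Class<?>... argTypes) {}

    static Object perform(Object target, Selector sel, Object... args) throws Exception {
        Method m = target.getClass().getMethod(sel.name(), sel.argTypes());
        return m.invoke(target, args);
    }

    public static void main(String[] args) throws Exception {
        // Any object can be a target; richer language features (overloading
        // rules, multiple inheritance) simply don't appear here.
        System.out.println(perform("hello", new Selector("toUpperCase")));
    }
}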

> However, with the kind of compiler features above, one might imagine compiling some file and getting it back as a set of variable mappings and propositions about what happens to the state it touches, or just adding annotations to the interfaces such that propositions could be checked by the compiler or by runtime instrumentation. Eventually, with enough propositions, the implementation code could be generated on demand, and thus the original code discarded. (I'm not actually suggesting doing this with Swarm; it's just for the sake of an example.) The same sort of process would work with other codes too, like model implementations. The point is that a sufficiently descriptive interface can be the implementation. So, by converting toolkits and models into a database of mappings and propositions, more kinds of logic tools can be used to check for correctness and find non-obvious invariants or even scaling laws (stupid things to simulate if they don't result in complexity). The meta representation makes it possible to quickly migrate between toolkits [as you've said], consider novel platforms like Neno (http://neno.lanl.gov), and make visual modeling tools [as you have].

Exactly!
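
To make the propositions-on-interfaces idea concrete, a toy Java sketch -- the @Proposition annotation is hypothetical, not an existing library; a checker or runtime instrumentation could consume these declarations:

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

public class PropositionDemo {
    // A hypothetical annotation carrying a proposition about an operation.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface Proposition { String value(); }

    interface Counter {
        @Proposition("result >= 0")
        int count();
    }

    public static void main(String[] args) {
        // Enumerate the declared propositions; a logic tool could check them.
        for (Method m : Counter.class.getDeclaredMethods()) {
            Proposition p = m.getAnnotation(Proposition.class);
            if (p != null) {
                System.out.println(m.getName() + ": " + p.value());
            }
        }
    }
}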

> In my mind, the thing to optimize is transparency, not purity.

My POV as well -- I think that is why I prefer a common representation for the meta-model that -- if this makes sense -- doesn't attempt a top-to-bottom implementation.

> Transparency is less useful when things aren't simple, but sometimes practical considerations demand some complexity, e.g. a simulation needs to run fast or interoperate with some third-party software. It's easy in a handwave to declare "the legacy code must go", but legacy code is most of the code that actually gets real work done. Merely wrapping it or translating it isn't enough to give insight about what it does, and it isn't exactly realistic to think people will invent or use appropriate domain-specific declarative languages and, in a few spare moments of infinite clarity, map their legacy [simulation] codes to them.

No, but we can facilitate that (my little stab at this is to provide an import tool to suck the interface level -- i.e. no actual rule implementations -- into agent descriptions) and expect that over (a long) time models will migrate. This is what happened with Java, after all... and part of that process was that the Java APIs co-evolved to accommodate those usages. I actually don't think the JetBrains / Fowler vision of ubiquitous ad hoc DSLs generated as part of the coding process is realistic or desirable (!) -- instead, I think what will/should really happen is basically what happens now with frameworks: a few experts creating meta-models and front-end languages that a much larger group of developers utilize.
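
For what it's worth, a rough sketch of what such an interface-level import might look like in Java -- ActDescription and AgentDescription are hypothetical stand-ins for the real model elements:

import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

public class InterfaceImporter {
    // Hypothetical stand-ins for agent/behavior descriptions in a meta-model.
    record ActDescription(String name, List<String> paramTypes) {}
    record AgentDescription(String name, List<ActDescription> acts) {}

    static AgentDescription importInterface(Class<?> cls) {
        List<ActDescription> acts = new ArrayList<>();
        for (Method m : cls.getDeclaredMethods()) {
            List<String> params = new ArrayList<>();
            for (Class<?> p : m.getParameterTypes()) {
                params.add(p.getSimpleName());
            }
            // Only the signature is captured; the rule body is left behind.
            acts.add(new ActDescription(m.getName(), params));
        }
        return new AgentDescription(cls.getSimpleName(), acts);
    }

    public static void main(String[] args) {
        System.out.println(importInterface(Runnable.class));
    }
}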



