
Re: [Swarm-Modelling] Announce: metaABM 1.0.0


From: Marcus G. Daniels
Subject: Re: [Swarm-Modelling] Announce: metaABM 1.0.0
Date: Sun, 25 Nov 2007 23:23:41 -0700
User-agent: Thunderbird 2.0.0.9 (X11/20071115)

Hi Miles,
> There is I think a hidden assumption here that there is something preferable about having a single syntactic and semantic structure for everything (*the* programming language), and I am beginning to find *that* assumption limiting. I appreciate the elegance of a top-to-bottom syntactic structure, and I would love to see that whole approach explored and implemented.
Well, I think there are some CS issues here. There is an increasing body of programs that are useful and address hard problems, but that need maintenance to stay understood and useful. I'd be happy if I could run the same compiler that makes my object code to produce flavors of embeddable meta-model, by dumping abstract syntax trees through some sequence of appropriate checks and algebra, but it's the exception rather than the rule when this is possible. Compiler flags could enforce certain usage to make this easier (much like C99 compliance checks). One could imagine different kinds of high-level virtual machines that could run nearby language lineages as one, and static analysis tools that would give guidance on the nature of incompatibilities between more distant lineages.

In contrast, with Swarm, Java and XPCOM are supported using an abstract notion of target objects and messages, realized as C data structures for a lowest common denominator. This introduces target/message abstractions that don't capture the richness of what any given language can do (e.g. multiple inheritance in C++), and in other cases it requires adding machinery to the client environment (e.g. Selectors for non-Objective-C languages). That approach facilitates integration with just about any popular programming language environment, but it doesn't actually describe what Swarm does when it sends messages (e.g. scheduling semantics).

However, with the kind of compiler features above, one might imagine compiling some file and getting it back as a set of variable mappings and propositions about what happens to the state it touches, or just adding annotations to the interfaces such that propositions could be checked by the compiler or by runtime instrumentation. Eventually, with enough propositions, the implementation code could be generated on demand and the original code discarded. (I'm not actually suggesting doing this with Swarm; it's just for the sake of an example.)
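To make the target/message point concrete, here is a minimal sketch (not Swarm's actual API; all names are hypothetical) of a Selector-style abstraction: a selector names a message independently of the target's class, so a scheduler can treat heterogeneous objects uniformly. Note that nothing in the dispatch itself says when the message runs or what state it touches, which is exactly the gap the annotations above would fill.

```python
# Hypothetical sketch in the spirit of Swarm's target/message abstraction.
# A Selector names a message; dispatch is a lowest-common-denominator
# lookup by name, analogous to what a non-Objective-C client must add.

class Selector:
    def __init__(self, message_name):
        self.message_name = message_name

class Message:
    """A target object paired with a selector, as a schedule entry might hold."""
    def __init__(self, target, selector):
        self.target = target
        self.selector = selector

    def send(self):
        # Look the method up by name and invoke it. Nothing here records
        # scheduling semantics or the state the call reads and writes.
        return getattr(self.target, self.selector.message_name)()

class Agent:
    def __init__(self):
        self.wealth = 10
    def step(self):
        self.wealth -= 1

agent = Agent()
Message(agent, Selector("step")).send()
print(agent.wealth)  # 9
```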
The same sort of process would work with other codes too, like model implementations. The point is that a sufficiently descriptive interface can be the implementation.
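As an illustration of that last point (a hypothetical toy, not a proposal for any real toolkit): if an interface declares each message's effect on state as a proposition, a generic engine can derive the executable behavior from the declaration, and the same proposition that generates the code also checks it, so no hand-written method body remains.

```python
# Hypothetical: a declarative interface where the description of each
# message's effect on state *is* the implementation.

SPEC = {
    "step": {
        "reads": ["wealth"],
        "writes": ["wealth"],
        # the effect, stated as a mapping from old state to new state
        "effect": lambda state: {"wealth": state["wealth"] - 1},
        # a proposition the engine can check at runtime (or a compiler
        # could check statically, given richer annotations)
        "post": lambda old, new: new["wealth"] == old["wealth"] - 1,
    }
}

def send(state, message):
    decl = SPEC[message]
    new_state = {**state, **decl["effect"](state)}
    assert decl["post"](state, new_state), f"postcondition failed for {message}"
    return new_state

state = {"wealth": 10}
state = send(state, "step")
print(state["wealth"])  # 9
```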

So, by converting toolkits and models into a database of mappings and propositions, more kinds of logic tools can be used to check for correctness and to find non-obvious invariants, or even scaling laws (which would be stupid things to simulate if they don't result in complexity). The meta representation makes it possible to quickly migrate between toolkits [as you've said], to consider novel platforms like Neno (http://neno.lanl.gov), and to make visual modeling tools [as you have].
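A toy version of what such a logic tool might do, assuming the model has already been reduced to mappings and transition propositions (here a naive reachability search stands in for a real model checker):

```python
# Hedged sketch: once a model is just data (states plus transition
# propositions), an invariant can be checked by exploring reachable
# states, without ever running the original toolkit code.

def transitions(state):
    # the model as data: one unit of wealth moves between two agents
    a, b = state
    if a > 0:
        yield (a - 1, b + 1)
    if b > 0:
        yield (a + 1, b - 1)

def check_invariant(initial, invariant, max_states=1000):
    """Breadth-limited reachability check; returns (ok, counterexample)."""
    seen, frontier = set(), [initial]
    while frontier and len(seen) < max_states:
        s = frontier.pop()
        if s in seen:
            continue
        seen.add(s)
        if not invariant(s):
            return False, s
        frontier.extend(transitions(s))
    return True, None

# a non-obvious invariant made obvious: total wealth is conserved
ok, counterexample = check_invariant((3, 2), lambda s: sum(s) == 5)
print(ok)  # True
```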

In my mind, the thing to optimize is transparency, not purity. Transparency is less useful when things aren't simple, but sometimes practical considerations demand some complexity, e.g. a simulation needs to run fast or interoperate with some third-party software. It's easy in a handwave to declare "the legacy code must go", but legacy code is most of the code that actually gets real work done. Merely wrapping it or translating it isn't enough to give insight into what it does, and it isn't exactly realistic to think people will invent or use appropriate domain-specific declarative languages and, in a few spare moments of infinite clarity, map their legacy [simulation] codes to them.
Marcus

