Currently I can assemble code into a byte vector and execute it, and most of the assembler now works.
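To make "execute a byte vector" concrete, here is a minimal sketch in C of a stack machine dispatching on a byte vector. The opcode names and encoding are made up for illustration; this is not Guile's actual instruction set:

```c
#include <stdint.h>

/* Hypothetical opcodes -- just a sketch, not Guile's real bytecode. */
enum { OP_PUSH = 0, OP_ADD = 1, OP_MUL = 2, OP_HALT = 3 };

/* Run a byte vector on a tiny stack machine; return the top of stack. */
static long run_bytecode(const uint8_t *code)
{
    long stack[64];
    int sp = 0;
    for (const uint8_t *pc = code;;) {
        switch (*pc++) {
        case OP_PUSH: stack[sp++] = (int8_t)*pc++; break;      /* immediate operand */
        case OP_ADD:  sp--; stack[sp - 1] += stack[sp]; break;
        case OP_MUL:  sp--; stack[sp - 1] *= stack[sp]; break;
        case OP_HALT: return stack[sp - 1];
        }
    }
}
```

So e.g. the byte vector {OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_HALT} evaluates to 5.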
My plan is to keep the current byte-code as the on-disk format and then allow compiling that code to a simple native format
(say a 10-20x increase in size). I plan to add a new kind of objcode data structure so that we can run code in native mode; if a function is evaluated that has not yet been compiled to native code, it will be compiled at that point and the native version used from then on, so the usual laziness is employed.
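Here is a sketch in C of that lazy compile-on-first-call scheme. All the names (`objcode`, `call_objcode`, `compile_to_native`) are invented for illustration, not Guile's actual API, and the "compiler" is a stand-in that just returns a prebuilt C function:

```c
#include <stddef.h>

/* Sketch of lazy native compilation: each objcode starts out with only
   bytecode; the first native-mode call notices the missing native code,
   compiles it, caches it, and runs it. */

typedef long (*native_fn)(long);

struct objcode {
    const unsigned char *bytecode;  /* always present, the on-disk format */
    native_fn native;               /* NULL until first native-mode call  */
};

static int compile_count = 0;       /* just to observe laziness below */

static long square_native(long x) { return x * x; }

/* Stand-in for the real bytecode->native compiler. */
static native_fn compile_to_native(const unsigned char *bytecode)
{
    (void)bytecode;
    compile_count++;
    return square_native;
}

/* Call an objcode, compiling lazily on first use. */
static long call_objcode(struct objcode *f, long arg)
{
    if (f->native == NULL)
        f->native = compile_to_native(f->bytecode);
    return f->native(arg);
}
```

The point is that the compile cost is paid once, on the first call; every later call goes straight through the cached native pointer.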
The complexity of this method will be very low. It will basically branch off to support code for expensive instructions and try to emit at most, say, 10 native instructions for every byte opcode. All in all, not much more than a glorified virtual engine. However, I would expect some loops over simple instructions to be as fast as what wingo just posted on the list, e.g. a simple loop doing 250-500M iterations per second.
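To sketch the "glorified virtual engine" idea: translation walks the bytecode once and emits one short step per opcode, with expensive opcodes just branching to a support routine. In this portable C sketch each emitted step is a function pointer plus an immediate; a real translator would instead copy a short machine-code template (10 instructions or fewer) per opcode. The names are all hypothetical:

```c
/* One translated "step" per source opcode. */
struct vm { long stack[64]; int sp; };
typedef void (*step_fn)(struct vm *, long imm);

/* Cheap opcodes: in real native code these would be inlined templates. */
static void step_push(struct vm *vm, long imm) { vm->stack[vm->sp++] = imm; }
static void step_add (struct vm *vm, long imm)
{ (void)imm; vm->sp--; vm->stack[vm->sp - 1] += vm->stack[vm->sp]; }

/* An "expensive" opcode: the emitted code just calls this support routine. */
static void step_expt(struct vm *vm, long imm)
{
    (void)imm;
    vm->sp--;
    long base = vm->stack[vm->sp - 1], r = 1;
    for (long e = vm->stack[vm->sp]; e > 0; e--) r *= base;
    vm->stack[vm->sp - 1] = r;
}

struct step { step_fn fn; long imm; };

/* "Compiled" code is an array of steps; run it straight through with no
   opcode dispatch, which is where the speedup over a switch loop comes from. */
static long run_steps(const struct step *code, int n)
{
    struct vm vm = { {0}, 0 };
    for (int i = 0; i < n; i++)
        code[i].fn(&vm, code[i].imm);
    return vm.stack[vm.sp - 1];
}
```

A loop body of simple steps like `step_add` is where I'd expect the interesting speed; the `step_expt`-style calls cost roughly what the byte-code engine costs today.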
My personal view is that improvement of our code base has to be incremental, and that experiments of reasonable size and scope should be done; then we can learn from them and at some point carve out a step forward.
I think this is a doable experiment. I have a question here: I would need to add some code to Guile in order to hook in this machine, and it would
be nice to try that out in a Guile branch. Shall I do all this in a local repo, or do you want to join? I feel like adding yet another Guile repo on github or gitorious is pushing things a little, and maybe we could instead start a guile-wipedy-jit branch.
The reason I wanted to fork sbcl is that it has assemblers for x86, x86-64, alpha, hppa, sparc, ppc and mips, i.e. quite a few targets. It would be nice to know which targets to focus on, or whether any other targets need to be added to the list!