From: Paolo Bonzini
Subject: Re: [Lightning] SPARC Fixes
Date: Tue, 31 Oct 2006 15:35:21 +0100
User-agent: Thunderbird 1.5.0.7 (Macintosh/20060909)
This is actually superseded by what I wrote in the other e-mail.  Sorry
about that.

>> Another possibility (simpler for now to implement) is that if
>> JIT_MAX_STACK_AUTOMATIC_AREA is defined, clients should be careful not
>> to push more than that amount of words.
>
> You mean we should just clearly document this macro?
If you look at code produced by compilers, they rarely if ever push/pop.
What they do is allocate an area in which to "spill" the variables that do
not fit into the registers.  Even if the original code is stack-based
bytecode, it is pretty easy to track the stack height at every point in the
bytecode, assuming it is balanced at control-flow junctions, which is true
for practically everything but Forth.  Then, using indexed loads instead of
continuously updating the hardware stack pointer will be faster on most
machines.

Or we could just deprecate pushr/popr and define a better way to provide
stack space for spills.  For 1.2 we have to keep them anyway; even beyond
that, it may be a nice thing to have because it probably looks familiar to
many people.
It does mean we have to adjust the rpn.c example. :-)
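
Just to make the spill-area idea concrete, here is a minimal sketch of how a
bytecode compiler could keep its operand stack in a fixed area instead of
using pushr/popr.  It assumes the lightning 1.x indexed store/load macros
(jit_stxi_i/jit_ldxi_i, as in rpn.c) and JIT_FP as the base register; the
`depth' counter and SPILL_BASE offset are made up for the example and are
not part of lightning:

  #include <lightning.h>

  /* Compile-time stack height; usable because the bytecode stack is
     balanced at control-flow junctions. */
  static int depth;
  #define SPILL_BASE 0   /* assumed start of the spill area, backend-dependent */

  static void
  emit_push (int reg)    /* instead of jit_pushr_i (reg) */
  {
    jit_stxi_i (SPILL_BASE + depth * sizeof (int), JIT_FP, reg);
    depth++;
  }

  static void
  emit_pop (int reg)     /* instead of jit_popr_i (reg) */
  {
    depth--;
    jit_ldxi_i (reg, JIT_FP, SPILL_BASE + depth * sizeof (int));
  }

The maximum value of `depth' over the whole function is then exactly the
amount of spill space that has to be reserved, which is where documenting a
JIT_MAX_STACK_AUTOMATIC_AREA limit (or patching `save') comes in.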
>>> Instead of having a hard limit on the number of registers that may be
>>> pushed, we could also patch the `save' instruction based on the actual
>>> number of `pushr' encountered in the function's code, but I'm not sure
>>> it's worth it.
>>
>> That would be the way to provide stack space for spills -- a lightning
>> instruction patching the `save' on the SPARC, and on other architectures
>> compiling to subr JIT_SP, JIT_SP, N.
>
> What does "spills" mean here?
Automatic variables that do not fit in registers.
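
To give an idea of what patching `save' would involve: on the SPARC the
frame size sits in the 13-bit signed immediate field of the instruction, so
the patch is a single read-modify-write of that word.  The sketch below only
illustrates the encoding; it is not an existing lightning entry point, and
it ignores the minimum frame size and alignment the SPARC ABI requires:

  #include <stdint.h>

  /* Rewrite the immediate of an already-emitted "save %sp, -imm, %sp"
     once the real frame size is known.  Only the low 13 bits (simm13)
     change; the rest of the instruction word stays as emitted. */
  static void
  patch_save_immediate (uint32_t *save_insn, int frame_size)
  {
    int32_t imm = -frame_size;           /* save grows the stack downwards */
    *save_insn = (*save_insn & ~0x1fffu) | ((uint32_t) imm & 0x1fff);
  }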
>> It's actually pretty easy.  Easier than fixing the SETHI/OR part, as is
>> done in jit_patch_movi.
>
> I haven't looked at the instruction encoding, so I have no idea whether
> patching `save' would be an easy task (I guess it shouldn't be too hard).
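
For comparison, patching a constant loaded with a SETHI/OR pair (the
jit_patch_movi case) has to split the new value across two instructions.
Again this is just a sketch of the encoding, not the code lightning
actually uses:

  #include <stdint.h>

  /* Repatch a 32-bit constant loaded by "sethi %hi(value), %reg" followed
     by "or %reg, %lo(value), %reg": SETHI holds the upper 22 bits in its
     imm22 field, the OR holds the lower 10 bits in its simm13 field. */
  static void
  patch_sethi_or (uint32_t *sethi_insn, uint32_t *or_insn, uint32_t value)
  {
    *sethi_insn = (*sethi_insn & ~0x003fffffu) | (value >> 10);    /* imm22 */
    *or_insn    = (*or_insn    & ~0x00001fffu) | (value & 0x3ff);  /* low 10 bits */
  }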
Paolo