Re: Native code generation and gcc

From: Mikael Djurfeldt
Subject: Re: Native code generation and gcc
Date: Sun, 11 Dec 2016 19:09:07 +0100

Many thanks for these links!

It seems like the GCC JIT interface is the kind of "adaptation" of gcc which I asked for. :-)

Then there's the calling convention problem which Helmut brought up earlier in this thread. But I guess there could be workarounds; in any case, one would have to look at this more closely.

Regarding "hotness":

The original GOOPS implementation had a somewhat crazy feature: an application of a generic function to a specific argument list first ran the standard MOP procedure for finding the set of applicable methods and, second, generated from these something called a "cmethod" (compiled method), which, in turn, was stored in a cache as well as applied to the list of arguments.

Next time this generic function was applied to an argument list with the same type signature, the *same* cmethod as had been used the first time could be looked up very quickly in the cache. (This lookup is described in doc/goops.mail in the repository.)
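To make the shape of this concrete, here is a minimal sketch of such a cache, in Python rather than Scheme, with invented names (Generic, find_applicable) and a deliberately naive applicable-method search standing in for the real MOP machinery; it is not GOOPS itself:

```python
# Hypothetical sketch of the cmethod cache: the first call with a given
# type signature runs the expensive applicable-method search; later calls
# with the same signature hit the cache directly.

class Generic:
    def __init__(self, methods):
        # methods: {(type, ...): implementation}, most specific first
        self.methods = methods
        self.cache = {}          # {(type, ...): cmethod}

    def __call__(self, *args):
        sig = tuple(type(a) for a in args)
        cmethod = self.cache.get(sig)
        if cmethod is None:
            cmethod = self.find_applicable(sig)   # stands in for the MOP path
            self.cache[sig] = cmethod
        return cmethod(*args)

    def find_applicable(self, sig):
        # Naive stand-in for the MOP lookup + "compilation" step.
        for msig, impl in self.methods.items():
            if len(msig) == len(sig) and all(
                    issubclass(a, m) for a, m in zip(sig, msig)):
                return impl
        raise TypeError("no applicable method for %r" % (sig,))
```

For example, a two-method generic `Generic({(int, int): ..., (str, str): ...})` takes the slow path once per type signature and the cache hit thereafter.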

The thought behind this was that when a cmethod is compiled, the specific types of the arguments are known. This means that a compiler which compiles the applicable method into a cmethod can do some of the type dispatch at compile time, for example that of slot access. This is partially equivalent to unboxing, but more general, since some of the *generic function applications* can have their type dispatch resolved at compile time too. In the most ambitious approach, one would include return values in the cmethod type signature---something which is natural to do when compiling to CPS. (This type-dispatch elimination was never implemented in GOOPS.)

I was curious how much impact this caching scheme would have in real-world programs. It turned out to work very well; I'm only aware of one complaint, about memory use. Obviously, though, if a generic function with a longer argument list is repeatedly called with different type signatures, this could lead to a combinatorial explosion and fill up memory (as well as being rather inefficient).

When Andy re-wrote GOOPS for the new compiler, the cmethod caching was removed---a sensible thing to do in my mind. *But*, some of the downsides of this scheme could be removed if hotness counting were added to the cache. One could do it in various ways. One way would be to initially associate the argument list type signature with just a counter. If this counter reaches a certain threshold, the applicable method(s) is/are compiled into a cmethod stored in the cache. The storage of type signatures and counters still has the combinatorial explosion problem, but this could now be avoided by limiting the size of the cache such that the counters compete for the available space. (There are further issues to consider, such as adaptability through forgetting, but I won't make this discussion even more complicated.)
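A sketch of that bounded counting scheme, again in Python with assumed values (THRESHOLD, MAX_COUNTERS are made up, and compile_fn stands in for actual cmethod compilation):

```python
# Hypothetical sketch: per-signature call counters in a bounded table.
# A signature is only compiled once its counter reaches THRESHOLD; when
# the counter table is full, the coldest signature is evicted, so the
# counters compete for space instead of growing without bound.

THRESHOLD = 100      # assumed value
MAX_COUNTERS = 64    # assumed capacity

class HotnessCache:
    def __init__(self, compile_fn):
        self.compile_fn = compile_fn   # sig -> cmethod
        self.counts = {}               # {sig: call count}
        self.compiled = {}             # {sig: cmethod}

    def lookup(self, sig):
        """Return a cmethod for sig, or None (caller takes the slow path)."""
        cmethod = self.compiled.get(sig)
        if cmethod is not None:
            return cmethod
        count = self.counts.get(sig, 0) + 1
        if count >= THRESHOLD:
            self.counts.pop(sig, None)  # graduate: counter -> cmethod
            cmethod = self.compile_fn(sig)
            self.compiled[sig] = cmethod
            return cmethod
        if sig not in self.counts and len(self.counts) >= MAX_COUNTERS:
            coldest = min(self.counts, key=self.counts.get)
            del self.counts[coldest]    # evict the coldest signature
        self.counts[sig] = count
        return None
```

The eviction policy here (drop the minimum count) is only one possibility; something like the forgetting mentioned above would need a decay pass over the counters instead.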

Best regards,

On Mon, Dec 5, 2016 at 5:18 PM, Lluís Vilanova <address@hidden> wrote:
Mikael Djurfeldt writes:

> [I apologize beforehand for being completely out of context.]
> Are there fundamental reasons for not re-using the gcc backends for native code generation? I'm thinking of the (im?)possibility to convert the cps to some of the intermediate languages of gcc.

> If it wouldn't cause bad constraints the obvious gain is the many targets (for free), the gcc optimizations, not having to maintain backends and free future development.

> Of course, there's the practical problem that gcc needs to be adapted for this kind of use---but shouldn't it be adapted that way anyway? :)

> Just an (old) idea...

> Mikael

Guile 2.1 has a register-based bytecode VM that makes using a code generation
library like GNU lightning [1] a convenient alternative. In fact, that's the
library used by nash [2] (an experimental Guile VM that generates native code
for hot routines). You also have the experimental GCC JIT interface [3] to
achieve similar goals (available upstream since GCC 5, I think).

IMO, if Guile wants to go the tracing JIT way (like nash), it should store the
CPS representation of routines to be able to iteratively apply more heavy-weight
optimizations as the routine becomes hotter (called more frequently).

For example, you could start with the current state. If the routine is called
many times with the same argument types, you can create a version specialized
for these types, opening more unboxing possibilities (the routine entry point
would then have to be a version dispatcher). If a routine version later becomes
hotter, re-compile that version into native code.

One open question is whether the VM needs to be changed to count routine
"hotness" efficiently (as in nash), or if a simple routine prelude inserted by
Guile's compiler tower could do that almost as efficiently (the bytecode ISA
might need new atomic integer operations to cope with routine tracing in a
multi-threaded app).
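The prelude idea might look something like this (a Python sketch with invented names; the lock stands in for the atomic fetch-and-add instruction the bytecode ISA would need):

```python
import threading

# Hypothetical sketch of a compiler-inserted counting prelude: the lock
# models an atomic fetch-and-add so hotness counts stay correct when the
# routine is entered from multiple threads.

class CountingPrelude:
    def __init__(self, threshold, on_hot):
        self._lock = threading.Lock()
        self._count = 0
        self.threshold = threshold
        self.on_hot = on_hot            # callback: trigger recompilation

    def enter(self):
        with self._lock:                # atomic fetch-and-add stand-in
            self._count += 1
            n = self._count
        if n == self.threshold:         # fires exactly once
            self.on_hot()
```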

Also, none of these are small tasks.


