Guile 3 update -- more instruction explosion, intrinsics

From: Andy Wingo
Subject: Guile 3 update -- more instruction explosion, intrinsics
Date: Fri, 13 Apr 2018 11:08:38 +0200
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/25.3 (gnu/linux)

Hello all :)

This mail is an update on Guile 3 progress, following earlier updates
posted to this list.

I took a break for a while and picked up this work again a couple weeks
ago.  I think the reason I stopped was because I got to agonizing about
how to call out to run-time routines from eventual native code.  Like,
you want to call out from native code to scm_string_set_x, or to some
more specialized routine that's part of the ABI but not the API, like
an internal internal_string_set_x routine.  How do you get the pointer
that code?  How do you manage saving the VM state and restoring it?  Can
you have custom calling conventions?  How do you preserve cross-process
sharing of code pages?

For a while I thought inline caches would be a kind of answer, but after
looking at it for a while I think I am going to punt:

Guile simply isn't all that polymorphic, and the set of runtime routines
callable from native code compiled by Guile is bounded.  Better instead
to simply provide a vtable to native code that contains function
pointers to anything that might need to be callable at run-time.

So that's what I settled on.  I call them "intrinsics" (I would call
them "builtins" but that word is used for something else).  I started
moving over "macro-instructions" that can't be usefully broken apart,
like string-set!, to be intrinsic calls.  AOT-compiled native code will
compile these to vtable calls ("call [reg + offset]"), where reg holds a
pointer to the intrinsics and offset is a fixed offset.  JIT-compiled
native code can inline the intrinsic address of course.
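As a rough illustration (the names and layout here are made up for the
sketch, not Guile's actual ABI), the intrinsics table can be pictured as
a struct of function pointers.  AOT-compiled code reaches an intrinsic
through a fixed offset from the table's base address, which is what
"call [reg + offset]" amounts to; a JIT can instead bake the loaded
pointer directly into the emitted code:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical stand-in for a string object; Guile's real SCM
   representation is a tagged word and differs from this. */
struct fake_string { char *data; size_t len; };

/* The run-time routines themselves. */
static void demo_string_set_x(struct fake_string *s, size_t i, char c) {
  s->data[i] = c;
}
static size_t demo_string_length(struct fake_string *s) {
  return s->len;
}

/* The "intrinsics" vtable: one pointer per run-time routine that
   compiled native code might need to call. */
struct vm_intrinsics {
  void   (*string_set_x)(struct fake_string *, size_t, char);
  size_t (*string_length)(struct fake_string *);
};

static const struct vm_intrinsics intrinsics = {
  demo_string_set_x,
  demo_string_length,
};

/* AOT-compiled code knows only "base register + fixed offset";
   in C we can mimic that indirect call with offsetof. */
static size_t call_string_length_via_offset(const struct vm_intrinsics *base,
                                            struct fake_string *s) {
  size_t off = offsetof(struct vm_intrinsics, string_length);
  size_t (*fn)(struct fake_string *);
  memcpy(&fn, (const char *)base + off, sizeof fn);
  return fn(s);  /* the "call [reg + offset]" */
}
```

A JIT, knowing the table's address at code-generation time, would simply
load the pointer once and emit a direct call to it.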

I also pushed ahead with instruction explosion for string-length,
string-ref, the atomics, integer/char conversion, make-closure, and
f64->scm.  I made a bunch more instructions be intrinsics.  The VM is
thus now smaller and has fewer instructions whose implementations
contain internal branches, which means we're getting closer to native
code.

There are still some more instructions to push to intrinsics.  (Q: When
should an instruction be an intrinsic rather than "exploded" (lowered to
more primitive instructions and control flow)?  A: When the optimizer is
unlikely to be able to elide components of the exploded implementation.)
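To sketch what "exploding" means (a hypothetical lowering, not Guile's
actual instruction set): a macro-instruction like string-ref decomposes
into primitive steps -- load the length, bounds-check, load the byte --
each of which becomes a separate instruction, so the optimizer can elide
individual steps, e.g. dropping the bounds check when the index is
already known to be in range:

```c
#include <assert.h>
#include <stddef.h>

/* Stand-in string object for the sketch. */
struct fake_string { const char *data; size_t len; };

/* Each helper below stands for one primitive VM instruction. */
static size_t prim_string_length(const struct fake_string *s) { return s->len; }
static int    prim_u64_less(size_t a, size_t b) { return a < b; }
static char   prim_load_u8(const struct fake_string *s, size_t i) { return s->data[i]; }

/* string-ref, "exploded": the bounds check is now a visible branch
   that a compiler pass can remove when idx < len is provable.
   Returns 0 where the real VM would raise an out-of-range error. */
static int string_ref_exploded(const struct fake_string *s, size_t idx,
                               char *out) {
  size_t len = prim_string_length(s);
  if (!prim_u64_less(idx, len))
    return 0;
  *out = prim_load_u8(s, idx);
  return 1;
}
```

By contrast, an intrinsic like string-set! stays one opaque call,
because there is nothing in its implementation the optimizer could
usefully elide.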

Then the biggest remaining task is dealing with the call instructions,
which are somewhat large still.  Probably they need exploding.  Once
that's done I think we can look to implementing a simple template method
JIT.  If the performance of that beats 2.2 (as I think it should!), then
it could be a good point to release 3.0 just like that.
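The "template" idea can be approximated in portable C (an analogy only,
not Guile's design): keep one prebuilt code fragment per opcode, and
"compile" a bytecode sequence by selecting the fragment for each
instruction.  A real template JIT would memcpy native-code snippets into
an executable buffer; here the fragments are just C functions:

```c
#include <assert.h>
#include <stddef.h>

enum op { OP_PUSH1, OP_ADD, OP_HALT, OP_COUNT };

struct vm { long stack[16]; int sp; size_t pc; int running; };

typedef void (*template_fn)(struct vm *);

/* One prebuilt "template" per opcode. */
static void t_push1(struct vm *v) { v->stack[v->sp++] = 1; v->pc++; }
static void t_add(struct vm *v)   { v->sp--; v->stack[v->sp - 1] += v->stack[v->sp]; v->pc++; }
static void t_halt(struct vm *v)  { v->running = 0; }

static const template_fn templates[OP_COUNT] = { t_push1, t_add, t_halt };

/* "Compile": select the prebuilt template for each bytecode. */
static void compile(const enum op *code, size_t n, template_fn *out) {
  for (size_t i = 0; i < n; i++)
    out[i] = templates[code[i]];
}

/* Run the compiled sequence; no per-instruction decode remains. */
static long run(const template_fn *code) {
  struct vm v = { {0}, 0, 0, 1 };
  while (v.running)
    code[v.pc](&v);
  return v.stack[v.sp - 1];
}
```

The win over a switch-based interpreter is that decode happens once, at
compile time; execution just runs the selected fragments in order.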

Long-term I think it makes sense to do AOT compilation.  That also opens
the door for self-hosted adaptive optimization.  But, I don't know quite
how to get there and keep our portability story.  Do we keep bytecode
forever?  Only in some cases?  I don't know.  I certainly want to
minimize the amount of C that we have to maintain :)

Likewise in the medium term I think we should be actively moving library
code from C to Scheme.  With the move to intrinsics in the VM, the VM
itself is relying less and less on libguile, making the whole system
less coupled to libguile.  For Guile to prosper in the next 10 years, we
need to be able to retarget it, to WebAssembly and Racket-on-Chez and
PyPy and Graal and a whole host of other things.  The compiler is
producing low-level, fairly portable output currently, which is
appropriate to this goal, but we are limited by libguile.  So let's be
thinking about how to move much of the remaining 80KLOC of C over to
Scheme.  We won't be able to move all of it, but certainly we can move
some of it.

Happy hacking,

Andy

