Re: srfi-18 and the vm

From: Neil Jerram
Subject: Re: srfi-18 and the vm
Date: Sun, 31 May 2009 00:07:14 +0100
User-agent: Gnus/5.11 (Gnus v5.11) Emacs/22.2 (gnu/linux)

Andy Wingo <address@hidden> writes:

> Hi Neil,
> On Mon 25 May 2009 23:57, Neil Jerram <address@hidden> writes:
>> address@hidden (Ludovic Courtès) writes:
>>> Andy Wingo <address@hidden> writes:
>>>> For loading uncompiled scripts, things will be slower, unless your
>>>> modules #:use-syntax some other transformer. I don't know where the
>>>> tradeoff is between the increased expansion speed due to compilation and
>>>> slowdown due to a complete codewalk, but it's certainly there.
>>> Yes.  Likewise, it may be reasonable to assume from now on that most of
>>> the code will be compiled.  For instance, an uncompiled script may just
>>> be a small code snippet that uses mostly compiled code.
>> It seems to me that once we have a completely working compiler, we
>> need to ask if there are any circumstances left where it is better to
>> use the current interpreter instead.
> In the short term (within the next year or so), I would imagine that
> ceval/deval would be faster than an eval written in Scheme -- though I
> do not know.

I wasn't thinking of an eval written in Scheme.  I was assuming the
other option that you mention below, i.e. eval becomes compile on the
fly followed by VM execution.

> On the other hand, an eval written in Scheme would allow for
> tail-recursive calls between the evaluator and the VM.
> Another option, besides an eval in Scheme, is replacing the evaluator
> with the compiler. One could compile on the fly and run the compiled
> code from memory, or cache to the filesystem, either alongside the .scm
> files or in a ~/.guile-comp-cache/ or something.
> But compilation does take some time.

I guess this is the key point.  Even when the compiler has itself been
compiled, compiling a piece of code still takes non-trivial time.

So would it be correct to say, based on your performance observations
so far, that the time needed to compile and then VM-execute a piece of
code is greater than the time needed to interpret the same piece of
code?

In other words, that ahead-of-time compilation is helpful,
performance-wise, but that just-in-time compilation is net negative?

(Perhaps that is a ridiculous question even to ask...  It may be well
known that just-in-time compilation is always net negative if you only
consider one execution of the code concerned.  I'm afraid I'm not
familiar enough with the CS background on this.)

> It seems clear we still need an eval in C, at least to bootstrap Guile.

Yes, good point, I had forgotten that!

>> Do we already have performance measurements, and are those recorded /
>> summarized somewhere?
> We don't have very systematic ones. I was just running some of the
> feeley benchmarks again, and it looked to me that the VM's speed is
> about 3 or 4 times the speed of ceval in 1.9, but I should test with
> benchmarks that run for only 1s or so, and measure compilation time, and
> test against 1.8 too.

So, extremely roughly, that would mean that just-in-time compilation
of code that only runs once could only be net-zero or net-positive if
the time spent compiling it was less than the time saved by VM
execution -- with a 3-4x speedup, roughly two thirds to three quarters
of the interpretation time.  Which for a short piece of code is
unlikely.
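Spelling out that back-of-envelope calculation (a toy model with
illustrative numbers, not measurements from Guile; only the 3-4x VM
speedup figure comes from this thread):

```python
# Back-of-envelope model of the JIT break-even point, using the rough
# figure from this thread that the VM runs code 3-4x faster than ceval.
# All timings are illustrative, not measurements.

def jit_pays_off(interp_time, compile_time, vm_speedup=3.5, runs=1):
    """True if compiling once and running on the VM beats interpreting."""
    interpreted = runs * interp_time
    jitted = compile_time + runs * interp_time / vm_speedup
    return jitted < interpreted

# A single run only wins if compilation costs less than the time saved,
# i.e. interp_time - interp_time/3.5, about 0.71 * interp_time:
print(jit_pays_off(interp_time=1.0, compile_time=0.5, runs=1))   # True
print(jit_pays_off(interp_time=1.0, compile_time=2.0, runs=1))   # False
# With enough repeated runs, the compilation cost amortizes away:
print(jit_pays_off(interp_time=1.0, compile_time=2.0, runs=10))  # True
```

This is why ahead-of-time compilation (or a cache) changes the
picture: the compile cost is paid once, but the VM speedup applies to
every subsequent run.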

>>>> OTOH I would suspect that we can implement some kind of just-in-time
>>>> compilation -- essentially for each use-modules we can check to see if
>>>> the module is compiled, and if not just compile it then and there. It
>>>> would be a little slow the first time, but after that it would load much
>>>> faster, even faster than before. Python does this. We could add a guile
>>>> --no-comp option to disable it.
>>> I don't like this idea because it implies implicitly letting Guile
>>> fiddle with the user's file system.
>> I don't see why that should be.  Isn't it possible to read a .scm
>> file, compile its contents, and hold the compiled programs in memory?
> Yes, compile-and-load, from (system base compile). But you have to redo
> the compilation the next time the file is loaded, of course.
> Incidentally, Ikarus had a similar discussion recently:

I see; good references - there's no need for us to have the same
conversation again!  I think I agree with Aziz's conclusion -
i.e. "auto-caching" should be disabled by default.

The thread suggested to me that Ikarus has to compile the code that it
reads before it can execute it; i.e. that it doesn't retain an
interpretation option.  Is that correct?  If so, Guile might
reasonably make slightly different decisions - e.g. to interpret a
module when using it for the first time, and also to start compiling
it on another thread, with the module's procedures being replaced
one-by-one by VM programs as the compilation progresses.
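The interpret-now, compile-in-the-background scheme sketched above
could look something like the following.  This is a toy model in
Python with entirely hypothetical names (`Module`,
`compile_in_background`); nothing here is Guile API, it only
illustrates the swap-as-compilation-progresses idea:

```python
import threading

# Toy model of "interpret a module immediately, compile it on another
# thread, and swap in compiled procedures as they become ready".

class Module:
    def __init__(self, procedures):
        # Maps procedure name -> callable (initially interpreted/slow).
        self.table = dict(procedures)
        self.lock = threading.Lock()

    def call(self, name, *args):
        # Look up the current binding; it may be the interpreted
        # version or, later, the compiled replacement.
        with self.lock:
            proc = self.table[name]
        return proc(*args)

    def replace(self, name, compiled_proc):
        # Atomically swap an interpreted procedure for a compiled one.
        with self.lock:
            self.table[name] = compiled_proc

def compile_in_background(module, compile_one):
    """Compile each procedure on a worker thread, installing each
    result as soon as it is ready."""
    def worker():
        for name, proc in list(module.table.items()):
            module.replace(name, compile_one(proc))
    t = threading.Thread(target=worker, daemon=True)
    t.start()
    return t

# Callers keep working against the interpreted definitions and
# transparently pick up the compiled ones once installed.
mod = Module({"square": lambda x: x * x})
worker = compile_in_background(mod, lambda proc: proc)  # identity "compiler"
worker.join()
print(mod.call("square", 7))  # 49
```

The per-call lock is the crude part of this sketch; a real
implementation would more likely rely on an atomic pointer swap so
that the fast path pays nothing for the ability to be replaced.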

