emacs-devel

From: Stefan Monnier
Subject: Re: continuation passing in Emacs vs. JUST-THIS-ONE
Date: Mon, 10 Apr 2023 22:53:31 -0400
User-agent: Gnus/5.13 (Gnus v5.13)

>> IOW, your `await` is completely different from Javascript's `await`.
> It depends on what exactly you mean and why you bring up Javascript as
> relevant here.

Because that's the kind of model `futur.el` is trying to implement
(where `futur-let*` corresponds loosely to `await`, just without the
auto-CPS-conversion).
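
Roughly, where Javascript would write `const x = await someAsyncOp();
use(x);`, with `futur.el` one would write something like this
(`some-async-op` and `use` are just placeholders, not real functions):

    (futur-let* ((x <- (some-async-op)))
      (use x))

The difference is that nothing rewrites the rest of the calling
function for you: only the body of the `futur-let*` becomes the
continuation.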

> Also, Emacs does not have as sophisticated an event loop as Javascript.

Not sure what you mean by that.

>> And the use `await` above means that your Emacs will block while waiting
>> for one result.  `futur-let*` instead lets you compose async operations
>> without blocking Emacs, and thus works more like Javascript's `await`.
> Blocking the current thread for one result is fine, because all the
> futures already run in other threads in the "background", so there is
> nothing else to do.

You can't know that.  There can be other async processes whose
filters should be run, timers to be executed, other threads to run, ...
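
That's also why even a deliberately blocking wait can't just put Emacs
to sleep: it has to keep servicing the rest of Emacs while it waits.
A rough sketch of such a wait loop, using plain Emacs primitives (this
is not the actual `futur-wait` code, and `future-done-p` is a
hypothetical predicate used only for illustration):

    (defun my-blocking-wait (future)
      (while (not (future-done-p future))
        ;; Let process output be read, filters and sentinels run, and
        ;; pending timers fire, for up to 0.1s at a time.
        (accept-process-output nil 0.1))
      future)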

> If you mean that you want to use the editor at the same time, just run
> the example in another thread.

The idea is to use `futur.el` *instead of* threads.

> But then you have to look for the result in the *Messages* buffer.
> If I actually want to get the same behaviour as C-x C-e
> (eval-last-sexp), then I want await to block Emacs; and this is what
> await at top level does.

Indeed, there are various cases where you do want to wait (which is why
I provide `futur-wait`).  But its use should be fairly limited (to the
"top level").

> No, the iter case does map directly to futures:
>
> (await
>  (async-iter
>    (let ((a (async-iter
>               (message "a1")
>               (await-iter (sleep-iter3 3))
>               (message "a2")
>               1))
>          (b (async-iter
>               (message "b1")
>               (let ((c (async-iter
>                          (message "c1")
>                          (await-iter (sleep-iter3 3))
>                          (message "c2")
>                          2)))
>                 (message "b2")
>                 (+ 3 (await-iter c))))))
>      (+ (await-iter a) (await-iter b)))))

I must say I don't understand this example: in which sense is it using
"iter"?  I don't see any `iter-yield`.

> The difference with, for example, Javascript is that I drive the polling
> loop explicitly here, while Javascript queues the continuations in the
> event loop implicitly.

`futur.el` also "queues the continuations in the event loop".

>>> Calling await immediately after async is useless (simply use a blocking
>>> call).  The point of a future is to make the distance between those calls
>>> as big as possible, so that the sum of times in the sequential case is
>>> replaced with the max of times in the parallel case.
>> You're looking for parallelism.  I'm not.
> What do you mean exactly?

That `futur.el` is not primarily concerned with allowing you to run
several subprocesses to exploit your multiple cores.  It's instead
primarily concerned with making it easier to write asynchronous code.

One of the intended use cases would be for completion tables to return
futures (which, in many cases, will have already been computed
synchronously, but not always).

> I am asking because:
>
> https://wiki.haskell.org/Parallelism_vs._Concurrency
>
>    Warning: Not all programmers agree on the meaning of the terms
>    'parallelism' and 'concurrency'. They may define them in different
>    ways or do not distinguish them at all.

Yet I have never heard anyone disagree with the definitions given at
the beginning of that very same page.  More specifically, those who may
disagree are those who didn't know there was a distinction :-)

> But it seems that you insist on composing promises sequentially:

No, I'm merely making it easy to do that.

> Also, futur.el does seem to run callbacks synchronously:

I don't think so: it runs them via `funcall-later`.
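
I.e. the callbacks aren't run from within the code that resolves the
future; they're queued to run later from the main loop.  A rough
approximation of that behavior (not the actual implementation) would
be a zero-delay timer:

    (defun my-funcall-later (function &rest args)
      "Call FUNCTION with ARGS from the main loop, not right away."
      (apply #'run-with-timer 0 nil function args))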

> In this Javascript example, a and b appear to run in parallel (shall I
> say concurrently?):
>
> function sleep(sec) {
>   return new Promise(resolve => {
>     setTimeout(() => {resolve(sec);}, sec * 1000);
>   });
> }
> async function test() {
>   const a = sleep(9);
>   const b = sleep(8);
>   const z = await a + await b;
>   console.log(z);
> }
> test();
>
> Here the console log will show 17 after 9sec.
> It will not show 17 after 17sec.
>
> Can futur.el do that?

Of course.  You could do something like

      (futur-let*
          ((a (futur-let* ((_ <- (futur-process-make
                                  :command '("sleep" "9"))))
                9))
           (b (futur-let* ((_ <- (futur-process-make
                                  :command '("sleep" "8"))))
                8))
           ;; Both processes above are started right away when the plain
           ;; bindings for a and b are made; only the <- bindings below
           ;; wait, so this takes about max(9,8) = 9s rather than 9+8.
           (a-val <- a)
           (b-val <- b))
        (message "Result = %s" (+ a-val b-val)))

> Sure, if the consumer does not really need the value of the result of
> the asynchronous computation, just plug in a callback that does
> something later.

How do you plug in a callback in code A which waits for code B to finish
when code A doesn't know if code B is doing its computation
synchronously or not, and if B does it asynchronously, A doesn't know if
it's done via timers, via some kind of hooks, via a subprocess which
will end when the computation is done, via a subprocess which will be
kept around for other purposes after the computation is done, etc... ?

That's what `futur.el` is about: abstracting away those differences
behind the uniform API of a "future".
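
From A's side it then always looks the same, regardless of how B
produces its value (here `b-future` stands for whatever future code B
handed back):

    ;; Code A only sees a future value; the same code works whether it
    ;; gets resolved from a timer, a hook, or a process sentinel, and
    ;; whether that resolution already happened or not.
    (futur-let* ((b-val <- b-future))
      (message "B finished with %S" b-val))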

> In your example, you immediately return a lie and then
> fix it later asynchronously from a callback.

Yes.  That's not due to `futur.el`, tho: it's due to the conflicting
requirements of jit-lock and the need to make a costly computation in
a subprocess in order to know what needs to be highlighted and how.

> Maybe it is confusing because you describe what the producer does, but
> not what the consumer does.  And in your example, it does not matter
> what value the consumer receives because the callback will be able to
> fix it later.  In your example, there is no consumer that needs the
> value of the future.

Yes, there is a consumer which will "backpatch" the highlighting.
But since it's done behind the back of jit-lock, we need to write it
by hand.

>> When writing the code by hand, for the cases targeted by my library, you
>> *have* to use process sentinels.  `futur.el` just provides a fairly thin
>> layer on top.  Lisp can't just "figure those out" for you.
>
> async-process uses a process sentinel, but this is just an implementation
> detail specific to asynchronous processes.  It does not have to leak out
> of the future/async/await "abstraction".

Indeed, the users of the future won't know whether it's waiting for some
process to complete or for something else.  They'll just call
`futur-let*` or `futur-wait` or somesuch.
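
Compare with what writing it by hand looks like, where the caller has
to know it's dealing with a subprocess and install the sentinel itself
(plain Emacs APIs, nothing futur-specific):

    (let ((proc (make-process :name "work" :command '("sleep" "3"))))
      (set-process-sentinel
       proc
       (lambda (p _event)
         (when (memq (process-status p) '(exit signal))
           (message "done, exit code %d" (process-exit-status p))))))

`futur.el` wraps that pattern up once, so callers only ever see the
future.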

> futur.el is completely broken,

Indeed, it's work in progress, not at all usable as of now.

> I think that your confusion is caused by the decision that
> futur-process-make yields the exit code.  That is wrong: the exit code is
> logically not the resolved value (promise resolution); it indicates
> failure (promise rejection).

Not necessarily, it all depends on what the process is doing.

Similarly the "intended return value" of a process will depend on what
the process does.  In some cases it will be the stdout, but I see no
reason to restrict my fundamental function to such a choice.  It's easy
to build on top of `futur-process-make` a higher-level function which
returns the stdout as the result of the future.
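
Something along those lines, say (a sketch only: it assumes the future
returned by `futur-process-make` resolves to the exit code, as
discussed above, and that it accepts a `:buffer` argument like
`make-process` does, which is an assumption about a work-in-progress
API):

    (defun my-futur-process-output (command)
      "Return a future whose value is COMMAND's stdout, as a string."
      (let ((buf (generate-new-buffer " *futur-output*")))
        (futur-let* ((_exit <- (futur-process-make :command command
                                                   :buffer buf)))
          (unwind-protect
              (with-current-buffer buf (buffer-string))
            (kill-buffer buf)))))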


        Stefan



