Re: continuation passing in Emacs vs. JUST-THIS-ONE

From: Lynn Winebarger
Subject: Re: continuation passing in Emacs vs. JUST-THIS-ONE
Date: Mon, 17 Apr 2023 22:56:13 -0400

On Mon, Apr 17, 2023 at 3:50 PM Stefan Monnier <monnier@iro.umontreal.ca> wrote:
> > This whole thread seems to echo the difference between "stackless" and
> > "stackful" coroutines discussed in
> > https://nullprogram.com/blog/2019/03/10/ by the author of emacs-aio,
> > with generator-style rewriting corresponding to stackless and threads
> > to "stackful".  So when you say "save as much as threads do", I'm not
> > clear if you're talking about rewriting code to essentially create a
> > heap allocated version of the same information that a thread has in
> > the form of its stack, or something more limited like some particular
> > set of special bindings.
> Indeed to "save as much as threads do" we'd have to essentially create
> a heap allocated version of the same info.
> [ I don't think that's what we want.  ]

It sounds like you would end up with a user-implemented call/cc or
"spaghetti stack" construct, so I would agree.

> > It seems to me what one would really like is for primitives that might
> > block to just return a future that's treated like any other value,
> > except that "futurep" would return true and primitive operations would
> > implicitly wait on the futures in their arguments.
> I think experience shows that doing that implicitly everywhere is not
> a good idea, because it makes it all too easy to accidentally block
> waiting for a future.

I wrote that incorrectly - I meant that primitive operations would add
a continuation to the future and return a future for their result.
Basically, a computation would never block, it would just build
continuation trees (in the form of futures) and return to the
top-level.  Although that assumes the system would be able to allocate
those futures without blocking for GC work.
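To make the "continuation trees" idea concrete, here is a rough sketch of
the never-block model, with hypothetical names (`my-future-then',
`my-future-settle') - this is not futur.el's actual API.  An operation on a
pending future just queues a continuation and immediately returns another
future, so control always falls back to the top level:

    ;; -*- lexical-binding: t; -*-
    (require 'cl-lib)

    (cl-defstruct my-future value done continuations)

    (defun my-future-then (fut fun)
      "Attach FUN to FUT; return a new future for FUN's result.
    Never blocks: if FUT is pending, FUN is queued as a continuation."
      (let ((result (make-my-future)))
        (if (my-future-done fut)
            (my-future-settle result (funcall fun (my-future-value fut)))
          (push (lambda (v) (my-future-settle result (funcall fun v)))
                (my-future-continuations fut)))
        result))

    (defun my-future-settle (fut value)
      "Resolve FUT with VALUE and run any queued continuations."
      (setf (my-future-value fut) value
            (my-future-done fut) t)
      (dolist (k (my-future-continuations fut))
        (funcall k value)))

Chaining `my-future-then' calls off a single pending future is exactly
what builds the tree: each call adds a branch without ever waiting.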

> Instead, you want to replace this "implicit" by a mechanism that is "as
> lightweight as possible" (so it's "almost implicit") and that makes it
> easy for the programmer to control whether the code should rather block
> for the future's result (e.g. `futur-wait`) or "delay itself" until
> after the future's completion (e.g. `future-let*`).

At some point in this thread you said you weren't sure what the
right semantics are in terms of which information to save, etc.  I
posed these implicit semantics as a way to think about what "the
right thing" would be.  Would all operations preserve the same (lisp)
machine state, or would it differ depending on the nature of the
operator?  [ That is the kind of question it might be useful to work
out in this thought experiment. ]

As you've defined futur-let, the variable being bound is a future
because you construct it as one, but it is still a normal variable.

What if, instead, we define a "futur-abstraction" (lambda/futur (v)
body ...) in which v is treated as a future by default, and a
futur-conditional form (if-available v ready-expr not-ready-expr)
with the obvious meaning?  If v appears as the argument to a
lambda/futur function object, it is passed as-is.  Otherwise, a
reference to v would be rewritten as (futur-wait v).  Some syntactic
sugar, (futur-escape v) => (if-available v v), could be used to pass
the future to arbitrary functions.  Then futur-let and futur-let*
could be defined with the standard expansion, with lambda replaced by
lambda/futur.
Otherwise, I'm not sure what the syntax really buys you.

> > I think that would provide the asynchronous but not concurrent
> > semantics you're talking about.
> FWIW, I'm in favor of both more concurrency and more parallelism.
> My earlier remark was simply pointing out that the design of `futur.el`
> is not trying to make Emacs faster.

It would be easier if elisp threads were orthogonal to system threads,
so that any elisp thread could be run on any available system thread.
Multiprocessing could be done by creating multiple lisp VMs in a
process (i.e. lisp VM orthogonal to a physical core), each with their
own heap and globals in addition to some shared heap with well-defined
synchronization.  The "global interpreter lock" would become a "lisp
machine lock", with (non-preemptive, one-shot continuation type) elisp
threads being local to the machine.  That seems to me the simplest way
to coherently extend the lisp semantics to multi-processing.  The
display would presumably have to exist in the shared space for
anything interesting to happen in terms of editing, but buffers could
be local to a particular lisp machine.

I thought I saw segmented stack allocation implemented on master last
year (by Mattias Engdegård?), but it doesn't appear to be there any
longer.  If that infrastructure were in place, then user-space
cooperative threading via one-shot continuations (plus trampolining
by kernel threads and user-space scheduling of the user-space
threads) would seem viable.
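The trampolining part, at least, is easy to picture without the stack
machinery: a toy "thread" is just a thunk that either finishes or returns
the continuation thunk to be rescheduled.  Everything here is a made-up
illustration, not a proposal for the real representation:

    ;; -*- lexical-binding: t; -*-
    (defun toy-run-threads (threads)
      "Round-robin THREADS (a list of thunks) until all are done.
    A thunk signals \"more to do\" by returning another thunk."
      (while threads
        (let ((next (funcall (pop threads))))
          (when (functionp next)
            (setq threads (append threads (list next)))))))

    (defun toy-counter (name n)
      "Return a thread that counts down from N, yielding after each step."
      (lambda ()
        (if (zerop n)
            (progn (message "%s done" name) 'done)
          (message "%s: %d" name n)
          (toy-counter name (1- n)))))

    ;; (toy-run-threads (list (toy-counter "a" 2) (toy-counter "b" 2)))
    ;; interleaves the two counters.

With one-shot continuations, the returned thunk would instead be the
captured (segmented) stack, and the scheduler could hand it to any
available kernel thread.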

