
Re: "concurrency" branch updated

From: Eli Zaretskii
Subject: Re: "concurrency" branch updated
Date: Wed, 04 Nov 2015 17:40:41 +0200

> From: Ken Raeburn <address@hidden>
> Date: Wed, 4 Nov 2015 04:20:37 -0500
> Cc: address@hidden,
>  address@hidden
> > On Nov 3, 2015, at 11:29, Eli Zaretskii <address@hidden> wrote:
> > 
> >> From: Ken Raeburn <address@hidden>
> >> Date: Tue, 3 Nov 2015 04:40:25 -0500
> >> Cc: "address@hidden discussions" <address@hidden>
> >> 
> >> At some point, we’ll want to demonstrate practical utility; not a
> >> trivial demo program that displays a few messages, and nothing on
> >> the scale of rewriting all of Gnus to be multithreaded, but
> >> somewhere in between.  I’m not sure what would be a good example.
> >> A version of generator.el that uses threads instead of the CPS
> >> transformation of everything is a possibility, and it would
> >> probably simplify the writing and compiling of the generators, but
> >> it’d probably be more heavy-weight at run time.  Prefetching files’
> >> contents, or searching already-loaded files, while
> >> tags-query-replace waits for the user to respond to a prompt?
> >> Improving fontification somehow?
> > 
> > Given that only one thread can run Lisp, is the above even possible?
> Implementing a generator with a thread seems somewhat
> straightforward, needing some sort of simple communication channel
> between the main thread and the generator thread to pass “need next
> value” and “here’s the next value” messages back and forth; some
> extra work would be needed so that dropping all references to a
> generator makes everything, including the thread, go away.  Raising
> an error in the thread’s “yield” calls may be a way to tackle that,
> though it changes the semantics within the generator a bit.
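The two-message channel Ken describes can be sketched outside Emacs; the following is a minimal Python illustration of the pattern (the class name and helper are made up for the sketch, and Python's queue/threading stand in for the branch's Emacs Lisp thread primitives).  A bounded queue carries the “here’s the next value” messages, and the consumer pulling from it plays the role of “need next value”; the daemon flag is a rough stand-in for letting the thread go away with its generator:

```python
import queue
import threading

class ThreadGenerator:
    """A generator backed by a worker thread (illustrative sketch).

    The worker posts each value on a bounded queue ("here's the next
    value"); the consumer blocks on the queue ("need next value").
    """
    _DONE = object()  # sentinel marking exhaustion

    def __init__(self, func):
        # maxsize=1: the worker can run at most one value ahead of the
        # consumer, approximating strict request/reply alternation.
        self._values = queue.Queue(maxsize=1)
        threading.Thread(target=self._run, args=(func,),
                         daemon=True).start()

    def _run(self, func):
        for value in func():
            self._values.put(value)   # "here's the next value"
        self._values.put(self._DONE)

    def __iter__(self):
        return self

    def __next__(self):
        value = self._values.get()    # "need next value"
        if value is self._DONE:
            raise StopIteration
        return value

def count_to_three():
    yield from (1, 2, 3)

print(list(ThreadGenerator(count_to_three)))  # [1, 2, 3]
```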

Both the generator and its consumer run Lisp, so they can only run in
sequence.  How is this different from running them both in a single
thread?

> For prefetching file contents or searching existing buffers, the
> “main” thread can release the global lock when it prompts for the
> user’s input, and a background thread can create buffers and load
> files, or search buffers for patterns, tossing results onto some sort
> of queue or other data structure for consumption by the main thread
> when it finishes with the file it’s on.
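The prefetch-while-prompting idea above can be sketched as follows in Python (purely illustrative; the file names and `prefetch` helper are invented for the sketch).  A worker reads files ahead while the main thread would be busy prompting the user, posting results on a queue:

```python
import pathlib
import queue
import tempfile
import threading

def prefetch(paths, results):
    """Worker: read each file and queue (path, text) pairs for the
    main thread to consume when it is ready for the next file."""
    for path in paths:
        results.put((path, pathlib.Path(path).read_text()))
    results.put(None)  # sentinel: nothing left to prefetch

# Demo: two temporary files stand in for a tags-query-replace file list.
with tempfile.TemporaryDirectory() as tmp:
    paths = []
    for i in range(2):
        p = pathlib.Path(tmp, f"file{i}.txt")
        p.write_text(f"contents {i}")
        paths.append(str(p))

    results = queue.Queue()
    threading.Thread(target=prefetch, args=(paths, results)).start()

    # Main thread: a real editor would be prompting the user here
    # while contents accumulate; drain them as they become available.
    got = []
    while (item := results.get()) is not None:
        path, text = item
        got.append(text)
    print(got)  # ['contents 0', 'contents 1']
```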

But then you are not talking about "normal" visiting of files or
searching of buffers.  You are talking about specialized features that
visit a large number of files or are capable of somehow marking lots
of search hits for future presentation to users.  That is a far cry
from how we do this stuff currently -- you ask the user first, _then_
you search or visit the file she asked for.

> refactoring insert-file-contents into a minimal file-reading routine
> that does no Lisp callbacks and another to deal with file name
> handlers and hooks and such could let us do the former on a helper
> thread and the latter (which could prompt the user) in the main
> thread at the expected time.
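The proposed split can be sketched like this in Python (illustrative only; `raw_read` and the hook mechanics are invented stand-ins for the refactored pieces of insert-file-contents).  Phase 1 is pure I/O with no callbacks, so a helper thread can run it; phase 2 runs on the main thread, where hooks may call back into Lisp:

```python
import pathlib
import queue
import tempfile
import threading

def raw_read(path, done):
    """Phase 1, safe on a helper thread: pure I/O, no Lisp callbacks."""
    done.put(pathlib.Path(path).read_bytes())

def insert_file_contents(path, hooks):
    """Phase 2, on the main thread: decode and run hook functions,
    which in Emacs could prompt the user or call back into Lisp."""
    done = queue.Queue()
    threading.Thread(target=raw_read, args=(path, done)).start()
    # ...the main thread could keep responding to the user here...
    text = done.get().decode("utf-8")
    for hook in hooks:   # stand-ins for file-name handlers and hooks
        text = hook(text)
    return text

with tempfile.TemporaryDirectory() as tmp:
    p = pathlib.Path(tmp, "a.txt")
    p.write_text("hello")
    result = insert_file_contents(str(p), [str.upper])
print(result)  # HELLO
```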

You are talking about some significant refactoring here; we currently
do all of this on the fly.  In any case, I can understand how this
would be a win with remote files, but with local files I'm quite sure
most of the time for inserting a file is taken by stuff like decoding
its contents, which we also do on the fly and which can call Lisp.
The I/O itself is quite fast nowadays, I think.  Just compare
insert-file-contents with insert-file-contents-literally for the same
large file, and see the big difference, especially if it includes some
non-ASCII text.
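The claim that decoding dominates raw I/O can be illustrated outside Emacs with a rough Python sketch (the corpus and sizes are made up; a byte copy stands in for the raw read and UTF-8 decoding stands in for coding-system decoding):

```python
import time

# Made-up corpus; é and ö force multi-byte UTF-8 sequences,
# like the non-ASCII text mentioned above.
data = ("héllo wörld " * 1_000_000).encode("utf-8")

t0 = time.perf_counter()
raw = bytes(data)               # stand-in for the raw read (byte copy)
t1 = time.perf_counter()
text = data.decode("utf-8")     # stand-in for coding-system decoding
t2 = time.perf_counter()

print(f"copy:   {t1 - t0:.4f}s")
print(f"decode: {t2 - t1:.4f}s")
```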

> Both of those examples are mainly about running some extra work in
> the moments while we’re waiting for the user to respond to a prompt.
> We may be able to do the same with idle timers or other such
> mechanisms.  In cases like that, I think it may come down to whether
> it’s easier and/or more maintainable to write code that cranks
> through the next step of an explicitly managed state machine, or
> structured code that maintains its state in program counters and
> variables local to each stack frame… sometimes it’s one, sometimes
> it’s the other.
> As to fontification… I expect the code is pretty tight now, but maybe
> someone who knows that code has some insight into whether we could do
> it better with more CPU cores available.
> So… yeah, I think some of them are possible, but I’m not sure any of
> them would be a particularly good way to show off.  Got any
> suggestions?

I think features that use timers, and idle timers in particular, are
natural candidates for using threads.  Stealth font-lock comes to
mind, for example.

> >> Understood.  I think there may also be places where we could use
> >> threads in ways less visible to the Lisp world; TLS and redisplay
> >> come to mind.
> > 
> > Given the general model-view-controller design of Emacs and the
> > structure of its main loop, is making redisplay run in a separate
> > thread really viable?
> I’m not sure.  I’m not picturing redisplay running concurrently with
> Lisp so much as redisplay on display 1 running concurrently with
> redisplay on display 2, all happening at the same point in the code
> where we now run redisplay.

Why is this use case important?  Do we really believe someone might
look at 2 different X displays at the same time?  Perhaps you meant
frames, not displays.  This could make a lot of sense, except that:

> (Ignoring for the moment the bits where redisplay can trigger Lisp
> evaluation.)

We cannot really ignore that, because this feature is used a lot.
However, the global lock will probably solve this.

A more problematic issue is that the display engine currently assumes
that (almost) nothing happens with buffers and strings while it does
its job.  If redisplay is multithreaded, we need to make sure no other
thread that can touch Lisp data could run.
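That arrangement can be sketched in Python (illustrative only; the lock name and frame bookkeeping are invented).  The global lock is held for the whole redisplay cycle, so per-frame redisplay threads may run in parallel with each other while no thread that can touch Lisp data runs:

```python
import threading

GLOBAL_LISP_LOCK = threading.Lock()  # stand-in for the branch's global lock

def redisplay_frame(frame_id, results):
    # Reads (hypothetical) buffer data only: safe to run alongside the
    # other redisplay threads, but not alongside Lisp mutating that data.
    results[frame_id] = f"frame {frame_id} redrawn"

def redisplay_all(n_frames):
    results = {}
    # Hold the global lock for the whole cycle, so no thread that can
    # touch Lisp data runs while the display engine does its job.
    with GLOBAL_LISP_LOCK:
        threads = [threading.Thread(target=redisplay_frame,
                                    args=(i, results))
                   for i in range(n_frames)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
    return results

print(redisplay_all(2))
```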

> I am making some assumptions that redisplay isn’t doing many costly
> calculations compared to the cost of pushing the bits to the glass.

That's not really true, although the display engine tries very hard to
be very fast.  But I've seen cycles taking 10 msec and even 50 msec
(which already borders on crossing the annoyance threshold).  So there
are some pretty costly calculations during redisplay, which is why the
display engine is heavily optimized to avoid them as much as possible.

> I suspect TLS is probably the more interesting case.

What do we have in TLS that we don't have in any network connection?
