
Re: "concurrency" branch updated

From: Ken Raeburn
Subject: Re: "concurrency" branch updated
Date: Wed, 4 Nov 2015 14:48:12 -0500

>> Implementing a generator with a thread seems somewhat straightforward,
>> needing some sort of simple communication channel between the main thread
>> and the generator thread to pass “need next value” and “here’s the next
>> value” messages back and forth; some extra work would be needed so that
>> dropping all references to a generator makes everything, including the
>> thread, go away.  Raising an error in the thread’s “yield” calls may be a
>> way to tackle that, though it changes the semantics within the generator
>> a bit.
> Both the generator and its consumer run Lisp, so they can only run in
> sequence.  How is this different from running them both in a single
> thread?

In this case, it’s about how you’d write the generator code.  The 
multithreaded version would have other issues (like having to properly shut 
down the new thread when we’re done with the generator), but it wouldn’t 
require writing everything with special macros that do CPS transformations.  
If I want to yield values from within a function invoked via mapcar, I don’t 
have to write an iter-mapcar macro to turn everything inside-out under the 
covers.
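To make the idea concrete, here’s a rough sketch of a thread-backed generator 
using the concurrency branch’s primitives (make-thread, make-mutex, 
make-condition-variable).  The names are made up for illustration, and it 
deliberately ignores the cleanup problem mentioned above:

```elisp
;; Hypothetical sketch only; `my-make-generator' is not a real function.
(defun my-make-generator (producer)
  "Return a closure producing the values PRODUCER passes to its argument.
PRODUCER is run in a new thread and called with one argument, a
\"yield\" function.  The returned closure blocks until the next
value is available, and returns nil when PRODUCER finishes."
  (let* ((mutex (make-mutex))
         (cond-var (make-condition-variable mutex))
         (slot nil) (have-value nil) (done nil))
    (make-thread
     (lambda ()
       (funcall producer
                (lambda (value)
                  (with-mutex mutex
                    (setq slot value have-value t)
                    (condition-notify cond-var)
                    ;; Block until the consumer has taken the value.
                    (while have-value
                      (condition-wait cond-var)))))
       (with-mutex mutex
         (setq done t)
         (condition-notify cond-var))))
    (lambda ()
      (with-mutex mutex
        (while (not (or have-value done))
          (condition-wait cond-var))
        (unless done
          (prog1 slot
            (setq have-value nil)
            (condition-notify cond-var)))))))
```

The producer can call its yield function from anywhere, including inside a 
function passed to mapcar, which is exactly what the CPS-macro approach can’t 
do without an iter-mapcar.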

>> For prefetching file contents or searching existing buffers, the “main”
>> thread can release the global lock when it prompts for the user’s input,
>> and a background thread can create buffers and load files, or search
>> buffers for patterns, tossing results onto some sort of queue or other
>> data structure for consumption by the main thread when it finishes with
>> the file it’s on.
> But then you are not talking about "normal" visiting of files or
> searching of buffers.  You are talking about specialized features that
> visit a large number of files or are capable of somehow marking lots of
> search hits for future presentation to users.  That is a far cry from
> how we do this stuff currently -- you ask the user first, _then_ you
> search or visit the file she asked for.

I haven’t used tags-query-replace in a while, but I don’t recall it asking me 
whether I wanted to visit each file.  But yes, I’m thinking of larger 
operations where the next stage is fairly predictable and probably does no 
harm if we optimistically start it early.  Smaller stuff may benefit too (I 
hope), but I’d guess there’s a greater chance the thread-switching overhead 
becomes an issue; I could well be overestimating it.  And some of the simpler 
cases, like highlighting all regexp matches in the visible part of the current 
buffer while doing a search, are already handled, though we could look at how 
the code would compare if rewritten to use threads.
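The queue idea above might look something like this.  Again a hypothetical 
sketch (the `my-prefetch-*' names are invented), glossing over questions like 
whether find-file-noselect is safe to call from a non-main thread when it 
needs to prompt:

```elisp
;; Background thread visits files and pushes the resulting buffers
;; onto a mutex-protected queue; the main thread drains the queue
;; whenever it's ready for the next file.
(defvar my-prefetch-queue nil)
(defvar my-prefetch-mutex (make-mutex "prefetch"))

(defun my-prefetch-files (files)
  "Start a thread that visits FILES and queues the resulting buffers."
  (make-thread
   (lambda ()
     (dolist (file files)
       (let ((buf (find-file-noselect file)))
         (with-mutex my-prefetch-mutex
           (push (cons file buf) my-prefetch-queue)))))))

(defun my-next-prefetched ()
  "Pop one (FILE . BUFFER) pair off the queue, or return nil if empty."
  (with-mutex my-prefetch-mutex
    (pop my-prefetch-queue)))
```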

> You are talking about some significant refactoring here; we currently
> do all of this on the fly.  In any case, I can understand how this
> would be a win with remote files, but with local files I'm quite sure
> most of the time for inserting a file is taken by stuff like decoding
> its contents, which we also do on the fly and which can call Lisp.
> The I/O itself is quite fast nowadays, I think.  Just compare
> insert-file-contents with insert-file-contents-literally for the same
> large file, and see the big difference, especially if it includes some
> non-ASCII text.

I haven’t done that test, but I have used an NFS server that got slow at 
times.  And NFS from Amazon virtual machines back to my office, which is 
always a bit slow.  And sshfs, which can be slow too.  None of which Emacs can 
do anything about directly.
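For anyone who wants to run the comparison Eli suggests, something like this 
should do it, using the standard benchmark-run macro ("large-file" is a 
placeholder path):

```elisp
(require 'benchmark)
;; Print (ELAPSED GCS GC-TIME) for each insertion function.
(dolist (fn '(insert-file-contents insert-file-contents-literally))
  (with-temp-buffer
    (message "%s: %S" fn
             (benchmark-run 1 (funcall fn "large-file")))))
```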

>> So… yeah, I think some of them are possible, but I’m not sure any of them
>> would be a particularly good way to show off.  Got any suggestions?
> I think features that use timers, and idle timers in particular, are
> natural candidates for using threads.  Stealth font-lock comes to
> mind, for example.

That’s what I was thinking of when I mentioned fontification.  I hope thread 
switches are fast enough.
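The thread version of such a job would presumably fontify in chunks, calling 
thread-yield between chunks so other Lisp threads (and the main thread) get a 
turn under the global lock.  A rough sketch, with an arbitrary chunk size:

```elisp
;; Hypothetical sketch; chunked fontification in a background thread.
(defun my-stealth-fontify (buffer)
  "Fontify BUFFER in a thread, yielding between 1024-char chunks."
  (make-thread
   (lambda ()
     (with-current-buffer buffer
       (let ((pos (point-min))
             (chunk 1024))
         (while (< pos (point-max))
           (font-lock-fontify-region pos (min (point-max) (+ pos chunk)))
           (setq pos (+ pos chunk))
           (thread-yield)))))))
```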

>> I’m not sure.  I’m not picturing redisplay running concurrently with Lisp
>> so much as redisplay on display 1 running concurrently with redisplay on
>> display 2, all happening at the same point in the code where we now run
>> redisplay.
> Why is this use case important?  Do we really believe someone might
> look at 2 different X displays at the same time?

No, but occasionally redisplay still needs to talk to multiple displays and 
get responses back, even with the work you and Stefan have done.  Fortunately, 
it’s much rarer now, and the color-handling work I did may help further.  And 
people like me, using multiple displays *and* slow display connections, are 
probably not very common among the user base.  So it’s an area where threads 
might help, but maybe not a terribly important one for the project.

>> I am making some assumptions that redisplay isn’t doing many costly
>> calculations compared to the cost of pushing the bits to the glass.
> That's not really true, although the display engine tries very hard to
> be very fast.  But I've seen cycles taking 10 msec and even 50 msec
> (which already borders on crossing the annoyance threshold).  So there
> are some pretty costly calculations during redisplay, which is why the
> display engine is heavily optimized to avoid them as much as possible.

In that case, maybe it’s still worth considering after all.

>> I suspect TLS is probably the more interesting case.
> What do we have in TLS that we don't have in any network connection?

Encryption, optional compression, possibly key renegotiation, possible receipt 
of incomplete messages that can’t yet be decrypted and thus can’t give us any 
new data bytes.  The thread(s) running user Lisp code needn’t spend any cycles 
on these things.

