Re: [MIT-Scheme-devel] Faster ports.

From: Taylor R Campbell
Subject: Re: [MIT-Scheme-devel] Faster ports.
Date: Thu, 23 Jun 2011 02:41:11 +0000
User-agent: IMAIL/1.21; Edwin/3.116; MIT-Scheme/9.1

   Date: Wed, 22 Jun 2011 17:46:57 -0700
   From: Matt Birkholz <address@hidden>

   You would perhaps like to see read-octet expand into in-line code that
   expects a buffer immediately at hand (via port slots), and that only
   applies a port-type method when the buffer is empty/full?

   [...]  With some care, the in-lined code could work equally well on
   internal OR external buffers.

I don't think `equally well' is true -- I don't expect the overhead of
branching and EXTERNAL-STRING-REFing to be negligible even if
open-coded.
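
For concreteness, here is a sketch of the sort of fast path I mean.
The accessor and method names (PORT-BUFFER, PORT-BUFFER-START, &c.)
are made up for illustration; only the shape matters: an integrable
operation that reads straight out of a buffer stored in the port's
slots, and calls out to a port-type method only when the buffer is
exhausted.

```scheme
;; Illustrative only: the accessor and method names are hypothetical.
(define-integrable (read-u8 port)
  (let ((start (port-buffer-start port)))
    (if (fix:< start (port-buffer-end port))
        (begin
          (set-port-buffer-start! port (fix:+ start 1))
          (vector-8b-ref (port-buffer port) start))
        ;; Slow path: ask the port type to refill the buffer and
        ;; produce the next octet (or an EOF object).
        ((port-method port 'refill-and-read-u8) port))))
```

The fast path is a compare, an increment, and an indexed fetch;
everything else happens once per buffer, not once per octet.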

   How about calling them "bytes"?  It is shorter than "octets", yes?

`Byte' won't make any sense when we get the PDP-10 port working again.
How about `u8' as a compromise?  It extends nicely to `s8', `u16le',
&c.
   If buffer transfers are all inlined via integrable calls to READ-OCTET
   etc., will our ports smoke?  Did you have something else in mind?

Well, there are a lot of problems with ports and I/O in general in MIT
Scheme:

- Octet-by-octet reads and writes with ports are out-of-line and go
through multiple layers, so processing binary data, and sometimes
processing text too, is very slow unless you work with channels and do
your own buffering.

- The port concept isn't a very clear abstraction: sometimes a port is
a source of octets (represented by ISO-8859-1 characters), sometimes
it's a sink of octets, sometimes it's a source of Unicode code points,
sometimes it's a sink of them, sometimes it's actually a terminal with
lots of interesting state associated with it, &c.  The concept of a
blocking mode is broken; operations, not ports, should be marked as
blocking or non-blocking.

- There is no support for memory-mapped I/O or scatter-gather I/O, so
any bulk I/O will involve too much copying.  For example, transmitting
a local file to a socket should be a matter of mapping chunks of the
file into memory and then writing those chunks.  If the operating
system DTRTs, this will essentially copy bits straight from the disk
to the NIC.  But in MIT Scheme, we need to read it from the disk (or
disk buffer cache, anyway) into the heap in the userland for the input
buffer, copy it to somewhere else in the heap for the intermediate
buffer, and copy it still another place in the heap for the output
buffer, and finally copy it into the network stack's buffers.
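
With such primitives, the file-to-socket case might look roughly like
this.  CHANNEL-MAP, CHANNEL-WRITE-MAPPED, and CHANNEL-UNMAP are
invented names for primitives we don't have yet:

```scheme
;; Hypothetical: map successive chunks of the file into external
;; strings and hand them straight to the socket -- no copies through
;; intermediate heap buffers.
(define (transmit-file file-channel socket-channel length)
  (let loop ((offset 0))
    (if (< offset length)
        (let* ((n (min #x100000 (- length offset)))  ;1 MiB chunks
               (chunk (channel-map file-channel offset n)))
          (channel-write-mapped socket-channel chunk 0 n)
          (channel-unmap chunk)
          (loop (+ offset n))))))
```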

- The whole thread I/O event system is a kludge that served its
purpose of making Edwin interactive while you're waiting for an answer
from the REPL, and doesn't work very well beyond that.

At <> is a tarball of some
(extremely) brief notes to myself and draft code I started tinkering
with.  (Apologies to the mailing list archive...  With any luck, this
will turn into something more than some practically useless notes and
draft code, some day.)

I compared ports with (CHAR->INTEGER (READ-CHAR)) to either that code
or something essentially equivalent to it, doing octet-by-octet binary
data processing, and I don't remember exactly what the improvement in
performance was, but it was enormous -- probably a decimal order of
magnitude, at least.

Here's a very rough sketch of the plan I have had in mind for some
time, but haven't gotten enough round tuits to sit down and make
happen:

1. Expose something similar to the channel abstraction, and in
particular, on Unix, expose it as file descriptors, enough that you
can use the (an) FFI to do system calls on them that MIT Scheme
doesn't already know about.  Add primitives for memory-mapped I/O on
channels with external strings, separate blocking and non-blocking I/O
operations, perhaps asynchronous I/O operations, &c.

2. Add sources and sinks, like in the mit-io.tgz archive above.  These
support fast buffered (and unbuffered, as a special case) blocking
binary I/O.  Adapt a bunch of the Unicode stuff, and the genio
machinery, to work on top of these.
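
To give the flavour: a source, in this scheme, needn't be much more
than a buffer, a pair of cursors, and a refill procedure.  (Structure
and accessor names below are illustrative, not necessarily the ones in
the draft.)

```scheme
;; Illustrative sketch of a binary source.
(define-structure source
  buffer        ;octet buffer (a string used as a byte vector)
  start         ;index of the next unread octet
  end           ;index past the last valid octet
  refill!)      ;(refill! source) -> #t if more data, #f at EOF

(define (source-read-u8 source)
  (if (fix:< (source-start source) (source-end source))
      (let ((b (vector-8b-ref (source-buffer source)
                              (source-start source))))
        (set-source-start! source (fix:+ (source-start source) 1))
        b)
      (and ((source-refill! source) source)
           (source-read-u8 source))))
```

An unbuffered source is then just the special case where the buffer
holds a single octet.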

3. Maybe integrate Scheme-CML into the system, as a stable API for
communication between threads and between Scheme and the OS, whose
implementation may need to be ripped out and replaced.  There remain a
few bits of it that I'm unsure about, though.  For context: it's at
<>.

4. Totally revamp the whole thread event system, like I've been
threatening to Chris for years, for various reasons.  This will
require some changes to the way subprocesses are handled internally,
too, which is very, very hairy, and will require me to sit down for at
least a week to puzzle over.

5. Change Edwin to have a more understandable event loop, and to do
memory-mapped I/O and avoid line ending translation or anything,
instead preferring to interpret buffer contents (which are octet
sequences) on the fly variously as ISO-8859-1/Unix, UTF-8/DOS, or what
have you.  We'll need to do this in order to support Unicode sensibly.

But enough rambling.  I've been trying to adhere to a discipline of
less talk, more code, and I haven't worked up the courage to dive into
all these problems, so I've been keeping pretty quiet about them...
