non-blocking i/o in guile

From: Andy Wingo
Subject: non-blocking i/o in guile
Date: Tue, 17 May 2016 22:28:06 +0200
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/24.5 (gnu/linux)

Hi :)

This is a mail for people interested in the implementation of
non-blocking buffered I/O.  If you're mostly interested from the user
side, there's probably not much of interest here.  Here goes nothing :)

I have been reworking Guile's I/O routines.  In Guile master, when you
implement a port, you provide read and/or write functions.  These
functions have a C prototype like this:

   size_t (*read) (SCM port, SCM dst, size_t start, size_t count)
   size_t (*write) (SCM port, SCM src, size_t start, size_t count)

"dst" and "src" are bytevectors.  The read function fills the
bytevector, returning the number of bytes filled, and the write function
empties the bytevector, returning the number of bytes written.

If there is an error when reading or writing, the read/write functions
should throw an exception.  Otherwise the semantics are those of a
blocking operation: each read or write should transfer a nonzero number
of bytes.  A read of 0 bytes signals EOF.  A write of 0 bytes is
probably an error, though who knows.

Besides the bit about exceptions, this is basically the semantics of
read(2) and write(2).

Internally to Guile, callers of read functions are generally happy with
fewer than COUNT bytes.  Callers of write functions will generally loop
until COUNT bytes are written.
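That "loop until COUNT bytes are written" behavior looks roughly like the following, here written directly over write(2) rather than a port's write function. `full_write` is an invented name for illustration, not something in Guile.

```c
#include <errno.h>
#include <stddef.h>
#include <unistd.h>

/* Keep calling write(2) until COUNT bytes are out, retrying on EINTR
   and continuing after partial writes.  Returns 0 on success, -1 on a
   real error. */
static int
full_write (int fd, const void *buf, size_t count)
{
  const char *p = buf;
  size_t remaining = count;
  while (remaining > 0)
    {
      ssize_t n = write (fd, p, remaining);
      if (n < 0)
        {
          if (errno == EINTR)
            continue;        /* interrupted: just retry */
          return -1;         /* real error */
        }
      p += n;                /* partial write: advance and loop */
      remaining -= n;
    }
  return 0;
}
```

By contrast, a caller of the read function would typically make one call and accept whatever nonzero count comes back.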

Now, how to make this fit with non-blocking I/O?  Initially I thought
that it would be sufficient to add a kind of sentinel return value that
would indicate the equivalent of EWOULDBLOCK.  Then we'd have to have
some interface to get a file descriptor or other waitable (more on this
later) from the port.  That waitable could be added to a poll set, or we
could layer blocking I/O on top of nonblocking I/O by calling poll(2) in
a loop with the read or write.

This would be pretty gnarly but I think it could have worked -- except
for two things: blocking local file access, and Windows.  Let me explain.

It turns out that even if you set a file descriptor to nonblocking, and
you use only nonblocking operations on it, if that file descriptor is
backed by a local file system, operations on it will block.  The FD will
always poll as readable/writable.  Linux has this problem, and async I/O
(AIO) doesn't help; glibc implements AIO in user-space with a new thread
per AIO operation.  Terrible.  FreeBSD does the same but with kernel
threads, AFAIU.

The upshot is that to reliably do non-blocking I/O over local files you
need to use thread pools.
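The thread-based workaround can be sketched like so. This is deliberately simplified: one thread per request rather than a real pool, which is (as noted above) essentially what glibc's user-space AIO does. All names here (`read_req`, `read_async`, `read_worker`) are invented for illustration.

```c
#include <pthread.h>
#include <stddef.h>
#include <unistd.h>

/* A pending read request handed to a worker thread. */
typedef struct {
  int fd;
  void *buf;
  size_t count;
  ssize_t result;
} read_req;

static void *
read_worker (void *arg)
{
  read_req *req = arg;
  /* This read may block on disk -- but only this worker thread,
     not the caller. */
  req->result = read (req->fd, req->buf, req->count);
  return NULL;
}

/* Kick off the read on another thread; the caller joins the returned
   thread (or waits on some other completion signal) when it wants the
   result.  A real implementation would reuse a pool of workers. */
static pthread_t
read_async (read_req *req)
{
  pthread_t tid;
  pthread_create (&tid, NULL, read_worker, req);
  return tid;
}
```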

I was willing to punt on non-blocking local I/O (and I still am), but
then Windows.  Windows doesn't have "poll".  Instead what they have is
async "I/O completion ports".  It's interesting, because it's an
edge-triggered system rather than a level-triggered system: async
read/write operations trigger async completions, instead of the POSIX
case where you poll on an FD to see when you could operate on it
without blocking.

I came across a writeup on this that I find to be the only piece of
sanity on the entire Internet when it comes to async I/O between POSIX
and Windows.  Go ahead and read it -- it's very clear.

What the IOCP thing means is that the original design of polling on fd's
wasn't going to work.  We need to instead have a way for a caller to say
"I support non-blocking I/O and so if you would block on your I/O,
please don't, return me a promise or something instead".  Happily it
would be possible for this interface to hide a thread-pool for local
files, if that were a thing.

My proposal is to change the prototype of the read/write operations to be:

   size_t (*read) (SCM port, SCM dst, size_t start, size_t count,
                   waitable_t *waitable);
   size_t (*write) (SCM port, SCM src, size_t start, size_t count,
                    waitable_t *waitable);

We can change API/ABI as the port implementation API/ABI is changing in
master anyway relative to 2.0.  See NEWS.  The semantics would be that
if the user provides no WAITABLE pointer, then the read or write is
blocking.  If the user provides a WAITABLE pointer, the read or write
*may* be async.  waitable_t or scm_t_waitable or whatever we call it is
a platform-specific define that may either be "int" or "HANDLE".  If the
return value is 0 then the caller should check the WAITABLE pointer to
see if there is an async completion (FD or handle) to wait on.

Guile's C code will never provide a WAITABLE value, as the whole point
of doing non-blocking I/O is to suspend the Scheme coroutine while the
I/O is happening, and you can't do that from C.  But from Scheme we'll
somehow make it make sense... not sure how.  Maybe an extra box
argument.  Dunno.

I would really appreciate reviews from people that have done
high-performance non-blocking I/O with systems like Java NIO or libuv.
Your thoughts are very welcome.


