emacs-devel


Re: Fwd: HTTP redirects make url-retrieve-synchronously asynchronous


From: Stefan Monnier
Subject: Re: Fwd: HTTP redirects make url-retrieve-synchronously asynchronous
Date: Fri, 20 Jan 2006 16:56:55 -0500
User-agent: Gnus/5.11 (Gnus v5.11) Emacs/22.0.50 (gnu/linux)

>     I've looked at it and understand the problem, but I can't think of a
>     quickfix, and the real fix will take some work (and enough changes that
>     it'll risk introducing bugs that'll need more testing than I can do
>     myself).

> Could you explain the issue to us?  Maybe someone will have an idea.

Here is what I believe to be the relevant information:

1. when an http server returns a redirect, url-http.el currently calls
   url-retrieve to retrieve the target of the redirect (i.e. transparently
   follow the redirection).

2. url-retrieve takes a URL and a callback function and returns a new buffer
   into which the requested "page" will be asynchronously inserted.  When
   the buffer is complete, it calls the callback function (in that buffer).

3. some backends (at least url-http, maybe others) sometimes decide not to
   call the callback, presumably as a way to signal an error (the operation
   can't be completed so the callback can't be called, basically).  This is
   a bug, but I don't know of anyone who's tried to tackle it yet.

4. url-retrieve-synchronously is implemented on top of url-retrieve by
   busy-looping with accept-process-output, waiting for a variable to be set
   to t by the callback function.  Because of point 3 above,
   url-retrieve-synchronously can't assume that the callback will eventually
   be called, so it also stops the busy-waiting if it notices that there's
   no more process running in the buffer that url-retrieve returned.
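
The interplay of points 2 and 4 can be sketched roughly as follows.  This
is a simplified, hypothetical reimplementation for illustration (the names
my-retrieve-synchronously and the 0.1-second wait are mine, not url.el's):

```elisp
(defun my-retrieve-synchronously (url)
  "Fetch URL, busy-waiting until the asynchronous retrieval finishes.
Simplified sketch of the scheme described in point 4."
  (let* ((done nil)
         ;; url-retrieve starts an async retrieval and returns the
         ;; buffer into which the page will be inserted (point 2).
         (buf (url-retrieve url (lambda (&rest _) (setq done t)))))
    (while (and (not done)
                ;; Bail out if no process is left in BUF -- this is the
                ;; heuristic that misfires on redirects, because the
                ;; inner url-retrieve runs its process in a new buffer.
                (get-buffer-process buf))
      (accept-process-output nil 0.1))
    buf))
```

The `get-buffer-process` test is exactly the escape hatch that point 4
describes, and it is what turns the redirect case into an early return.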

So when a redirect is encountered, the inner call to url-retrieve creates
a brand-new buffer, different from the one returned by the original
url-retrieve call; the subsequent async process runs in that new buffer,
and the callback will be called in *that* buffer as well.

So url-retrieve-synchronously gets all confused: in the buffer in which it
expects the output, after the redirect there's no more process running
(it's running in the newly generated buffer), so it stops busy-waiting
and returns, thinking the download has completed whereas it's actually
still going on, just in another buffer.

So there are fundamentally two bugs in the code:

1. sometimes the callback doesn't get called.
2. sometimes the callback gets called in a buffer other than the one
   returned by url-retrieve.

I think the first is due to a flaw in the API: there's no documented
way for a backend to indicate to the callback function that the
operation failed.
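
One way to fix that flaw would be a convention where the backend always
calls the callback, passing a status that describes any failure.  The
following is only a hypothetical sketch of such a convention (the :error
key and plist shape are assumptions, not what url-http.el does here):

```elisp
;; Hypothetical error-reporting convention: the callback is always
;; called, and inspects its STATUS argument for a failure.
(url-retrieve "http://example.invalid/"
              (lambda (status)
                (if (plist-get status :error)
                    (message "Retrieval failed: %S"
                             (plist-get status :error))
                  (message "Got %d bytes" (buffer-size)))))
```

With such a convention, url-retrieve-synchronously could drop the
process-based heuristic entirely and wait only for the callback.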

And the second is an internal bug, which I think stems from a flaw in the
internal API (the one used between the generic part of the URL library and
each backend), whereby the url-<foo> function necessarily returns a new
buffer.

They (and url-retrieve) should either take an optional destination buffer
as a parameter, or they should not return any buffer at all, with the
destination buffer only being made known when the callback is called.
This second option is simpler but would cause problems for code that wants
to start processing data before it's fully downloaded.
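
The first option might look something like this.  The function name
my-url-retrieve and its signature are hypothetical, a sketch of the
proposed API rather than anything in url.el:

```elisp
(defun my-url-retrieve (url callback &optional dest-buffer)
  "Hypothetical variant: retrieve URL into DEST-BUFFER if non-nil.
On a redirect, the backend would pass the original buffer back in,
so the caller's buffer and the callback's buffer always agree."
  (let ((buf (or dest-buffer (generate-new-buffer " *url*"))))
    ;; ... start the asynchronous retrieval into BUF, arranging for
    ;; CALLBACK to be called in BUF when the download completes ...
    buf))
```

A redirecting backend would then call (my-url-retrieve target callback
original-buffer), and the process-based heuristic in
url-retrieve-synchronously would keep watching the right buffer.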


        Stefan



