bug#5372: Calling url-retrieve-synchronously in a timer
From: Lennart Borgman
Subject: bug#5372: Calling url-retrieve-synchronously in a timer
Date: Wed, 13 Jan 2010 17:21:20 +0100
On Wed, Jan 13, 2010 at 5:02 PM, Chong Yidong <cyd@stupidchicken.com> wrote:
> Stefan Monnier <monnier@iro.umontreal.ca> writes:
>
>>> It looks like url-retrieve-synchronously is not finished when run in a
>>> timer. Is that expected behaviour?
>>> It looks like it hangs in accept-process-output.
>>
>> It's not expected (for me anyway), but I'm not completely
>> surprised either. Furthermore, it doesn't look like a good thing to
>> do anyway: timer code should not run for an indefinite amount of time,
>> so you should probably use url-retrieve instead.
>
> Could this be a manifestation of Bug#5359, "process-send-region hangs
> ntEmacs if region >64K"?
>
> http://debbugs.gnu.org/cgi/bugreport.cgi?bug=5359
The buffer retrieved in my case has point-max 53784.
I tested with a smaller web page, and that actually worked. But to my
surprise, the original web page also works today.
I do not know what is different. However, I have seen several times
that Emacs on w32 can get into bad states when some kind of resource
is exhausted; the one I can observe easily is GDI handles.
Maybe some of the w32 API calls do not check their return values as they should?
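In case it helps anyone reading this thread later, here is a minimal sketch of the non-blocking pattern Stefan suggests, using `url-retrieve' with a callback instead of `url-retrieve-synchronously'. The URL and helper name are hypothetical, not taken from the bug report; the point is only that the timer function returns immediately rather than blocking in accept-process-output:

```emacs-lisp
(require 'url)

;; Hypothetical helper: fetch a page asynchronously.  `url-retrieve'
;; returns immediately; the lambda runs later, in the buffer holding
;; the response, once the transfer finishes.
(defun my-fetch-page ()
  (url-retrieve "http://example.com/"
                (lambda (status)
                  (if (plist-get status :error)
                      (message "Fetch failed: %S" (plist-get status :error))
                    (message "Fetched %d bytes" (buffer-size))))))

;; Safe to run from a timer, since nothing here blocks:
(run-with-timer 5 nil #'my-fetch-page)
```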