From: Richard Stallman
Subject: Re: url-retrieve-synchronously randomly fails on https URLs (patch included)
Date: Mon, 29 Oct 2007 05:22:26 -0400

    Unless I got it all wrong, this is much more complicated than the
    original three-liner; I wonder if it is worth going this way instead
    of a very simple patch that could still be buggy on rare occasions.
It is a bad thing for software to be unreliable. We should write it
so that it works every time, not just 99.9% of the time.
It is worth paying this small price in complexity to make it work right.
Besides, the code can be simpler.
    + ;; advance point to after all informational messages that
    + ;; `openssl s_client' and `gnutls' print
    + (lexical-let ((attempts tls-end-of-info-match-attempts)
    +               (start-of-data nil))
Why not ordinary let?
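As far as I can tell, nothing here needs to be captured in a closure;
the bindings are used only inside the loop.  So plain `let' would do,
something like this:

    ;; Ordinary dynamic bindings suffice; no closure is made here.
    (let ((attempts tls-end-of-info-match-attempts)
          (start-of-data nil))
      ...)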
    + (not (= 0 (if (> attempts 0) (decf attempts) -1)))
I think there should not be a limit -- it should just keep looping
until it finds what it is looking for.
    + (unless (accept-process-output process 1)
    +   (sit-for 1)))
I think there is no need for sit-for. accept-process-output will wait.
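In other words, the body of the loop can reduce to something like

    ;; `accept-process-output' blocks for up to one second by itself,
    ;; so a separate `sit-for' adds nothing.
    (accept-process-output process 1)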
    + (warn
    +  "`open-tls-stream': Could not match `tls-end-of-info' regexp in %d attempts."
    +  tls-end-of-info-match-attempts)))
If you get rid of the counter, you won't need this either.
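
Putting these suggestions together, something along these lines
(untested, keeping the `process' and `tls-end-of-info' names from the
patch) ought to be enough:

    ;; Advance point past the informational messages that
    ;; `openssl s_client' and `gnutls' print before the real data.
    (let ((start-of-data nil))
      ;; Keep reading output until the end-of-info regexp matches,
      ;; or the connection dies.
      (while (and (memq (process-status process) '(open run))
                  (not (setq start-of-data
                             (save-excursion
                               (goto-char (point-min))
                               (re-search-forward tls-end-of-info nil t)))))
        ;; This blocks until output arrives, so no `sit-for' is needed.
        (accept-process-output process 1))
      (when start-of-data
        (goto-char start-of-data)))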