wget2 | crawl for urls without sending any http requests? (#554)
From: field required (@windowswithoutborders)
Subject: wget2 | crawl for urls without sending any http requests? (#554)
Date: Tue, 01 Jun 2021 03:12:58 +0000
field required created an issue: https://gitlab.com/gnuwget/wget2/-/issues/554
wget2's spider mode is amazing. On my connection it can fetch all URLs on a
given page almost instantly, which is exactly what I'm looking for: a speedy
URL fetcher. I was wondering whether it is possible to stop all operation
beyond the "adding url" stage. That is to say, I do not need wget2 to send any
HTTP requests, check whether each remote file exists, or wait for HTTP
responses; I would like to use wget2 solely as a URL fetcher. I can accomplish
this to some extent with the GNU coreutils command 'timeout', but that can be
unreliable if there are any hiccups on the server or on my end. Thanks for any
suggestions.
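As an aside, if the goal is only to list the links found in an already-fetched
page without probing any of them, a small standalone extractor can do that with
no network traffic at all. A minimal sketch using only the Python standard
library (the inline HTML snippet and base URL are illustrative assumptions, not
anything from wget2's output):

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    """Collect href/src URLs from a page without issuing any HTTP requests."""

    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.urls = []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name in ("href", "src") and value:
                # Resolve relative links against the page's base URL.
                self.urls.append(urljoin(self.base_url, value))

# Example: extract links from locally held HTML (no requests are sent).
html = '<a href="/docs">Docs</a> <img src="logo.png">'
extractor = LinkExtractor("https://example.com/")
extractor.feed(html)
print(extractor.urls)
# → ['https://example.com/docs', 'https://example.com/logo.png']
```

This only covers the extraction half of the problem, of course; wget2 would
still be needed (or some other client) to retrieve the initial page.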