[Bug-wget] request for help with wget (crawling search results of a website)
Sun, 3 Nov 2013 09:13:59 +0100
Dear mailing list members,
According to the website http://www.gnu.org/software/wget/ it is OK to
send help requests to this mailing list. I have the following problem:
I am trying to crawl the search results of a news website using *wget*.
The name of the website is *www.voanews.com*.
After I type in my *search keyword* and click search on the website, it
proceeds to the results. Then I can specify a *"to" and a "from" date* and
hit search again.
After this the URL becomes:
and the actual content of the results is what I want to download.
To achieve this I created the following wget-command:
wget --reject=js,txt,gif,jpeg,jpg \
--recursive --level=2 \
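One thing worth checking: *--reject* only controls which downloaded files are kept; it does not stop wget from *following* other links. Newer wget versions (1.14 and later) offer *--accept-regex* to restrict which URLs are followed at all. Below is an offline sketch of how a candidate pattern could be checked with `grep -E` before handing it to wget; the sample URLs and the pattern itself are made up for illustration, not taken from the actual site structure:

```shell
# Candidate pattern for --accept-regex: keep search pages and article pages.
# (Hypothetical: adjust to the real URL layout of voanews.com.)
pattern='voanews\.com/(search|content)/'

# Offline check against invented sample URLs:
printf '%s\n' \
  'http://www.voanews.com/search/?keyword=example' \
  'http://www.voanews.com/content/some-article/123.html' \
  'http://www.voanews.com/navigation/home.html' \
| grep -E "$pattern"
```

If the pattern keeps the right URLs, it would then be passed to the crawl as `--accept-regex='voanews\.com/(search|content)/'` alongside `--recursive --level=2`.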
Unfortunately, the crawler doesn't download the search results. It only
gets into the upper link bar, which contains the "Home,USA,Africa,Asia,..."
links and saves the articles they link to.
*It seems like the crawler doesn't check the search result links at all*.
*What am I doing wrong, and how can I modify the wget command to download
only the search-result links (and, of course, the pages they link to)?*
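An alternative to recursive crawling, assuming the results page is plain HTML: save just the search page, extract the article links from it, and feed the list to `wget -i`. The HTML snippet below is invented purely to illustrate the extraction step; real markup (and the `/content/` path) will differ:

```shell
# Step 1 (sketch, not run here): wget -O results.html '<search URL>'
# For this illustration, fake a saved results page:
cat > results.html <<'EOF'
<a href="/content/example-article/100.html">Example article</a>
<a href="/navigation/home.html">Home</a>
EOF

# Step 2: pull out hrefs that look like article links and make them absolute.
grep -o 'href="/content/[^"]*"' results.html \
  | sed -e 's/^href="/http:\/\/www.voanews.com/' -e 's/"$//' \
  > article-urls.txt

cat article-urls.txt
# Step 3 (sketch): wget -i article-urls.txt
```

This sidesteps the recursion problem entirely: wget downloads exactly the URLs in the list, so navigation-bar links never enter the picture.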
Thank you for any help...
Altug Tekin