Verifying links in a webpage or RSS feed.


http://www.linuxquestions.org – This is a tip, not a question. It's incredibly hard to find how to do this using Google, since apparently Google's search engine can't tell the difference. :banghead: Anyway, this bash script takes a URL as an argument, downloads it, extracts all of the hyperlinks from it, and then uses wget in spider mode to check whether each hyperlink is still good. Very useful for checking "links pages" and RSS feeds. The code is heavily commented.

Code:
#!/bin/bash
# get the basename of the URL
RSSFILE=${1##*/}
# make sure it doesn't exist (by deleting it)
rm -f "$RSSFILE"
# download the URL
wget "$1"
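The post is cut off after the download step, so here is a sketch of the remaining steps it describes (extracting the hyperlinks, then probing each one with wget in spider mode). The function names, the grep pattern, and the output format are my own assumptions, not the original author's code:

```shell
#!/bin/bash
# Sketch reconstructing the described workflow; names and regex are assumptions.

# Extract absolute http(s) links from a downloaded file, one per line, deduplicated.
extract_links() {
    grep -Eo "https?://[^\"'<> ]+" "$1" | sort -u
}

# Probe one link without downloading it; wget --spider exits non-zero on a dead link.
check_link() {
    if wget -q --spider "$1"; then
        echo "OK    $1"
    else
        echo "DEAD  $1"
    fi
}

main() {
    local url="$1"
    local file="${url##*/}"   # basename of the URL, as in the original script
    rm -f "$file"             # make sure a stale copy doesn't exist
    wget -q "$url"            # download the page or feed
    extract_links "$file" | while read -r link; do
        check_link "$link"
    done
}

# Run only when invoked with a URL argument.
if [ "$#" -ge 1 ]; then
    main "$1"
fi
```

Note that wget's spider mode only issues a HEAD/GET without saving the body, so this checks reachability cheaply; a few sites answer spider requests differently from real browsers, so an occasional false "DEAD" is possible.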