
Wget and single page

linux-howto

http://www.unix.com – Good evening to all! I'm trying to become familiar with wget. I would like to download a page from Wikipedia with all its images and CSS, but without following the links present on the page. The result should be named index.html, and I would also like to save it under /mnt/us inside a new folder. This is what I'm using now, but it downloads every page linked from the one I want. Code:

PAGES="/mnt/us/"
webpage="http://it.wikipedia.org/wiki/Robot"
wget -e robots=off --quiet --mirror --page-requisites --no-parent --convert-links --adjust-extension -P "$PAGES"

(HowTos)
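One likely cause of the behaviour described above: --mirror implies recursive retrieval (it is shorthand for -r -N -l inf --no-remove-listing), so wget follows every link on the page. A hedged sketch of a fix, not a verified command: drop --mirror and rely on --page-requisites, which fetches only the one page plus the images and CSS needed to render it. The folder name "robot" and the --no-host-directories flag are assumptions added here, not taken from the post.

```shell
# Sketch: single page plus requisites, no recursion.
PAGES="/mnt/us/robot"    # assumed name for the new folder
webpage="http://it.wikipedia.org/wiki/Robot"

wget -e robots=off --quiet \
     --page-requisites \
     --convert-links \
     --adjust-extension \
     --no-host-directories \
     -P "$PAGES" \
     "$webpage"
```

Note that wget names the saved file after the URL (Robot.html here), not index.html; renaming it afterwards is probably the simplest route, since -O (--output-document) is documented to interact badly with --convert-links.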