Wait the specified number of seconds between the retrievals. Use of this option is recommended, as
it lightens the server load by making the requests less frequent. Instead of in seconds, the time
can be specified in minutes using the "m" suffix, in hours using the "h" suffix, or in days using the
"d" suffix.
Specifying a large value for this option is useful if the network or the destination host is down, so
that Wget can wait long enough to reasonably expect the network error to be fixed before the retry.
The waiting interval specified by this option is influenced by --random-wait, described below.
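For example, assuming a placeholder host name, a recursive retrieval that pauses two seconds between
requests, or one minute by using a suffix, might be invoked like this:
wget --wait=2 -r http://<site>/
wget --wait=1m -r http://<site>/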
Some web sites may perform log analysis to identify retrieval programs such as Wget by looking for
statistically significant similarities in the time between requests. The --random-wait option causes the time
between requests to vary between 0.5 and 1.5 * wait seconds, where wait was specified using the
--wait option, in order to mask Wget's presence from such analysis.
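As an illustration (a sketch, with a placeholder host), combining the two options as follows makes Wget
pause a random interval of between 1 and 3 seconds before each retrieval:
wget --wait=2 --random-wait -r http://<site>/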
A 2001 article in a publication devoted to development on a popular consumer platform provided code
to perform this analysis on the fly. Its author suggested blocking at the class C address level to
ensure automated retrieval programs were blocked despite changing DHCP-supplied addresses.
The --random-wait option was inspired by this ill-advised recommendation to block many unrelated
users from a web site due to the actions of one.
The -p (--page-requisites) option causes Wget to download all the files that are necessary to properly display a given HTML
page. This includes such things as inlined images, sounds, and referenced stylesheets.
Ordinarily, when downloading a single HTML page, any requisite documents that may be needed to
display it properly are not downloaded. Using -r together with -l can help, but since Wget does not
ordinarily distinguish between external and inlined documents, one is generally left with "leaf
documents" that are missing their requisites.
For instance, say document 1.html contains an "<IMG>" tag referencing 1.gif and an "<A>" tag pointing
to external document 2.html. Say that 2.html is similar but that its image is 2.gif and it links to
3.html. Say this continues up to some arbitrarily high number.
If one executes the command:
wget -r -l 2 http://<site>/1.html
then 1.html, 1.gif, 2.html, 2.gif, and 3.html will be downloaded. As you can see, 3.html is without
its requisite 3.gif because Wget is simply counting the number of hops (up to 2) away from 1.html in
order to determine where to stop the recursion. However, with this command:
wget -r -l 2 -p http://<site>/1.html
all the above files and 3.html's requisite 3.gif will be downloaded. Similarly,
wget -r -l 1 -p http://<site>/1.html
will cause 1.html, 1.gif, 2.html, and 2.gif to be downloaded. One might think that:
wget -r -l 0 -p http://<site>/1.html
would download just 1.html and 1.gif, but unfortunately this is not the case, because -l 0 is
equivalent to -l inf---that is, infinite recursion. To download a single HTML page (or a handful of
them, all specified on the command-line or in a -i URL input file) and its (or their) requisites,
simply leave off -r and -l:
wget -p http://<site>/1.html
Note that Wget will behave as if -r had been specified, but only that single page and its requisites
will be downloaded. Links from that page to external documents will not be followed. Actually, to
download a single page and all its requisites (even if they exist on separate websites), and make
sure the lot displays properly locally, this author likes to use a few options in addition to -p:
wget -E -H -k -K -p http://<site>/<document>
To finish off this topic, it's worth knowing that Wget's idea of an external document link is any URL
specified in an "<A>" tag, an "<AREA>" tag, or a "<LINK>" tag other than "<LINK REL="stylesheet">".
The -k (--convert-links) option causes Wget, after the download is complete, to convert the links in
the document to make them suitable for local viewing. This affects not only the visible hyperlinks,
but any part of the document that links to external content, such as embedded images, links to style
sheets, and hyperlinks to non-HTML content.
Each link will be changed in one of two ways:
The links to files that have been downloaded by Wget will be changed to refer to the file they
point to as a relative link.
Example: if the downloaded file /foo/doc.html links to /bar/img.gif, also downloaded, then the
link in doc.html will be modified to point to ../bar/img.gif. This kind of transformation works
reliably for arbitrary combinations of directories.
The links to files that have not been downloaded by Wget will be changed to include host name and
absolute path of the location they point to.
Example: if the downloaded file /foo/doc.html links to /bar/img.gif (or to ../bar/img.gif), then
the link in doc.html will be modified to point to http://hostname/bar/img.gif.
Because of this, local browsing works reliably: if a linked file was downloaded, the link will refer
to its local name; if it was not downloaded, the link will refer to its full Internet address rather
than presenting a broken link. The fact that the former links are converted to relative links
ensures that you can move the downloaded hierarchy to another directory.
Note that only at the end of the download can Wget know which links have been downloaded. Because of
that, the work done by -k will be performed at the end of all the downloads.
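As a concrete example (a sketch, with a placeholder host), a recursive retrieval whose links are
rewritten for local viewing once all downloads have finished could be invoked as:
wget -r -k http://<site>/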