-#, --progress-bar
Make curl display progress information as a progress bar instead of the default statistics.
|
-0, --http1.0
(HTTP) Forces curl to issue its requests using HTTP 1.0 instead of its internally preferred version,
HTTP 1.1.
|
-1, --tlsv1
(SSL) Forces curl to use TLS version 1 when negotiating with a remote TLS server.
|
-2, --sslv2
(SSL) Forces curl to use SSL version 2 when negotiating with a remote SSL server.
|
-3, --sslv3
(SSL) Forces curl to use SSL version 3 when negotiating with a remote SSL server.
|
-4, --ipv4
If libcurl is capable of resolving an address to multiple IP versions (which it is if it is
IPv6-capable), this option tells libcurl to resolve names to IPv4 addresses only.
|
-6, --ipv6
If libcurl is capable of resolving an address to multiple IP versions (which it is if it is
IPv6-capable), this option tells libcurl to resolve names to IPv6 addresses only.
|
-a, --append
(FTP/SFTP) When used in an upload, this will tell curl to append to the target file instead of
overwriting it. If the file doesn't exist, it will be created. Note that this flag is ignored by
some SSH servers (including OpenSSH).
|
-A, --user-agent <agent string>
(HTTP) Specify the User-Agent string to send to the HTTP server. Some badly done CGIs fail if this
field isn't set to "Mozilla/4.0". To encode blanks in the string, surround the string with single
quote marks. This can also be set with the -H, --header option of course.
If this option is set more than once, the last one will be the one that's used.
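As a sketch, with a hypothetical URL and agent string (blanks in the string require quoting):

```shell
# Hypothetical URL and agent string; blanks in the string require quoting.
agent='Mozilla/4.0 (compatible; MSIE 5.01; Windows NT 5.0)'
# curl -A "$agent" http://example.com/
# The same header can be set via -H instead:
# curl -H "User-Agent: $agent" http://example.com/
echo "$agent"
```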
|
--anyauth
(HTTP) Tells curl to figure out authentication method by itself, and use the most secure one the
remote site claims to support. This is done by first doing a request and checking the response-
headers, thus possibly inducing an extra network round-trip. This is used instead of setting a
specific authentication method, which you can do with --basic, --digest, --ntlm, and --negotiate.
Note that using --anyauth is not recommended if you do uploads from stdin, since it may require
data to be sent twice and then the client must be able to rewind. If the need should arise when
uploading from stdin, the upload operation will fail.
|
-b, --cookie <name=data>
(HTTP) Pass the data to the HTTP server as a cookie. It is supposedly the data previously received
from the server in a "Set-Cookie:" line. The data should be in the format "NAME1=VALUE1;
NAME2=VALUE2".
If no '=' symbol is used in the line, it is treated as a filename to use to read previously stored
cookie lines from, which should be used in this session if they match. Using this method also
activates the "cookie parser" which will make curl record incoming cookies too, which may be handy
if you're using this in combination with the -L, --location option. The file format of the file to
read cookies from should be plain HTTP headers or the Netscape/Mozilla cookie file format.
NOTE that the file specified with -b, --cookie is only used as input. No cookies will be stored in
the file. To store cookies, use the -c, --cookie-jar option or you could even save the HTTP
headers to a file using -D, --dump-header!
If this option is set more than once, the last one will be the one that's used.
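A round-trip sketch against a hypothetical host, combining -b with the -c, --cookie-jar option described below; the jar line shown is the Netscape format curl writes:

```shell
# Cookie round trip (hypothetical host): -c writes a Netscape-format jar, -b reads it back.
# curl -c cookies.txt http://example.com/login
# curl -b cookies.txt http://example.com/private
# A jar line carries 7 tab-separated fields (domain, flag, path, secure, expiry, name, value):
printf 'example.com\tFALSE\t/\tFALSE\t0\tNAME1\tVALUE1\n' > cookies.txt
awk -F'\t' '{print NF}' cookies.txt
```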
|
-B, --use-ascii
Enable ASCII transfer when using FTP or LDAP. For FTP, this can also be enforced by using a URL
that ends with ";type=A". This option causes data sent to stdout to be in text mode for win32
systems.
|
--basic
(HTTP) Tells curl to use HTTP Basic authentication. This is the default and this option is usually
pointless, unless you use it to override a previously set option that sets a different
authentication method (such as --ntlm, --digest, or --negotiate).
|
-c, --cookie-jar <file name>
Specify to which file you want curl to write all cookies after a completed operation. Curl writes
all cookies previously read from a specified file as well as all cookies received from remote
server(s). If no cookies are known, no file will be written. The file will be written using the
Netscape cookie file format. If you set the file name to a single dash, "-", the cookies will be
written to stdout.
This command line option will activate the cookie engine that makes curl record and use cookies.
Another way to activate it is to use the -b, --cookie option.
If the cookie jar can't be created or written to, the whole curl operation won't fail or even
report an error clearly. Using -v will get a warning displayed, but that is the only visible
feedback you get about this possibly lethal situation.
If this option is used several times, the last specified file name will be used.
|
-C, --continue-at <offset>
Continue/Resume a previous file transfer at the given offset. The given offset is the exact number
of bytes that will be skipped, counting from the beginning of the source file before it is
transferred to the destination. If used with uploads, the FTP server command SIZE will not be
used by curl.
Use "-C -" to tell curl to automatically find out where/how to resume the transfer. It then uses
the given output/input files to figure that out.
If this option is used several times, the last one will be used.
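A local sketch of what "-C -" figures out: the resume offset is the size of the partial output file (the URL below is hypothetical):

```shell
# Local sketch of what '-C -' figures out: the resume offset is the size of the partial file.
printf 'partial data' > file.part          # pretend an earlier transfer left 12 bytes behind
offset=$(wc -c < file.part | tr -d ' ')
# curl would then effectively run (hypothetical URL):
# curl -C "$offset" -o file.part http://example.com/file
echo "$offset"
```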
|
--ciphers <list of ciphers>
(SSL) Specifies which ciphers to use in the connection. The list of ciphers must specify valid
ciphers. Read up on SSL cipher list details on this URL:
http://www.openssl.org/docs/apps/ciphers.html
NSS ciphers are done differently than OpenSSL and GnuTLS. The full list of NSS ciphers is in the
NSSCipherSuite entry at this URL: http://directory.fedora.redhat.com/docs/mod_nss.html#Directives
If this option is used several times, the last one will override the others.
|
--compressed
(HTTP) Request a compressed response using one of the algorithms libcurl supports, and save the
uncompressed document. If this option is used and the server sends an unsupported encoding, curl
will report an error.
|
--connect-timeout <seconds>
Maximum time in seconds that you allow the connection to the server to take. This only limits the
connection phase, once curl has connected this option is of no more use. See also the -m, --max-
time option.
If this option is used several times, the last one will be used.
|
--create-dirs
When used in conjunction with the -o option, curl will create the necessary local directory
hierarchy as needed. This option creates the dirs mentioned with the -o option, nothing else. If
the -o file name uses no dir or if the dirs it mentions already exist, no dir will be created.
To create remote directories when using FTP or SFTP, try --ftp-create-dirs.
|
--crlf (FTP) Convert LF to CRLF in upload. Useful for MVS (OS/390).
|
--crlfile <file>
(HTTPS/FTPS) Provide a file using PEM format with a Certificate Revocation List that may specify
peer certificates that are to be considered revoked.
If this option is used several times, the last one will be used.
(Added in 7.19.7)
|
-d, --data <data>
(HTTP) Sends the specified data in a POST request to the HTTP server, in the same way that a
browser does when a user has filled in an HTML form and presses the submit button. This will cause
curl to pass the data to the server using the content-type application/x-www-form-urlencoded.
Compare to -F, --form.
-d, --data is the same as --data-ascii. To post purely binary data, you should instead use the
--data-binary option. To URL-encode the value of a form field you may use --data-urlencode.
If any of these options is used more than once on the same command line, the data pieces specified
will be merged together with a separating &-symbol. Thus, using '-d name=daniel -d skill=lousy'
would generate a post chunk that looks like 'name=daniel&skill=lousy'.
If you start the data with the letter @, the rest should be a file name to read the data from, or
- if you want curl to read the data from stdin. The contents of the file must already be URL-
encoded. Multiple files can also be specified. Posting data from a file named 'foobar' would thus
be done with --data @foobar.
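A sketch against a hypothetical endpoint; the first two commands send the same body, and the last two show the @file and @- (stdin) forms:

```shell
# Merging of repeated -d options (hypothetical endpoint); both commands send the same body.
# curl -d name=daniel -d skill=lousy http://example.com/form
# curl -d 'name=daniel&skill=lousy'  http://example.com/form
body='name=daniel&skill=lousy'
printf '%s' "$body" > postdata
# Reading the same body from a file or from stdin:
# curl -d @postdata http://example.com/form
# curl -d @- http://example.com/form < postdata
echo "$body"
```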
|
-D, --dump-header <file>
Write the protocol headers to the specified file.
This option is handy to use when you want to store the headers that a HTTP site sends to you.
Cookies from the headers could then be read in a second curl invocation by using the -b, --cookie
option! The -c, --cookie-jar option is however a better way to store cookies.
When used in FTP, the FTP server response lines are considered being "headers" and thus are saved
there.
If this option is used several times, the last one will be used.
|
--data-ascii <data>
See -d, --data.
|
--data-binary <data>
(HTTP) This posts data exactly as specified with no extra processing whatsoever.
If you start the data with the letter @, the rest should be a filename. Data is posted in a
similar manner as --data-ascii does, except that newlines are preserved and conversions are never
done.
If this option is used several times, the ones following the first will append data as described
in -d, --data.
|
--data-urlencode <data>
(HTTP) This posts data, similar to the other --data options with the exception that this performs
URL-encoding. (Added in 7.18.0)
To be CGI-compliant, the <data> part should begin with a name followed by a separator and a
content specification. The <data> part can be passed to curl using one of the following syntaxes:
content
This will make curl URL-encode the content and pass that on. Just be careful so that the
content doesn't contain any = or @ symbols, as that will then make the syntax match one of
the other cases below!
name=content
This will make curl URL-encode the content part and pass that on. Note that the name part
is expected to be URL-encoded already.
@filename
This will make curl load data from the given file (including any newlines), URL-encode that
data and pass it on in the POST.
name@filename
This will make curl load data from the given file (including any newlines), URL-encode that
data and pass it on in the POST. The name part gets an equal sign appended, resulting in
name=urlencoded-file-content. Note that the name is expected to be URL-encoded already.
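The four syntaxes side by side, with a hypothetical endpoint and file name; in the name=content form only the content part is encoded, so a space becomes %20:

```shell
# The four --data-urlencode syntaxes (hypothetical endpoint and file name):
# curl --data-urlencode 'Hello World'     http://example.com/   # content
# curl --data-urlencode 'msg=Hello World' http://example.com/   # name=content
# curl --data-urlencode '@body.txt'       http://example.com/   # @filename
# curl --data-urlencode 'msg@body.txt'    http://example.com/   # name@filename
# With name=content, only the content part is encoded; a space becomes %20:
sent='msg=Hello%20World'
echo "$sent"
```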
|
--delegation LEVEL
Set LEVEL to tell the server what it is allowed to delegate when it comes to user credentials.
Used with GSS/kerberos.
none Don't allow any delegation.
policy Delegates if and only if the OK-AS-DELEGATE flag is set in the Kerberos service ticket,
which is a matter of realm policy.
always Unconditionally allow the server to delegate.
|
--digest
(HTTP) Enables HTTP Digest authentication. This is an authentication scheme that prevents the password
from being sent over the wire in clear text. Use this in combination with the normal -u, --user
option to set user name and password. See also --ntlm, --negotiate and --anyauth for related
options.
If this option is used several times, the following occurrences make no difference.
|
--disable-eprt
(FTP) Tell curl to disable the use of the EPRT and LPRT commands when doing active FTP transfers.
Curl will normally always first attempt to use EPRT, then LPRT before using PORT, but with this
option, it will use PORT right away. EPRT and LPRT are extensions to the original FTP protocol,
and may not work on all servers, but they enable more functionality in a better way than the
traditional PORT command.
--eprt can be used to explicitly enable EPRT again and --no-eprt is an alias for --disable-eprt.
Disabling EPRT only changes the active behavior. If you want to switch to passive mode you need to
not use -P, --ftp-port or force it with --ftp-pasv.
|
--disable-epsv
(FTP) Tell curl to disable the use of the EPSV command when doing passive FTP transfers. Curl will
normally always first attempt to use EPSV before PASV, but with this option, it will not try using
EPSV.
--epsv can be used to explicitly enable EPSV again and --no-epsv is an alias for --disable-epsv.
Disabling EPSV only changes the passive behavior. If you want to switch to active mode you need to
use -P, --ftp-port.
|
-e, --referer <URL>
(HTTP) Sends the "Referer Page" information to the HTTP server. This can also be set with the -H,
--header flag of course. When used with -L, --location you can append ";auto" to the --referer
URL to make curl automatically set the previous URL when it follows a Location: header. The
";auto" string can be used alone, even if you don't set an initial --referer.
If this option is used several times, the last one will be used.
|
-E, --cert <certificate[:password]>
(SSL) Tells curl to use the specified client certificate file when getting a file with HTTPS, FTPS
or another SSL-based protocol. The certificate must be in PEM format. If the optional password
isn't specified, it will be queried for on the terminal. Note that this option assumes a
"certificate" file that is the private key and the private certificate concatenated! See --cert
and --key to specify them independently.
If curl is built against the NSS SSL library then this option can tell curl the nickname of the
certificate to use within the NSS database defined by the environment variable SSL_DIR (or by
default /etc/pki/nssdb). If the NSS PEM PKCS#11 module (libnsspem.so) is available then PEM files
may be loaded. If you want to use a file from the current directory, please precede it with "./"
prefix, in order to avoid confusion with a nickname.
If this option is used several times, the last one will be used.
|
--engine <name>
Select the OpenSSL crypto engine to use for cipher operations. Use --engine list to print a list
of build-time supported engines. Note that not all of the listed engines (possibly none) may be
available at run-time.
|
--environment
(RISC OS ONLY) Sets a range of environment variables, using the names the -w option supports, to
allow easier extraction of useful information after having run curl.
|
--egd-file <file>
(SSL) Specify the path name to the Entropy Gathering Daemon socket. The socket is used to seed the
random engine for SSL connections. See also the --random-file option.
|
--cert-type <type>
(SSL) Tells curl what certificate type the provided certificate is in. PEM, DER and ENG are
recognized types. If not specified, PEM is assumed.
If this option is used several times, the last one will be used.
|
--cacert <CA certificate>
(SSL) Tells curl to use the specified certificate file to verify the peer. The file may contain
multiple CA certificates. The certificate(s) must be in PEM format. Normally curl is built to use
a default file for this, so this option is typically used to alter that default file.
curl recognizes the environment variable named 'CURL_CA_BUNDLE' if it is set, and uses the given
path as a path to a CA cert bundle. This option overrides that variable.
The Windows version of curl will automatically look for a CA certs file named 'curl-ca-bundle.crt',
either in the same directory as curl.exe, or in the Current Working Directory, or in
any folder along your PATH.
If curl is built against the NSS SSL library then this option tells curl the nickname of the CA
certificate to use within the NSS database defined by the environment variable SSL_DIR (or by
default /etc/pki/nssdb). If the NSS PEM PKCS#11 module (libnsspem.so) is available then PEM files
may be loaded.
If this option is used several times, the last one will be used.
|
--capath <CA certificate directory>
(SSL) Tells curl to use the specified certificate directory to verify the peer. The certificates
must be in PEM format, and if curl is built against OpenSSL, the directory must have been
processed using the c_rehash utility supplied with OpenSSL. Using --capath can allow OpenSSL-
powered curl to make SSL-connections much more efficiently than using --cacert if the --cacert
file contains many CA certificates.
If this option is used several times, the last one will be used.
|
-f, --fail
(HTTP) Fail silently (no output at all) on server errors. This is mostly done to better enable
scripts and other tools to deal with failed attempts. In normal cases when an HTTP server fails to
deliver a document, it returns an HTML document stating so (which often also describes why and
more). This flag will prevent curl from outputting that and instead return error 22.
This method is not fail-safe and there are occasions where non-successful response codes will slip
through, especially when authentication is involved (response codes 401 and 407).
|
-F, --form <name=content>
(HTTP) This lets curl emulate a filled-in form in which a user has pressed the submit button. This
causes curl to POST data using the Content-Type multipart/form-data according to RFC 2388. This
enables uploading of binary files etc. To force the 'content' part to be a file, prefix the file
name with an @ sign. To just get the content part from a file, prefix the file name with the
symbol <. The difference between @ and < is then that @ makes a file get attached in the post as a
file upload, while the < makes a text field and just get the contents for that text field from a
file.
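A sketch against a hypothetical upload endpoint showing the difference between the two prefixes:

```shell
# @ attaches a file as an upload, < fills a text field from a file (hypothetical endpoint):
# curl -F 'profile=@photo.jpg' http://example.com/upload
# curl -F 'comment=<notes.txt' http://example.com/upload
printf 'text from a file' > notes.txt
comment=$(cat notes.txt)      # what '<notes.txt' places in the 'comment' text field
echo "$comment"
```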
|
--ftp-account [data]
(FTP) When an FTP server asks for "account data" after user name and password has been provided,
this data is sent off using the ACCT command. (Added in 7.13.0)
If this option is used twice, the second will override the previous use.
|
--ftp-alternative-to-user <command>
(FTP) If authenticating with the USER and PASS commands fails, send this command. When connecting
to Tumbleweed's Secure Transport server over FTPS using a client certificate, using "SITE AUTH"
will tell the server to retrieve the username from the certificate. (Added in 7.15.5)
|
--ftp-create-dirs
(FTP/SFTP) When an FTP or SFTP URL/operation uses a path that doesn't currently exist on the
server, the standard behavior of curl is to fail. Using this option, curl will instead attempt to
create missing directories.
|
--ftp-method [method]
(FTP) Control what method curl should use to reach a file on a FTP(S) server. The method argument
should be one of the following alternatives:
multicwd
curl does a single CWD operation for each path part in the given URL. For deep hierarchies
this means very many commands. This is how RFC 1738 says it should be done. This is the
default but the slowest behavior.
nocwd curl does no CWD at all. curl will do SIZE, RETR, STOR etc and give a full path to the
server for all these commands. This is the fastest behavior.
singlecwd
curl does one CWD with the full target directory and then operates on the file "normally"
(like in the multicwd case). This is somewhat more standards compliant than 'nocwd' but
without the full penalty of 'multicwd'.
(Added in 7.15.1)
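A sketch of the command sequences each method would issue for a hypothetical three-directory-deep URL:

```shell
# Command sequences for ftp://example.com/a/b/c/file.txt under each method (sketch):
#   multicwd : CWD a; CWD b; CWD c; RETR file.txt   (default, one round-trip per path part)
#   singlecwd: CWD a/b/c; RETR file.txt
#   nocwd    : RETR a/b/c/file.txt                  (fastest, no CWD at all)
# curl --ftp-method nocwd ftp://example.com/a/b/c/file.txt
dirs='a/b/c'
multicwd_cwds=$(echo "$dirs" | awk -F'/' '{print NF}')   # CWD commands multicwd would issue
echo "$multicwd_cwds"
```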
|
--ftp-pasv
(FTP) Use passive mode for the data connection. Passive is the internal default behavior, but
this option can be used to override a previous -P, --ftp-port option. (Added in 7.11.0)
If this option is used several times, the following occurrences make no difference. Undoing an
enforced passive mode isn't possible; you must instead enforce the correct -P, --ftp-port
again.
Passive mode means that curl will try the EPSV command first and then PASV, unless --disable-epsv
is used.
|
--ftp-skip-pasv-ip
(FTP) Tell curl to not use the IP address the server suggests in its response to curl's PASV
command when curl connects the data connection. Instead curl will re-use the same IP address it
already uses for the control connection. (Added in 7.14.2)
This option has no effect if PORT, EPRT or EPSV is used instead of PASV.
|
--ftp-pret
(FTP) Tell curl to send a PRET command before PASV (and EPSV). Certain FTP servers, mainly drftpd,
require this non-standard command for directory listings as well as up and downloads in PASV mode.
(Added in 7.20.x)
|
--ftp-ssl-ccc
(FTP) Use CCC (Clear Command Channel): shuts down the SSL/TLS layer after authenticating. The rest
of the control channel communication will be unencrypted. This allows NAT routers to follow the
FTP transaction. The default mode is passive. See --ftp-ssl-ccc-mode for other modes. (Added in
7.16.1)
|
--ftp-ssl-ccc-mode [active/passive]
(FTP) Use CCC (Clear Command Channel): sets the CCC mode. The passive mode will not initiate the
shutdown, but instead wait for the server to do it, and will not reply to the shutdown from the
server. The active mode initiates the shutdown and waits for a reply from the server. (Added in
7.16.2)
|
--ftp-ssl-control
(FTP) Require SSL/TLS for the FTP login, clear for transfer. Allows secure authentication, but
non-encrypted data transfers for efficiency. Fails the transfer if the server doesn't support
SSL/TLS. (Added in 7.16.0)
|
--form-string <name=string>
(HTTP) Similar to --form except that the value string for the named parameter is used literally.
Leading '@' and '<' characters, and the ';type=' string in the value have no special meaning. Use
this in preference to --form if there's any possibility that the string value may accidentally
trigger the '@' or '<' features of --form.
|
-g, --globoff
This option switches off the "URL globbing parser". When you set this option, you can specify URLs
that contain the letters {}[] without having them interpreted by curl itself. Note that
these letters are not normal legal URL contents but they should be encoded according to the URI
standard.
|
-G, --get
When used, this option will make all data specified with -d, --data or --data-binary be used in
an HTTP GET request instead of the POST request that otherwise would be used. The data will be
appended to the URL with a '?' separator.
If used in combination with -I, the POST data will instead be appended to the URL with a HEAD
request.
If this option is used several times, the following occurrences make no difference. This is
because undoing a GET doesn't make sense, but you should then instead enforce the alternative
method you prefer.
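A sketch with a hypothetical endpoint showing how -G relocates the -d pieces:

```shell
# -G moves the -d pieces into the query string (hypothetical endpoint):
# curl -G -d name=daniel -d skill=lousy http://example.com/search
# requests this URL with GET instead of POSTing a body:
url='http://example.com/search?name=daniel&skill=lousy'
echo "$url"
```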
|
-H, --header <header>
(HTTP) Extra header to use when getting a web page. You may specify any number of extra headers.
Note that if you should add a custom header that has the same name as one of the internal ones
curl would use, your externally set header will be used instead of the internal one. This allows
you to make even trickier stuff than curl would normally do. You should not replace internally set
headers without knowing perfectly well what you're doing. Remove an internal header by giving a
replacement without content on the right side of the colon, as in: -H "Host:".
curl will make sure that each header you add/replace is sent with the proper end-of-line marker,
you should thus not add that as a part of the header content: do not add newlines or carriage
returns, they will only mess things up for you.
See also the -A, --user-agent and -e, --referer options.
This option can be used multiple times to add/replace/remove multiple headers.
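Adding, replacing, and removing headers can be combined in one command; the URL and header values here are hypothetical:

```shell
# Add, replace, and remove headers in one command (hypothetical values):
# curl -H 'X-Trace-Id: 123'            http://example.com/   # add a custom header
# curl -H 'User-Agent: my-agent/1.0'   http://example.com/   # replace an internal header
# curl -H 'Accept:'                    http://example.com/   # remove an internal header
header='X-Trace-Id: 123'   # no trailing newline or carriage return in the value
echo "$header"
```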
|
--hostpubmd5 <md5>
Pass a string containing 32 hexadecimal digits. The string should be the 128 bit MD5 checksum of
the remote host's public key, curl will refuse the connection with the host unless the md5sums
match. This option is only for SCP and SFTP transfers. (Added in 7.17.1)
|
--ignore-content-length
(HTTP) Ignore the Content-Length header. This is particularly useful for servers running Apache
1.x, which will report incorrect Content-Length for files larger than 2 gigabytes.
|
-i, --include
(HTTP) Include the HTTP-header in the output. The HTTP-header includes things like server-name,
date of the document, HTTP-version and more...
|
-I, --head
(HTTP/FTP/FILE) Fetch the HTTP-header only! HTTP-servers feature the command HEAD which this uses
to get nothing but the header of a document. When used on an FTP or FILE file, curl displays the
file size and last modification time only.
|
--interface <name>
Perform an operation using a specified interface. You can enter interface name, IP address or host
name. An example could look like:
curl --interface eth0:1 http://www.netscape.com/
|
-j, --junk-session-cookies
(HTTP) When curl is told to read cookies from a given file, this option will make it discard all
"session cookies". This will basically have the same effect as if a new session is started.
Typical browsers always discard session cookies when they're closed down.
|
-J, --remote-header-name
(HTTP) This option tells the -O, --remote-name option to use the server-specified Content-
Disposition filename instead of extracting a filename from the URL.
|
-k, --insecure
(SSL) This option explicitly allows curl to perform "insecure" SSL connections and transfers. All
SSL connections are attempted to be made secure by using the CA certificate bundle installed by
default. This makes all connections considered "insecure" fail unless -k, --insecure is used.
See this online resource for further details: http://curl.haxx.se/docs/sslcerts.html
|
-K, --config <config file>
Specify which config file to read curl arguments from. The config file is a text file in which
command line arguments can be written which then will be used as if they were written on the
actual command line. Options and their parameters must be specified on the same config file line,
separated by whitespace, colon, the equals sign or any combination thereof (however, the preferred
separator is the equals sign). If the parameter is to contain whitespace, the parameter must be
enclosed within quotes. Within double quotes, the following escape sequences are available: \\,
\", \t, \n, \r and \v. A backslash preceding any other letter is ignored. If the first column of a
config line is a '#' character, the rest of the line will be treated as a comment. Only write one
option per physical line in the config file.
Specify the filename to -K, --config as '-' to make curl read the file from stdin.
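A sketch of a config file read with "curl -K myrc" (the option values are hypothetical); one option per physical line, with '=' as the preferred separator and quotes around values containing whitespace:

```shell
# Write a minimal curl config file (hypothetical values) and inspect it:
cat > myrc <<'EOF'
# fetch this URL
url = "http://example.com/"
user-agent = "my agent/1.0"
output = "saved.html"
EOF
# curl -K myrc
grep -c '=' myrc          # three option lines use the '=' separator
```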
|
--keepalive-time <seconds>
This option sets the time a connection needs to remain idle before sending keepalive probes and
the time between individual keepalive probes. It is currently effective on operating systems
offering the TCP_KEEPIDLE and TCP_KEEPINTVL socket options (meaning Linux, recent AIX, HP-UX and
more). This option has no effect if --no-keepalive is used. (Added in 7.18.0)
If this option is used multiple times, the last occurrence sets the amount.
|
--key <key>
(SSL/SSH) Private key file name. Allows you to provide your private key in this separate file.
If this option is used several times, the last one will be used.
|
--key-type <type>
(SSL) Private key file type. Specify which type your --key provided private key is. DER, PEM, and
ENG are supported. If not specified, PEM is assumed.
If this option is used several times, the last one will be used.
|
--krb <level>
(FTP) Enable Kerberos authentication and use. The level must be entered and should be one of
'clear', 'safe', 'confidential', or 'private'. Should you use a level that is not one of these,
'private' will instead be used.
This option requires a library built with kerberos4 or GSSAPI (GSS-Negotiate) support. This is not
very common. Use -V, --version to see if your curl supports it.
If this option is used several times, the last one will be used.
|
-l, --list-only
(FTP) When listing an FTP directory, this switch forces a name-only view. Especially useful if
you want to machine-parse the contents of an FTP directory since the normal directory view doesn't
use a standard look or format.
This option causes an FTP NLST command to be sent. Some FTP servers list only files in their
response to NLST; they do not include subdirectories and symbolic links.
|
-L, --location
(HTTP/HTTPS) If the server reports that the requested page has moved to a different location
(indicated with a Location: header and a 3XX response code), this option will make curl redo the
request on the new place. If used together with -i, --include or -I, --head, headers from all
requested pages will be shown. When authentication is used, curl only sends its credentials to the
initial host. If a redirect takes curl to a different host, it won't be able to intercept the
user+password. See also --location-trusted on how to change this. You can limit the amount of
redirects to follow by using the --max-redirs option.
When curl follows a redirect and the request is not a plain GET (for example POST or PUT), it will
do the following request with a GET if the HTTP response was 301, 302, or 303. If the response
code was any other 3xx code, curl will re-send the following request using the same unmodified
method.
|
--libcurl <file>
Append this option to any ordinary curl command line, and you will get a libcurl-using source code
written to the file that does the equivalent of what your command-line operation does!
NOTE: this does not properly support -F and the sending of multipart formposts, so in those cases
the output program will be missing necessary calls to curl_formadd(3), and possibly more.
If this option is used several times, the last given file name will be used. (Added in 7.16.1)
|
--limit-rate <speed>
Specify the maximum transfer rate you want curl to use. This feature is useful if you have a
limited pipe and you'd like your transfer not to use your entire bandwidth.
The given speed is measured in bytes/second, unless a suffix is appended. Appending 'k' or 'K'
will count the number as kilobytes, 'm' or 'M' makes it megabytes, while 'g' or 'G' makes it
gigabytes. Examples: 200K, 3m and 1G.
The given rate is the average speed counted during the entire transfer. It means that curl might
use higher transfer speeds in short bursts, but over time it uses no more than the given rate.
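As a sketch (hypothetical URL), capping a download at 200 kilobytes/second; the arithmetic shows what the 'K' suffix works out to:

```shell
# Cap a download at 200 kilobytes/second (hypothetical URL); suffixes K, M, G scale the number:
# curl --limit-rate 200K -O http://example.com/big.iso
bytes_per_sec=$((200 * 1024))   # what '200K' works out to
echo "$bytes_per_sec"
```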
|
--local-port <num>[-num]
Set a preferred number or range of local port numbers to use for the connection(s). Note that
port numbers by nature are a scarce resource that will be busy at times so setting this range to
something too narrow might cause unnecessary connection setup failures. (Added in 7.15.2)
|
--location-trusted
(HTTP/HTTPS) Like -L, --location, but will allow sending the name + password to all hosts that the
site may redirect to. This may or may not introduce a security breach if the site redirects you to
a site to which you'll send your authentication info (which is plaintext in the case of HTTP Basic
authentication).
|
-m, --max-time <seconds>
Maximum time in seconds that you allow the whole operation to take. This is useful for preventing
your batch jobs from hanging for hours due to slow networks or links going down. See also the
--connect-timeout option.
If this option is used several times, the last one will be used.
|
--mail-from <address>
(SMTP) Specify a single address that the given mail should get sent from.
(Added in 7.20.0)
|
--max-filesize <bytes>
Specify the maximum size (in bytes) of a file to download. If the file requested is larger than
this value, the transfer will not start and curl will return with exit code 63.
NOTE: The file size is not always known prior to download, and for such files this option has no
effect even if the file transfer ends up being larger than this given limit. This concerns both
FTP and HTTP transfers.
|
--mail-rcpt <address>
(SMTP) Specify a single address that the given mail should get sent to. This option can be used
multiple times to specify many recipients.
(Added in 7.20.0)
|
--max-redirs <num>
Set maximum number of redirection-followings allowed. If -L, --location is used, this option can
be used to prevent curl from following redirections "in absurdum". By default, the limit is set to
50 redirections. Set this option to -1 to make it limitless.
If this option is used several times, the last one will be used.
|
-n, --netrc
Makes curl scan the .netrc (_netrc on Windows) file in the user's home directory for login name
and password. This is typically used for FTP on UNIX. If used with HTTP, curl will enable user
authentication. See netrc(4) or ftp(1) for details on the file format. Curl will not complain if
that file doesn't have the right permissions (it should not be either world- or group-readable).
The environment variable "HOME" is used to find the home directory.
A quick and very simple example of how to set up a .netrc to allow curl to FTP to the machine
host.domain.com with user name 'myself' and password 'secret' should look similar to:
machine host.domain.com login myself password secret
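The example above can be scripted. This sketch writes the file to a temporary path (rather than the real $HOME) and locks down its permissions, since the file should not be group- or world-readable:

```shell
# Create a demo .netrc at a temporary path so $HOME is untouched,
# then restrict it to owner-only access.
netrc=/tmp/demo-netrc
printf 'machine host.domain.com login myself password secret\n' > "$netrc"
chmod 600 "$netrc"
# A real invocation could then use it via --netrc-file, e.g.:
#   curl --netrc-file "$netrc" ftp://host.domain.com/file
stat -c '%a' "$netrc"
```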
|
-N, --no-buffer
Disables the buffering of the output stream. In normal work situations, curl will use a standard
buffered output stream that will have the effect that it will output the data in chunks, not
necessarily exactly when the data arrives. Using this option will disable that buffering.
Note that this is the negated option name; you can thus use --buffer to enforce the buffering.
|
--netrc-file
This option is similar to --netrc, except that you provide the path (absolute or relative) to the
netrc file that Curl should use. You can only specify one netrc file per invocation. If several
--netrc-file options are provided, only the last one will be used. (Added in 7.21.5)
|
--netrc-optional
Very similar to --netrc, but this option makes the .netrc usage optional and not mandatory as the
--netrc option does.
|
--negotiate
(HTTP) Enables GSS-Negotiate authentication. The GSS-Negotiate method was designed by Microsoft
and is used in their web applications. It is primarily meant as a support for Kerberos5
authentication but may also be used along with another authentication method. For more information
see IETF draft draft-brezak-spnego-http-04.txt.
If you want to enable Negotiate for your proxy authentication, then use --proxy-negotiate.
This option requires a library built with GSSAPI support. This is not very common. Use -V,
--version to see if your version supports GSS-Negotiate.
|
--no-keepalive
Disables the use of keepalive messages on the TCP connection, as by default curl enables them.
|
--no-sessionid
(SSL) Disable curl's use of SSL session-ID caching. By default all transfers are done using the
cache. Note that while nothing should ever get hurt by attempting to reuse SSL session-IDs, there
seem to be broken SSL implementations in the wild that may require you to disable this in order
for you to succeed. (Added in 7.16.0)
|
--noproxy <no-proxy-list>
Comma-separated list of hosts which do not use a proxy, if one is specified. The only wildcard is
a single * character, which matches all hosts, and effectively disables the proxy. Each name in
this list is matched as either a domain which contains the hostname, or the hostname itself. For
example, local.com would match local.com, local.com:80, and www.local.com, but not
www.notlocal.com. (Added in 7.19.4).
|
--ntlm
(HTTP) Enables NTLM authentication. The NTLM authentication method was designed by Microsoft and
is used by IIS web servers. It is a proprietary protocol, reverse-engineered by clever people and
implemented in curl based on their efforts. This kind of behavior should not be endorsed; you
should encourage everyone who uses NTLM to switch to a public and documented authentication method
instead, such as Digest.
If you want to enable NTLM for your proxy authentication, then use --proxy-ntlm.
This option requires a library built with SSL support. Use -V, --version to see if your curl
supports NTLM.
If this option is used several times, the following occurrences make no difference.
|
-o, --output <file>
Write output to <file> instead of stdout. If you are using {} or [] to fetch multiple documents,
you can use '#' followed by a number in the <file> specifier. That variable will be replaced with
the current string for the URL being fetched. Like in:
curl http://{one,two}.site.com -o "file_#1.txt"
or use several variables like:
curl http://{site,host}.host[1-5].com -o "#1_#2"
You may use this option as many times as the number of URLs you have.
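The globbing above can be tried offline with file:// URLs (the /tmp paths are placeholders):

```shell
# Fetch two local files via {} globbing; '#1' in the -o name expands
# to the current glob string, producing /tmp/glob_one.txt and
# /tmp/glob_two.txt.
printf 'A' > /tmp/one.txt
printf 'B' > /tmp/two.txt
curl --silent "file:///tmp/{one,two}.txt" -o "/tmp/glob_#1.txt"
cat /tmp/glob_one.txt /tmp/glob_two.txt
echo
```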
|
-O, --remote-name
Write output to a local file named like the remote file we get. (Only the file part of the remote
file is used, the path is cut off.)
The remote file name to use for saving is extracted from the given URL, nothing else.
Consequently, the file will be saved in the current working directory. If you want the file
saved in a different directory, make sure you change current working directory before you invoke
curl with the -O, --remote-name flag!
You may use this option as many times as the number of URLs you have.
|
-p, --proxytunnel
When an HTTP proxy is used (-x, --proxy), this option will cause non-HTTP protocols to attempt to
tunnel through the proxy instead of merely using it to do HTTP-like operations. The tunnel
approach is made with the HTTP proxy CONNECT request and requires that the proxy allows direct
connect to the remote port number curl wants to tunnel through to.
|
-P, --ftp-port <address>
(FTP) Reverses the default initiator/listener roles when connecting with FTP. This switch makes
curl use active mode. In practice, curl then tells the server to connect back to the client's
specified address and port, while passive mode asks the server to set up an IP address and port
for it to connect to. <address> should be one of:
interface   i.e. "eth0", to specify which interface's IP address you want to use (Unix only)
IP address  i.e. "192.168.10.1", to specify the exact IP address
host name   i.e. "my.host.domain", to specify the machine
-           make curl pick the same IP address that is already used for the control connection
|
--pass <phrase>
(SSL/SSH) Passphrase for the private key
If this option is used several times, the last one will be used.
|
--post301
Tells curl to respect RFC 2616/10.3.2 and not convert POST requests into GET requests when
following a 301 redirection. The non-RFC behaviour is ubiquitous in web browsers, so curl does the
conversion by default to maintain consistency. However, a server may require a POST to remain a
POST after such a redirection. This option is meaningful only when using -L, --location (Added in
7.17.1)
|
--post302
Tells curl to respect RFC 2616/10.3.2 and not convert POST requests into GET requests when
following a 302 redirection. The non-RFC behaviour is ubiquitous in web browsers, so curl does the
conversion by default to maintain consistency. However, a server may require a POST to remain a
POST after such a redirection. This option is meaningful only when using -L, --location (Added in
7.19.1)
|
--proto <protocols>
Tells curl to use the listed protocols for its initial retrieval. Protocols are evaluated left to
right, are comma separated, and are each a protocol name or 'all', optionally prefixed by zero or
more modifiers. Available modifiers are:
+ Permit this protocol in addition to protocols already permitted (this is the default if no
modifier is used).
- Deny this protocol, removing it from the list of protocols already permitted.
= Permit only this protocol (ignoring the list already permitted), though subject to later
modification by subsequent entries in the comma separated list.
For example:
|
--proto -ftps uses the default protocols, but disables ftps
|
--proto -all,https,+http
only enables http and https
|
--proto =http,https
also only enables http and https
Unknown protocols produce a warning. This allows scripts to safely rely on being able to disable
potentially dangerous protocols, without relying upon support for that protocol being built into
curl to avoid an error.
This option can be used multiple times, in which case the effect is the same as concatenating the
protocols into one instance of the option.
(Added in 7.20.2)
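The modifiers can be demonstrated offline with the FILE protocol (paths are placeholders): permitting only file allows a file:// fetch, while denying file makes the same fetch fail.

```shell
printf 'ok\n' > /tmp/proto-demo.txt
# '=' permits only the listed protocol; this fetch succeeds.
curl --silent --proto =file "file:///tmp/proto-demo.txt"
# '-' removes the protocol from the permitted set; this fetch fails.
curl --silent --proto -file "file:///tmp/proto-demo.txt" || echo "file denied"
```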
|
--proto-redir <protocols>
Tells curl to use the listed protocols after a redirect. See --proto for how protocols are
represented.
(Added in 7.20.2)
|
--proxy-anyauth
Tells curl to pick a suitable authentication method when communicating with the given proxy. This
might cause an extra request/response round-trip. (Added in 7.13.2)
|
--proxy-basic
Tells curl to use HTTP Basic authentication when communicating with the given proxy. Use --basic
for enabling HTTP Basic with a remote host. Basic is the default authentication method curl uses
with proxies.
|
--proxy-digest
Tells curl to use HTTP Digest authentication when communicating with the given proxy. Use --digest
for enabling HTTP Digest with a remote host.
|
--proxy-negotiate
Tells curl to use HTTP Negotiate authentication when communicating with the given proxy. Use
--negotiate for enabling HTTP Negotiate with a remote host. (Added in 7.17.1)
|
--proxy-ntlm
Tells curl to use HTTP NTLM authentication when communicating with the given proxy. Use --ntlm for
enabling NTLM with a remote host.
|
--pubkey <key>
(SSH) Public key file name. Allows you to provide your public key in this separate file.
If this option is used several times, the last one will be used.
|
-q
If used as the first parameter on the command line, the curlrc config file will not be read and
used. See -K, --config for details on the default config file search path.
|
-Q, --quote <command>
(FTP/SFTP) Send an arbitrary command to the remote FTP or SFTP server. Quote commands are sent
BEFORE the transfer takes place (just after the initial PWD command in an FTP transfer, to be
exact). To make commands take place after a successful transfer, prefix them with a dash '-'. To
make commands be sent after libcurl has changed the working directory, just before the transfer
command(s), prefix the command with a '+' (this is only supported for FTP). You may specify any
number of commands. If the server returns failure for one of the commands, the entire operation
will be aborted. You must send syntactically correct FTP commands as RFC 959 defines to FTP
servers, or one of the commands listed below to SFTP servers. This option can be used multiple
times. When speaking to an FTP server, prefix the command with an asterisk (*) to make libcurl
continue even if the command fails, since by default curl stops at the first failure.
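A sketch of the prefixes; the host, credentials, and file names are placeholders, and the command is built and printed rather than executed since it needs a live FTP server:

```shell
# '-DELE ...' runs only after a successful transfer; '*SITE ...'
# tells curl to continue even if that particular command fails.
cmd='curl -u demo:secret ftp://ftp.example.com/reports/latest.csv -Q "-DELE reports/latest.csv" -Q "*SITE CHMOD 644 other.csv" -o latest.csv'
printf '%s\n' "$cmd"
```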
|
-r, --range <range>
(HTTP/FTP/SFTP/FILE) Retrieve a byte range (i.e. a partial document) from an HTTP/1.1, FTP or SFTP
server or a local FILE. Ranges can be specified in a number of ways.
0-499 specifies the first 500 bytes
500-999 specifies the second 500 bytes
-500 specifies the last 500 bytes
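These ranges can be tried offline against a local file (the path is a placeholder), since -r also applies to the FILE protocol:

```shell
printf '0123456789' > /tmp/range-demo.txt
curl --silent -r 0-4 "file:///tmp/range-demo.txt"; echo   # first 5 bytes
curl --silent -r 5-9 "file:///tmp/range-demo.txt"; echo   # bytes 5 through 9
```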
|
-R, --remote-time
When used, this will make libcurl attempt to figure out the timestamp of the remote file, and if
that is available make the local file get that same timestamp.
|
--random-file <file>
(SSL) Specify the path name to file containing what will be considered as random data. The data is
used to seed the random engine for SSL connections. See also the --egd-file option.
|
--raw
When used, it disables all internal HTTP decoding of content or transfer encodings and instead
makes them passed on unaltered, raw. (Added in 7.16.2)
|
--remote-name-all
This option changes the default action for all given URLs to be dealt with as if -O, --remote-name
were used for each one. So if you want to disable that for a specific URL after --remote-name-all
has been used, you must use "-o -" or --no-remote-name. (Added in 7.19.0)
|
--resolve <host:port:address>
Provide a custom address for a specific host and port pair. Using this, you can make the curl
request(s) use a specified address and prevent the otherwise normally resolved address from being
used. Consider it a sort of /etc/hosts alternative provided on the command line. The port number
should be the number used for the specific protocol the host will be used for. This means you need
several entries if you want to provide addresses for the same host on different ports.
This option can be used many times to add many host names to resolve.
(Added in 7.21.3)
|
--retry <num>
If a transient error is returned when curl tries to perform a transfer, it will retry this number
of times before giving up. Setting the number to 0 makes curl do no retries (which is the
default). Transient error means either: a timeout, an FTP 4xx response code or an HTTP 5xx
response code.
When curl is about to retry a transfer, it will first wait one second and then for all forthcoming
retries it will double the waiting time until it reaches 10 minutes which then will be the delay
between the rest of the retries. By using --retry-delay you disable this exponential backoff
algorithm. See also --retry-max-time to limit the total time allowed for retries. (Added in
7.12.3)
If this option is used multiple times, the last occurrence decides the amount.
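The described backoff (one second, doubling per retry, capped at 10 minutes) can be sketched in plain shell arithmetic:

```shell
# Print the wait before each of the first 11 retries: 1, 2, 4, ...,
# 512, then capped at 600 seconds for every retry after that.
delay=1
attempt=1
while [ "$attempt" -le 11 ]; do
  printf 'retry %d: wait %ds\n' "$attempt" "$delay"
  delay=$((delay * 2))
  if [ "$delay" -gt 600 ]; then delay=600; fi
  attempt=$((attempt + 1))
done
```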
|
--retry-delay <seconds>
Make curl sleep this amount of time before each retry when a transfer has failed with a transient
error (it changes the default backoff time algorithm between retries). This option is only
interesting if --retry is also used. Setting this delay to zero will make curl use the default
backoff time. (Added in 7.12.3)
If this option is used multiple times, the last occurrence determines the amount.
|
--retry-max-time <seconds>
The retry timer is reset before the first transfer attempt. Retries will be done as usual (see
--retry) as long as the timer hasn't reached this given limit. Notice that if the timer hasn't
reached the limit, the request will be made and while performing, it may take longer than this
given time period. To limit a single request's maximum time, use -m, --max-time. Set this option
to zero to not timeout retries. (Added in 7.12.3)
If this option is used multiple times, the last occurrence determines the amount.
|
-s, --silent
Silent or quiet mode. Don't show progress meter or error messages. Makes Curl mute.
|
-S, --show-error
When used with -s it makes curl show an error message if it fails.
|
--ssl (FTP, POP3, IMAP, SMTP) Try to use SSL/TLS for the connection. Reverts to a non-secure connection
if the server doesn't support SSL/TLS. See also --ftp-ssl-control and --ssl-reqd for different
levels of encryption required. (Added in 7.20.0)
This option was formerly known as --ftp-ssl (Added in 7.11.0). That option name can still be used
but will be removed in a future version.
|
--ssl-reqd
(FTP, POP3, IMAP, SMTP) Require SSL/TLS for the connection. Terminates the connection if the
server doesn't support SSL/TLS. (Added in 7.20.0)
This option was formerly known as --ftp-ssl-reqd (added in 7.15.5). That option name can still be
used but will be removed in a future version.
|
--socks4 <host[:port]>
Use the specified SOCKS4 proxy. If the port number is not specified, it is assumed at port 1080.
(Added in 7.15.2)
This option overrides any previous use of -x, --proxy, as they are mutually exclusive.
Since 7.21.7, this option is superfluous since you can specify a socks4 proxy with -x, --proxy
using a socks4:// protocol prefix.
If this option is used several times, the last one will be used.
|
--socks4a <host[:port]>
Use the specified SOCKS4a proxy. If the port number is not specified, it is assumed at port 1080.
(Added in 7.18.0)
This option overrides any previous use of -x, --proxy, as they are mutually exclusive.
Since 7.21.7, this option is superfluous since you can specify a socks4a proxy with -x, --proxy
using a socks4a:// protocol prefix.
If this option is used several times, the last one will be used.
|
--socks5-hostname <host[:port]>
Use the specified SOCKS5 proxy (and let the proxy resolve the host name). If the port number is
not specified, it is assumed at port 1080. (Added in 7.18.0)
This option overrides any previous use of -x, --proxy, as they are mutually exclusive.
Since 7.21.7, this option is superfluous since you can specify a socks5 hostname proxy with -x,
--proxy using a socks5h:// protocol prefix.
If this option is used several times, the last one will be used. (This option was previously
wrongly documented and used as --socks without the number appended.)
|
--socks5 <host[:port]>
Use the specified SOCKS5 proxy - but resolve the host name locally. If the port number is not
specified, it is assumed at port 1080.
This option overrides any previous use of -x, --proxy, as they are mutually exclusive.
|
--socks5-gssapi-service <servicename>
The default service name for a socks server is rcmd/server-fqdn. This option allows you to change
it.
Examples: --socks5 proxy-name --socks5-gssapi-service sockd would use sockd/proxy-name, while
--socks5 proxy-name --socks5-gssapi-service sockd/real-name would use sockd/real-name for cases
where the proxy-name does not match the principal name. (Added in 7.19.4)
|
--socks5-gssapi-nec
As part of the gssapi negotiation a protection mode is negotiated. RFC 1961 says in section
4.3/4.4 it should be protected, but the NEC reference implementation does not. The option
--socks5-gssapi-nec allows the unprotected exchange of the protection mode negotiation. (Added in
7.19.4).
|
--stderr <file>
Redirect all writes to stderr to the specified file instead. If the file name is a plain '-', it
is instead written to stdout. This option has no point when you're using a shell with decent
redirecting capabilities.
If this option is used several times, the last one will be used.
|
-t, --telnet-option <OPT=val>
Pass options to the telnet protocol. Supported options are:
TTYPE=<term> Sets the terminal type.
XDISPLOC=<X display> Sets the X display location.
NEW_ENV=<var,val> Sets an environment variable.
|
-T, --upload-file <file>
This transfers the specified local file to the remote URL. If there is no file part in the
specified URL, Curl will append the local file name. NOTE that you must use a trailing / on the
last directory to really prove to Curl that there is no file name or curl will think that your
last directory name is the remote file name to use. That will most likely cause the upload
operation to fail. If this is used on an HTTP(S) server, the PUT command will be used.
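The trailing-slash rule can be exercised offline with a file:// target (paths are placeholders; the same rule applies to FTP/SFTP/HTTP URLs):

```shell
mkdir -p /tmp/upload-demo
printf 'payload\n' > /tmp/report.txt
# Trailing '/' means the URL has no file part, so curl appends the
# local name and writes /tmp/upload-demo/report.txt.
curl --silent -T /tmp/report.txt "file:///tmp/upload-demo/"
cat /tmp/upload-demo/report.txt
```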
|
--tcp-nodelay
Turn on the TCP_NODELAY option. See the curl_easy_setopt(3) man page for details about this
option. (Added in 7.11.2)
|
--tftp-blksize <value>
(TFTP) Set TFTP BLKSIZE option (must be >512). This is the block size that curl will try to use
when transferring data to or from a TFTP server. By default 512 bytes will be used.
If this option is used several times, the last one will be used.
(Added in 7.20.0)
|
--tlsauthtype <authtype>
Set TLS authentication type. Currently, the only supported option is "SRP", for TLS-SRP (RFC
5054). If --tlsuser and --tlspassword are specified but --tlsauthtype is not, then this option
defaults to "SRP". (Added in 7.21.4)
|
--tlsuser <user>
Set username for use with the TLS authentication method specified with --tlsauthtype. Requires
that --tlspassword also be set. (Added in 7.21.4)
|
--tlspassword <password>
Set password for use with the TLS authentication method specified with --tlsauthtype. Requires
that --tlsuser also be set. (Added in 7.21.4)
|
--tr-encoding
(HTTP) Request a compressed Transfer-Encoding response using one of the algorithms libcurl
supports, and uncompress the data while receiving it.
(Added in 7.21.6)
|
--trace <file>
Enables a full trace dump of all incoming and outgoing data, including descriptive information, to
the given output file. Use "-" as filename to have the output sent to stdout.
This option overrides previous uses of -v, --verbose or --trace-ascii.
If this option is used several times, the last one will be used.
|
--trace-ascii <file>
Enables a full trace dump of all incoming and outgoing data, including descriptive information, to
the given output file. Use "-" as filename to have the output sent to stdout.
|
--trace-time
Prepends a time stamp to each trace or verbose line that curl displays. (Added in 7.14.0)
|
-u, --user <user:password>
Specify the user name and password to use for server authentication. Overrides -n, --netrc and
--netrc-optional.
If you just give the user name (without entering a colon) curl will prompt for a password.
If you use an SSPI-enabled curl binary and do NTLM authentication, you can force curl to pick up
the user name and password from your environment by simply specifying a single colon with this
option: "-u :".
If this option is used several times, the last one will be used.
|
-U, --proxy-user <user:password>
Specify the user name and password to use for proxy authentication.
If you use an SSPI-enabled curl binary and do NTLM authentication, you can force curl to pick up
the user name and password from your environment by simply specifying a single colon with this
option: "-U :".
If this option is used several times, the last one will be used.
|
--url <URL>
Specify a URL to fetch. This option is mostly handy when you want to specify URL(s) in a config
file.
This option may be used any number of times. To control where this URL is written, use the -o,
--output or the -O, --remote-name options.
|
-v, --verbose
Makes the fetching more verbose/talkative. Mostly useful for debugging. A line starting with '>'
means "header data" sent by curl, '<' means "header data" received by curl that is hidden in
normal cases, and a line starting with '*' means additional info provided by curl.
|
-w, --write-out <format>
Defines what to display on stdout after a completed and successful operation. The format is a
string that may contain plain text mixed with any number of variables. The string can be specified
as "string", to get read from a particular file you specify it "@filename" and to tell curl to
read the format from stdin you write "@-".
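A runnable sketch using the %{size_download} write-out variable; file:// and the /tmp path keep it offline:

```shell
printf '12345' > /tmp/wout-demo.txt
# -o /dev/null discards the body, so only the -w string is printed
# after the transfer completes.
curl --silent -o /dev/null \
     -w 'downloaded %{size_download} bytes\n' "file:///tmp/wout-demo.txt"
```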
|
-x, --proxy <[protocol://][user:password@]proxyhost[:port]>
Use the specified HTTP proxy. If the port number is not specified, it is assumed at port 1080.
|
-X, --request <command>
(HTTP) Specifies a custom request method to use when communicating with the HTTP server. The
specified request will be used instead of the method otherwise used (which defaults to GET). Read
the HTTP 1.1 specification for details and explanations. Common additional HTTP requests include
PUT and DELETE, but related technologies like WebDAV offer PROPFIND, COPY, MOVE and more.
(FTP) Specifies a custom FTP command to use instead of LIST when doing file lists with FTP.
If this option is used several times, the last one will be used.
|
-y, --speed-time <time>
If a download is slower than speed-limit bytes per second during a speed-time period, the download
gets aborted. If speed-time is used, the default speed-limit will be 1 unless set with -Y.
This option controls transfers and thus will not affect slow connects etc. If this is a concern
for you, try the --connect-timeout option.
If this option is used several times, the last one will be used.
|
-Y, --speed-limit <speed>
If a download is slower than this given speed (in bytes per second) for speed-time seconds it gets
aborted. speed-time is set with -y and is 30 if not set.
If this option is used several times, the last one will be used.
|
-z, --time-cond <date expression>
(HTTP/FTP/FILE) Request a file that has been modified later than the given time and date, or one
that has been modified before that time. The date expression can be all sorts of date strings or
if it doesn't match any internal ones, it tries to get the time from a given file name instead!
See the curl_getdate(3) man pages for date expression details.
Start the date expression with a dash (-) to make it request a document that is older than the
given date/time; the default is a document that is newer than the specified date/time.
If this option is used several times, the last one will be used.
|
-h, --help
Usage help.
|
-M, --manual
Manual. Display the huge help text.
|
-V, --version
Displays information about curl and the libcurl version it uses.
|