<!-- docs/options.md (4f6c1458..8dd967da) -->
| Short | Long | Argument | Description |
| --- | --- | --- | --- |
| -h | --help | | Print this help message and exit |
| | --version | | Print program version and exit |
| -i | --input-file | FILE | Download URLs found in FILE (- for stdin). More than one --input-file can be specified |
| -d | --destination | PATH | Target location for file downloads |
| -D | --directory | PATH | Exact location for file downloads |
| -f | --filename | FORMAT | Filename format string for downloaded files (/O for "original" filenames) |
| | --proxy | URL | Use the specified proxy |
| | --source-address | IP | Client-side IP address to bind to |
| | --user-agent | UA | User-Agent request header |
| | --clear-cache | MODULE | Delete cached login sessions, cookies, etc. for MODULE (ALL to delete everything) |
| | --cookies | FILE | File to load additional cookies from |
| | --cookies-from-browser | BROWSER | Name of the browser to load cookies from, with optional keyring name prefixed with +, profile prefixed with :, and container prefixed with :: (none for no container) |

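The BROWSER argument to --cookies-from-browser packs several optional parts into one string. A toy decomposition of that grammar, purely illustrative (this is not the tool's actual parser):

```python
def parse_browser_spec(spec):
    """Split BROWSER[+KEYRING][:PROFILE][::CONTAINER] into its parts.
    Illustrative sketch only, not the tool's real implementation."""
    container = profile = keyring = None
    if "::" in spec:              # container is prefixed with "::"
        spec, container = spec.split("::", 1)
    if ":" in spec:               # profile is prefixed with ":"
        spec, profile = spec.split(":", 1)
    if "+" in spec:               # keyring name is prefixed with "+"
        spec, keyring = spec.split("+", 1)
    return {"browser": spec, "keyring": keyring,
            "profile": profile, "container": container}

print(parse_browser_spec("firefox:work::personal"))
```

A container part of `none`, per the description above, would disable container lookup in a real implementation.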
| Short | Long | Argument | Description |
| --- | --- | --- | --- |
| -q | --quiet | | Activate quiet mode |
| -v | --verbose | | Print various debugging information |
| -g | --get-urls | | Print URLs instead of downloading |
| -G | --resolve-urls | | Print URLs instead of downloading; resolve intermediary URLs |
| -j | --dump-json | | Print JSON information |
| -s | --simulate | | Simulate data extraction; do not download anything |
| -E | --extractor-info | | Print extractor defaults and settings |
| -K | --list-keywords | | Print a list of available keywords and example values for the given URLs |
| | --list-modules | | Print a list of available extractor modules |
| | --list-extractors | | Print a list of extractor classes with description, (sub)category and example URL |
| | --write-log | FILE | Write logging output to FILE |
| | --write-unsupported | FILE | Write URLs, which get emitted by other extractors but cannot be handled, to FILE |
| | --write-pages | | Write downloaded intermediary pages to files in the current directory to debug problems |

| Short | Long | Argument | Description |
| --- | --- | --- | --- |
| -r | --limit-rate | RATE | Maximum download rate (e.g. 500k or 2.5M) |
| -R | --retries | N | Maximum number of retries for failed HTTP requests or -1 for infinite retries (default: 4) |
| | --http-timeout | SECONDS | Timeout for HTTP connections (default: 30.0) |
| | --sleep | SECONDS | Number of seconds to wait before each download. This can be either a constant value or a range (e.g. 2.7 or 2.0-3.5) |
| | --sleep-request | SECONDS | Number of seconds to wait between HTTP requests during data extraction |
| | --sleep-extractor | SECONDS | Number of seconds to wait before starting data extraction for an input URL |
| | --filesize-min | SIZE | Do not download files smaller than SIZE (e.g. 500k or 2.5M) |
| | --filesize-max | SIZE | Do not download files larger than SIZE (e.g. 500k or 2.5M) |
| | --chunk-size | SIZE | Size of in-memory data chunks (default: 32k) |
| | --no-part | | Do not use .part files |
| | --no-skip | | Do not skip downloads; overwrite existing files |
| | --no-mtime | | Do not set file modification times according to Last-Modified HTTP response headers |
| | --no-download | | Do not download any files |
| | --no-postprocessors | | Do not run any post processors |
| | --no-check-certificate | | Disable HTTPS certificate validation |

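Several downloader options share two small value grammars: sizes and rates with a suffix (500k, 2.5M) and seconds given as a constant or a range (2.7, 2.0-3.5). A minimal sketch of parsing both, assuming binary multiples for the suffixes (the actual multiplier is not specified above):

```python
def parse_size(value):
    """Parse "500k" / "2.5M" style SIZE and RATE values into bytes.
    Assumes k = 1024, M = 1024**2, G = 1024**3 (an assumption)."""
    suffixes = {"k": 1024, "m": 1024 ** 2, "g": 1024 ** 3}
    value = value.strip()
    if value and value[-1].lower() in suffixes:
        return int(float(value[:-1]) * suffixes[value[-1].lower()])
    return int(float(value))

def parse_sleep(value):
    """Parse SECONDS given as a constant ("2.7") or a range ("2.0-3.5");
    returns a (min, max) pair either way."""
    if "-" in value:
        low, high = value.split("-", 1)
        return float(low), float(high)
    return float(value), float(value)

print(parse_size("2.5M"), parse_sleep("2.0-3.5"))
```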
| Short | Long | Argument | Description |
| --- | --- | --- | --- |
| -c | --config | FILE | Additional configuration files |
| -o | --option | OPT | Additional `<key>=<value>` option values |
| | --ignore-config | | Do not read default configuration files |

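-o/--option takes `<key>=<value>` pairs. As a sketch of how such a pair might be folded into a configuration mapping — the dotted-key nesting and the key name used here are illustrative assumptions, not behavior documented above:

```python
def set_option(config, opt):
    """Fold one "<key>=<value>" pair into a config dict.
    Dotted keys creating nested sections is an illustrative assumption."""
    key, _, value = opt.partition("=")
    *parents, leaf = key.split(".")
    node = config
    for name in parents:
        node = node.setdefault(name, {})
    node[leaf] = value
    return config

print(set_option({}, "downloader.rate=500k"))
```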
| Short | Long | Argument | Description |
| --- | --- | --- | --- |
| -u | --username | USER | Username to login with |
| -p | --password | PASS | Password belonging to the given username |
| | --netrc | | Enable .netrc authentication data |

| Short | Long | Argument | Description |
| --- | --- | --- | --- |
| | --download-archive | FILE | Record all downloaded or skipped files in FILE and skip downloading any file already in it |
| -A | --abort | N | Stop current extractor run after N consecutive file downloads were skipped |
| -T | --terminate | N | Stop current and parent extractor runs after N consecutive file downloads were skipped |
| | --range | RANGE | Index range(s) specifying which files to download. These can be either a constant value, range, or slice (e.g. 5, 8-20, or 1:24:3) |
| | --chapter-range | RANGE | Like --range, but applies to manga chapters and other delegated URLs |
| | --filter | EXPR | Python expression controlling which files to download. Files for which the expression evaluates to False are ignored. Available keys are the filename-specific ones listed by -K. Example: --filter "image_width >= 1000 and rating in ('s', 'q')" |
| | --chapter-filter | EXPR | Like --filter, but applies to manga chapters and other delegated URLs |

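The --filter semantics can be mimicked with plain `eval` over a metadata dict. This reproduces the documented example expression; it is a sketch of the behavior, not the tool's code:

```python
# The example expression from the table above.
EXPR = "image_width >= 1000 and rating in ('s', 'q')"

def keep(metadata, expression=EXPR):
    """Return True if the file would be downloaded, False if ignored."""
    return bool(eval(expression, {}, metadata))

print(keep({"image_width": 1200, "rating": "s"}))  # True  -> downloaded
print(keep({"image_width": 800, "rating": "q"}))   # False -> ignored
```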
| Short | Long | Argument | Description |
| --- | --- | --- | --- |
| | --zip | | Store downloaded files in a ZIP archive |
| | --ugoira-conv | | Convert Pixiv Ugoira to WebM (requires FFmpeg) |
| | --ugoira-conv-lossless | | Convert Pixiv Ugoira to WebM in VP9 lossless mode |
| | --ugoira-conv-copy | | Convert Pixiv Ugoira to MKV without re-encoding any frames |
| | --write-metadata | | Write metadata to separate JSON files |
| | --write-info-json | | Write gallery metadata to an info.json file |
| | --write-tags | | Write image tags to separate text files |
| | --mtime-from-date | | Set file modification times according to date metadata |
| | --exec | CMD | Execute CMD for each downloaded file. Example: --exec "convert {} {}.png && rm {}" |
| | --exec-after | CMD | Execute CMD after all files were downloaded successfully. Example: --exec-after "cd {} && convert * ../doc.pdf" |
| -P | --postprocessor | NAME | Activate the specified post processor |
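In --exec and --exec-after, `{}` stands for the downloaded file (or, judging by the --exec-after example above, the target directory). A one-line sketch of the substitution step — illustrative only; real command execution, quoting, and shell handling are not shown:

```python
def build_command(template, path):
    """Replace every "{}" in an --exec CMD template with the given path."""
    return template.replace("{}", path)

print(build_command("convert {} {}.png && rm {}", "/tmp/image.webp"))
```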