What is a LinkCrawler Rule?
LinkCrawler Rules can be used to automatically process added URLs from websites which are not supported by a plugin/by default and to perform specified actions on them.
You can add as many rules as you like, and they can also be chained, e.g. the results of rule 1 get processed again by rule 2.
LinkCrawler Rules are part of JDownloader's advanced features.
You can find them under Settings -> Advanced Settings -> LinkCrawler: LinkCrawlerRules
Click into the Value field so you can modify it, then replace its content with your rule(s).
Also make sure that the "LinkCrawlerRules Checkbox" (first setting in screenshot below) is enabled.
There is no GUI available for this feature.
If you are only here to find out how to add a pre-made LinkCrawler Rule to JD, you may stop reading here; if you want to learn how to create your own LinkCrawler Rules, continue reading.
Here is a list of LinkCrawler Rule types and simple examples of what they can be used for.
- DEEPDECRYPT: Auto-deep-scan URLs of websites which are not supported by a plugin
- REWRITE: Change URLs added to JD into other URLs (defined via "rewriteReplaceWith", see below)
- DIRECTHTTP: Make JD accept certain URLs as directly downloadable URLs, e.g. URLs that do not contain a file extension (see the sketch right after this list). Can also be used to make JD accept URLs containing unsupported/rare file extensions
- FOLLOWREDIRECT: Allows JD to accept unsupported URLs that simply redirect from website/location A to B
- SUBMITFORM: Allows JD to accept certain URLs and submit an HTTP form found inside the HTML code which matches a defined pattern
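For illustration, here is a minimal sketch of what such a rule could look like, using DIRECTHTTP as an example (the domain, pattern and name are made up; the full field structure is explained further below):

[ {
  "enabled" : true,
  "name" : "example directhttp rule",
  "pattern" : "https?://files\\.example\\.com/get/[a-zA-Z0-9]+",
  "rule" : "DIRECTHTTP"
} ]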
No matter which type of rule you use, JD will afterwards auto-grab URLs matching your defined "pattern" (see below), also via clipboard observation.
A basic knowledge of Regular Expressions is recommended before you get started.
You can easily test your regular expressions with the regex101.com online tool.
Our knowledgebase contains common examples, but if you need to create "more complicated" rules you may find examples in our support forum, and of course you can contact our staff if you get stuck.
Basic example of the structure of a LinkCrawler Rule:
"enabled" : true,
"cookies" : [ ["key", "value"] ],
"updateCookies" : true,
"logging" : false,
"maxDecryptDepth" : 1,
"id" : 1000001540111,
"name" : "example rule",
"pattern" : "https://(?:www\\.)?example\\.com/(.+)",
"rule" : "DEEPDECRYPT",
"packageNamePattern" : "<title>(.*?)</title>",
"passwordPattern" : null,
"formPattern" : null,
"deepPattern" : null,
"rewriteReplaceWith" : "https://example2.com/$1"
LinkCrawler Rules are stored as a JSON array.
Especially if you have multiple rules, it can be a good idea to use a JSON editor such as jsoneditoronline.org or jsonformatter.org to work on them.
JD will only allow you to add rules with a valid JSON structure!
Make sure that special characters like quotation marks are correctly escaped so that your JSON stays valid!
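As a sketch of what the complete setting value could look like with more than one rule (the domains and patterns below are made up), note how the array holds both rules and how rule 1 rewrites URLs that are then picked up by rule 2:

[ {
  "enabled" : true,
  "name" : "rewrite example",
  "pattern" : "https?://(?:www\\.)?example\\.com/old/(\\d+)",
  "rule" : "REWRITE",
  "rewriteReplaceWith" : "https://example.com/new/$1"
}, {
  "enabled" : true,
  "name" : "deepdecrypt example",
  "pattern" : "https?://(?:www\\.)?example\\.com/new/\\d+",
  "rule" : "DEEPDECRYPT",
  "packageNamePattern" : "<title>(.*?)</title>",
  "deepPattern" : null
} ]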
Explanation of all possible fields:
Depending on the type of your LinkCrawler rule, only some of these fields are required.
| Field | Data type / example | Explanation |
| --- | --- | --- |
| enabled | boolean | Enables/disables this rule. |
| cookies | array of [key, value] pairs, e.g. "cookies" : [ ["phpssid", "ffffffffffvoirg7ffffffffff"] ] | Here you can put in your personal cookies, e.g. login cookies of websites which JD otherwise fails to parse. If "updateCookies" is enabled, JD will update these with all cookies it receives from the website(s) that match "pattern". |
| updateCookies | boolean | If the target website returns new cookies, save them inside this rule and update the rule. |
| logging | boolean | Enable this for support purposes. Logs of your LinkCrawler Rules can be found in your JD install dir/logs/. |
| maxDecryptDepth | number | How many layers deep your rule should crawl (e.g. if the rule returns URLs matching the same rule again, how often is this chain allowed to happen?). |
| id | number | Auto-generated ID of the rule. |
| name | string | Name of the rule. |
| pattern | RegEx | This rule will be used for all URLs matching this pattern. |
| rule | string | Type of the rule: DEEPDECRYPT, REWRITE, DIRECTHTTP, FOLLOWREDIRECT or SUBMITFORM. |
| packageNamePattern | HTML RegEx or null | All URLs crawled by this rule will go into one package if the RegEx returns a result. |
| passwordPattern | HTML RegEx or null | Pattern to find extraction passwords. |
| formPattern | HTML RegEx or null | Find and submit the HTML form matching this pattern. |
| deepPattern | HTML RegEx or null | Which URLs this rule should pick up from the HTML code. null = auto-scan and return all supported URLs found in the HTML code. |
| rewriteReplaceWith | string | Pattern for the new URL. |
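As a final illustration of the fields above, a SUBMITFORM rule could be sketched roughly like this (the domain, "pattern" and "formPattern" are made up and would have to match the real URLs and HTML of the target site; note the escaped quotation marks inside "formPattern"):

[ {
  "enabled" : true,
  "name" : "submitform example",
  "pattern" : "https?://(?:www\\.)?example\\.org/confirm/[a-z0-9]+",
  "rule" : "SUBMITFORM",
  "formPattern" : "<form[^>]*id=\"download\"[^>]*>.*?</form>"
} ]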