What does the Folder Watch addon do?
Folder Watch is an addon that can be installed and enabled via Settings -> Extensions (scroll down) -> Folder Watch.
Folder Watch lets JD monitor one or more folders for special .crawljob files and for containers such as .DLC files.
This is useful if, for example, you are running JD on a server and want it to process all URLs added this way without any further user interaction in JD.
Once the Folder Watch addon is activated, JD by default monitors the folder [JD Install dir]/folderwatch every 1000 ms.
Processed files are moved to a subfolder of the watched folder called "added", e.g. [JD Install dir]/folderwatch/added.
To get started, try this simple test using a .DLC container:
1. Install and enable Folder Watch.
2. Open JDownloader and export some added URLs as a .DLC container via Rightclick -> Other -> Create DLC.
3. Delete the previously exported URLs in JD and move the created .DLC file into the default "folderwatch" folder mentioned above.
4. After a few seconds, the URLs inside your .DLC container will appear in the LinkGrabber and the .DLC container will be moved to [JD Install dir]/folderwatch/added.
If you want to use the full potential of Folder Watch, continue reading!
.crawljob files and how to use them:
Using .crawljob files, you can tell JD not only which URLs to add but also how to process them.
The .crawljob files explained below can also be used without the watched folder, e.g. to add URLs once with a custom package name, download path and so on!
You can add .crawljob files to the LinkGrabber just like .DLC containers or normal URLs. The FolderWatch extension has to be enabled for this to work, though!
Here is an overview of all possible crawljob fields. UNSET means the existing global setting will be used; you may also omit a field entirely instead of using the value UNSET.
- Text containing the URL(s) to add
- Enable/disable the items added via this crawljob
- Password required to add the URL, e.g. a folder password
- Download path
- Max. number of connections per download:
  chunks=0 | use the globally defined limit
  chunks=3 | try to force 3 connections
  Use this only if the text field contains a single URL!
- Enforce immediate download start?
- Auto-start downloads after this item has been added to the LinkGrabber?
- Automatically move added item(s) to the download list
- Should properties set via this crawljob overwrite packagizer rules?
- Deep-analyze URLs added via this crawljob? Useful for URLs of websites that are not supported by a JD plugin, e.g. pages whose HTML contains further (directly downloadable) URLs.
- List of possible extraction passwords
There are two allowed crawljob formats. Both are used by saving the data as a .crawljob file and moving that file into the folder JD is watching.
Format 1: Text format. A file in this format can hold only one crawljob:
The above example is attached to this article as example1.crawljob!
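As an illustration (not a copy of the attached file), a minimal text-format crawljob might look like this, assuming the simple key=value layout; all values are placeholders:

```
text=https://example.com/file.zip
packageName=MyPackage
downloadFolder=/home/user/downloads
enabled=TRUE
autoStart=FALSE
chunks=0
```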
Format 2: json format: You can put multiple crawljobs into one file:
The above example is attached to this article as example1_json.crawljob!
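As an illustration (again placeholder values, not the attached file), a JSON-format file carrying two crawljobs might look like:

```json
[
  {
    "text": "https://example.com/file1.zip",
    "packageName": "Package1",
    "enabled": "TRUE"
  },
  {
    "text": "https://example.com/file2.zip",
    "packageName": "Package2",
    "autoStart": "TRUE"
  }
]
```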
Please keep in mind that syntax errors in your JSON will cause processing of your crawljob to fail. The file will be moved to the "added" folder anyway!
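One way to catch such syntax errors before JD picks the file up is to run it through any JSON parser first. A quick sketch in Python (the file name is illustrative):

```python
import json

def is_valid_json(path: str) -> bool:
    """Return True if the file at `path` parses as JSON, False otherwise."""
    try:
        with open(path, encoding="utf-8") as f:
            json.load(f)  # raises json.JSONDecodeError on syntax errors
        return True
    except (OSError, json.JSONDecodeError):
        return False

# Example: is_valid_json("myjobs.crawljob")
```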
Save one of the above variants as a .crawljob file, or see the attached files.
What does the above example do?
You can guess most of it by looking at the LinkGrabber tab in JD once the URL gets added, but to make it clear, here is a screenshot with an explanation below:
- Adds URL
- Sets two possible extraction passwords on all added items: "Password1" and "
- Disables extraction
- Sets download password "
- Enables added item(s)
- Sets the package name to
- Sets the filename to
- Enables auto confirm and auto start -> added item(s) will be moved to the download list and downloads will start after a few seconds
- Sets the download folder to