How to collect links and download from unsupported websites
Posted by pspzockerscene psp, Last modified by pspzockerscene psp on 05 September 2022 02:55 PM

JDownloader has a lot of plugins for different websites.
These plugins are the reason why it is often easy to crawl and download links from a particular website.
However, sometimes an "easy" download is not possible because a JD plugin for the website you want to download from:

  • does not exist
  • does not exist and the website you're trying to crawl links from requires you to be logged in
  • exists but is broken
  • exists but there is no crawler plugin that can handle the specific link you want to add (e.g. "Crawl all items 'similar to' content xy")

If you want to download single video/audio streams of unsupported websites, please read this guide instead!


Link Crawler Rules

These rules can be used to teach JD how to handle links that match specific patterns, or what to look for inside the HTML behind such links.
If the website is simple and does not dynamically load content when scrolling down, you can try setting up one or more LinkCrawler Rules for that website.
LinkCrawler Rules can be hard to set up for beginners, so you may prefer the browser addons described below instead.
In some cases it even makes sense to combine LinkCrawler Rules with the methods described below.
Example: you want all pictures of a profile, and the profile page dynamically loads the links to the single pictures, but those links are not direct URLs. In that case you need a rule for the single-picture pages that extracts the final URLs, combined with the browser addons described below.
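To give a rough idea of what such a rule can look like, here is a sketch of a DEEPDECRYPT-style rule as it could be entered in JD's advanced settings. The domain and both regular expressions are made-up placeholders for illustration; they are not values from this article, and you will need to adapt them to the website you are working with:

```json
[
  {
    "enabled": true,
    "name": "Example: extract final image URLs from single-picture pages",
    "rule": "DEEPDECRYPT",
    "pattern": "https?://example\\.com/picture/\\d+",
    "deepPattern": "<img[^>]+src=\"(https?://[^\"]+\\.jpe?g)\""
  }
]
```

Here "pattern" decides which added links the rule applies to, and "deepPattern" is matched against the HTML of those pages to extract the final URLs.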


Collecting links with browser addons

These browser addons are especially helpful for websites that dynamically load more content as you scroll down in your browser.
If you want to collect links from such a website, scroll down until all items you want are visible, then use the browser addons described below.
Some websites remove links that are no longer visible in the current browser window.
In that case, collect all links you can currently see, scroll down until more are loaded, and continue like this until you reach the end of the page.
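Collecting links in several passes like this usually produces overlapping copies with many duplicate links. A small, hypothetical helper script (not part of JDownloader; the function name is our own) can merge such newline-separated dumps before you paste the result into JD:

```python
def merge_link_dumps(*dumps: str) -> list[str]:
    """Merge several newline-separated link dumps, dropping
    duplicates while preserving first-seen order."""
    seen = set()
    merged = []
    for dump in dumps:
        for line in dump.splitlines():
            link = line.strip()
            if link and link not in seen:
                seen.add(link)
                merged.append(link)
    return merged

if __name__ == "__main__":
    # Two overlapping copy runs from the same page
    first = "https://example.com/a\nhttps://example.com/b"
    second = "https://example.com/b\nhttps://example.com/c"
    print("\n".join(merge_link_dumps(first, second)))
```

JDownloader itself also detects duplicates when links are added, so this step is optional; it simply keeps your collected lists manageable.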

MyJDownloader

This is our own browser extension.
The main focus of our addon is to send single links to JD and to pre-set package properties such as the download path.
While it is good at that, it lacks the advanced link-collecting features of the other addons linked below.
You can, however, combine our addon with the ones described below.

Link Gopher | Open source

This open source addon helps with collecting and pre-filtering links.
Its documentation should be self-explanatory.

Linkclump | Open source

This open source addon helps with collecting and pre-filtering links.
Its most powerful feature is that it lets you select links after pressing a specific hotkey.
By default, hold the Z key and, at the same time, press and hold the left mouse button to select the links in the highlighted area.
Important: By default, the addon opens all results in new tabs. That is not what we want; we want to copy all results to the clipboard.

To do that, go into the addon's Options and either add a new action or edit the default one.
In these instructions we'll simply edit the default action so that all results are copied to the clipboard.
Screenshot of modified default action settings:

Screenshot of how it should look in the end:



Attachments 
 
 Linkclump_Options_Actions.png (10.40 KB)
 Linkclump_Options_Actions_Edit.png (36.02 KB)
 Logo_myjd.png (16.94 KB)
 Logo_Link_Gopher.png (14.05 KB)
 Logo_Linkclump.png (6.77 KB)