# crawljob-api

A lightweight REST API written in Go that generates `.crawljob` files for JDownloader. Drop in a download URL, get a `.crawljob` file that JDownloader picks up automatically.
Built to run as a Docker container.
A web interface is available at `/` and `/downloads`. It offers a simple text field and one-click action to submit a download URL. The `/downloads` page lists all files in the download directory and lets you download them directly from the browser.
HTML and CSS courtesy of Claude; the purpose of this project is to write the API, not to create web interfaces.
## How it works

- You send a `POST /jobs` request with a download URL
- The API validates the URL (scheme, allowed domains)
- A `.crawljob` file is generated and dropped into a watched folder (see the sample below)
- JDownloader picks it up and starts the download automatically
- Query `GET /api/files` to list completed downloads
- Retrieve a specific file with `GET /download?filename=<name>`
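The README doesn't reproduce a generated job file, and the exact fields are up to `model/crawljob.go`, but JDownloader's folder-watch format is a plain key=value file, so the output plausibly resembles the sketch below (field names follow JDownloader's documented folder-watch keys; treat the specifics as an assumption):

```
text=https://1fichier.com/yourfile
packageName=yourfile
enabled=TRUE
autoStart=TRUE
```

JDownloader watches the folder configured via `CRAWLJOB_FOLDER` and enqueues any job files it finds there.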
## Run with Docker

```sh
docker run -d \
  -p 8080:8080 \
  -e CRAWLJOB_FOLDER=/mnt/crawljobs \
  -e ENABLE_PURGE=true \
  -e PURGE_FILES_AGE_IN_HOURS=48 \
  -v /your/download/path:/mnt/downloads \
  -v /your/crawljob/path:/mnt/crawljobs \
  ghcr.io/frostbyte0x/crawljob-api:latest
```

This starts the web server on port 8080.
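Once the container is up, a quick sanity check from the host (assuming the default port mapping above; the endpoint is documented below):

```sh
curl -i http://localhost:8080/api/files
```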
## Run locally

```sh
git clone https://github.com/FrostByte0x/crawljob-api
cd crawljob-api
go run main.go
```

## Configuration

| Variable | Description | Default |
|---|---|---|
| `CRAWLJOB_FOLDER` | Folder watched by JDownloader | `.` (current dir) |
| `ALLOWED_DOMAINS` | Allowed download domains | `1fichier.com,mega.nz` |
| `ENABLE_PURGE` | Enable the background purge job | `false` |
| `PURGE_FILES_AGE_IN_HOURS` | Delete files older than N hours (requires `ENABLE_PURGE=true`) | `24` |
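When running from source, the same settings are plain environment variables; for example (paths and values are illustrative):

```sh
CRAWLJOB_FOLDER=/tmp/crawljobs ENABLE_PURGE=true PURGE_FILES_AGE_IN_HOURS=24 go run main.go
```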
## API

### POST /jobs

Submit a download URL.

**Request Body**

```json
{
  "url": "https://1fichier.com/yourfile"
}
```

**Responses**

| Code | Description |
|---|---|
| `201 Created` | Job file successfully created |
| `400 Bad Request` | Invalid URL or malformed body |
| `405 Method Not Allowed` | Only POST is accepted |
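From the command line, a submission looks like this (assuming the server from the Docker example above, on `localhost:8080`):

```sh
# Expect 201 Created on success; 400 for a disallowed domain or malformed body:
curl -i -X POST http://localhost:8080/jobs \
  -H "Content-Type: application/json" \
  -d '{"url": "https://1fichier.com/yourfile"}'
```

A `201 Created` response means the `.crawljob` file was written to `CRAWLJOB_FOLDER`.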
### GET /api/files

List all files and directories in the download folder.

**Response Body**

```json
[
  {
    "Name": "movie.mkv",
    "Type": "file",
    "Extension": ".MKV",
    "Size": "4.2 GB"
  },
  {
    "Name": "archive",
    "Type": "dir",
    "Extension": "DIR",
    "Size": "0 B"
  }
]
```

**Responses**

| Code | Description |
|---|---|
| `200 OK` | JSON array of files returned |
| `403 Forbidden` | Download folder cannot be accessed |
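Listing is a plain GET:

```sh
# Returns the download folder contents as JSON (pipe through jq to pretty-print):
curl -s http://localhost:8080/api/files
```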
### GET /download

Stream a file from the download folder to the client.

**Query Parameters**

| Parameter | Description |
|---|---|
| `filename` | Name of the file to download (must be within the download directory) |

**Responses**

| Code | Description |
|---|---|
| `200 OK` | File streamed as attachment |
| `403 Forbidden` | Path traversal attempt or folder inaccessible |
| `404 Not Found` | No filename provided |
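For example (since the file is sent as an attachment, `-OJ` tells curl to save it under the name from the Content-Disposition header rather than the query string):

```sh
# Fetch a file by name:
curl -OJ "http://localhost:8080/download?filename=movie.mkv"

# Attempts to escape the download directory are rejected with 403:
curl -i "http://localhost:8080/download?filename=../etc/passwd"
```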
### Folder download

Stream a folder as a `.zip` archive to the client. Files are stored uncompressed (zip Store method), which keeps the archive streamable without spending CPU recompressing large, typically already-compressed files.

**Query Parameters**

| Parameter | Description |
|---|---|
| `folder` | Name of the folder to download (must be within the download directory) |

**Responses**

| Code | Description |
|---|---|
| `200 OK` | Folder streamed as a `.zip` attachment |
| `403 Forbidden` | Path traversal attempt detected |
| `404 Not Found` | Folder does not exist |
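The route path for this endpoint isn't reproduced in this README (it lives in `handler/download.go`), so `<folder-route>` below is a placeholder; the `folder` query parameter is the documented part. Because entries use the Store method, `unzip -v` reports them as "Stored":

```sh
# <folder-route> is a placeholder for the actual endpoint path:
curl -OJ "http://localhost:8080/<folder-route>?folder=archive"

# Verify the entries are uncompressed:
unzip -v archive.zip
```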
## Allowed domains

Downloads are currently restricted to:

- `1fichier.com`
- `mega.nz`

This can be changed via the `ALLOWED_DOMAINS` environment variable (set in the Dockerfile or at container start). Contact the server owner, or set your own domain list, to extend it.
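For example, to allow one extra domain when starting the container (the added domain is illustrative):

```sh
docker run -d -p 8080:8080 \
  -e ALLOWED_DOMAINS=1fichier.com,mega.nz,example.com \
  ghcr.io/frostbyte0x/crawljob-api:latest
```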
## Project structure

```
crawljob-api/
├── main.go                 # Server entrypoint
├── handler/
│   ├── job.go              # HTTP handler for POST /jobs
│   ├── download_ui.go      # HTTP handler for /downloads (web interface)
│   ├── validator.go        # URL validation
│   ├── download.go         # File listing, file download, and folder zip download
│   └── ui.go               # HTTP handler for / (web interface)
├── jobs/
│   └── purge.go            # Background purge job (deletes old files)
├── model/
│   ├── crawljob.go         # CrawlJob model + file generation
│   └── utils.go            # Helpers
└── Dockerfile
```

## License

MIT