crawljob-api

A lightweight REST API written in Go that generates .crawljob files for JDownloader. Drop a download URL, get a crawljob file picked up automatically by JDownloader.

Built to run as a Docker container.


Web Interface

A web interface is available at / and /downloads. It offers a simple text field and one-click action to submit a download URL. The /downloads page lists all files in the download directory and lets you download them directly from the browser.

HTML and CSS courtesy of Claude; the purpose of this project is to write the API, not create web interfaces.

How it works

  1. You send a POST /jobs request with a download URL
  2. The API validates the URL (scheme, allowed domains)
  3. A .crawljob file is generated and dropped into a watched folder
  4. JDownloader picks it up and starts the download automatically
  5. Query GET /api/files to list completed downloads
  6. Retrieve a specific file with GET /download?filename=<name>

Getting Started

Run with Docker

```shell
docker run -d \
  -p 8080:8080 \
  -e CRAWLJOB_FOLDER=/mnt/crawljobs \
  -e ENABLE_PURGE=true \
  -e PURGE_FILES_AGE_IN_HOURS=48 \
  -v /your/download/path:/mnt/downloads \
  -v /your/crawljob/path:/mnt/crawljobs \
  ghcr.io/frostbyte0x/crawljob-api:latest
```

Build locally

This will start the web server on port 8080:

```shell
git clone https://github.com/FrostByte0x/crawljob-api
cd crawljob-api
go run main.go
```

Configuration

| Variable | Description | Default |
| --- | --- | --- |
| `CRAWLJOB_FOLDER` | Folder watched by JDownloader | `.` (current dir) |
| `ALLOWED_DOMAINS` | Allowed download domains | `1fichier.com,mega.nz` |
| `ENABLE_PURGE` | Enable the background purge job | `false` |
| `PURGE_FILES_AGE_IN_HOURS` | Delete files older than N hours (requires `ENABLE_PURGE=true`) | `24` |

API Reference

POST /jobs

Submit a download URL.

Request Body

```json
{
  "url": "https://1fichier.com/yourfile"
}
```

Responses

| Code | Description |
| --- | --- |
| 201 Created | Job file successfully created |
| 400 Bad Request | Invalid URL or malformed body |
| 405 Method Not Allowed | Only POST is accepted |

GET /api/files

List all files and directories in the download folder.

Response Body

```json
[
  {
    "Name": "movie.mkv",
    "Type": "file",
    "Extension": ".MKV",
    "Size": "4.2 GB"
  },
  {
    "Name": "archive",
    "Type": "dir",
    "Extension": "DIR",
    "Size": "0 B"
  }
]
```

Responses

| Code | Description |
| --- | --- |
| 200 OK | JSON array of files returned |
| 403 Forbidden | Download folder cannot be accessed |
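
On the client side, the response can be decoded into a struct whose fields mirror the capitalised keys in the sample body above; `fileEntry` and `parseFiles` are illustrative names.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// fileEntry matches the keys returned by GET /api/files.
type fileEntry struct {
	Name      string
	Type      string // "file" or "dir"
	Extension string // e.g. ".MKV", or "DIR" for directories
	Size      string // human-readable, e.g. "4.2 GB"
}

// parseFiles decodes the JSON array returned by the endpoint.
func parseFiles(payload []byte) ([]fileEntry, error) {
	var entries []fileEntry
	err := json.Unmarshal(payload, &entries)
	return entries, err
}

func main() {
	sample := []byte(`[{"Name":"movie.mkv","Type":"file","Extension":".MKV","Size":"4.2 GB"}]`)
	entries, err := parseFiles(sample)
	if err != nil {
		panic(err)
	}
	fmt.Println(entries[0].Name) // movie.mkv
}
```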

GET /download?filename=<name>

Stream a file from the download folder to the client.

Query Parameters

| Parameter | Description |
| --- | --- |
| `filename` | Name of the file to download (must be within the download directory) |

Responses

| Code | Description |
| --- | --- |
| 200 OK | File streamed as attachment |
| 403 Forbidden | Path traversal attempt or folder inaccessible |
| 404 Not Found | No filename provided |
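
One common way to implement the 403 path-traversal guard is to resolve the requested name inside the download root and reject anything that escapes it. This is a sketch of the technique, not necessarily the project's exact check; `safeJoin` is an illustrative name.

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// safeJoin resolves name inside root and reports whether the result stays
// within the download directory (the 403 case above rejects escapes).
func safeJoin(root, name string) (string, bool) {
	path := filepath.Join(root, name) // Join also runs filepath.Clean
	rel, err := filepath.Rel(root, path)
	if err != nil || rel == ".." || strings.HasPrefix(rel, ".."+string(filepath.Separator)) {
		return "", false
	}
	return path, true
}

func main() {
	if p, ok := safeJoin("/mnt/downloads", "movie.mkv"); ok {
		fmt.Println("serving", p)
	}
	if _, ok := safeJoin("/mnt/downloads", "../../etc/passwd"); !ok {
		fmt.Println("rejected traversal attempt")
	}
}
```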

GET /download/folder?folder=<name>

Stream a folder as a .zip archive to the client. Files are stored uncompressed (zip Store method).

Query Parameters

| Parameter | Description |
| --- | --- |
| `folder` | Name of the folder to download (must be within the download directory) |

Responses

| Code | Description |
| --- | --- |
| 200 OK | Folder streamed as a .zip attachment |
| 403 Forbidden | Path traversal attempt detected |
| 404 Not Found | Folder does not exist |

Allowed Domains

The list can be changed via the ALLOWED_DOMAINS environment variable (set in the Dockerfile or passed at `docker run` time).

Currently restricted to:

  • 1fichier.com
  • mega.nz

Contact the server owner or set your own domain list to extend this.
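
The scheme and domain checks from the validation step could be sketched as below; `allowed` is an illustrative name, not necessarily the check in `handler/validator.go`.

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// allowed reports whether raw is an http(s) URL whose host matches one of
// the configured domains (exact match or subdomain).
func allowed(raw string, domains []string) bool {
	u, err := url.Parse(raw)
	if err != nil || (u.Scheme != "http" && u.Scheme != "https") {
		return false
	}
	host := strings.ToLower(u.Hostname())
	for _, d := range domains {
		if host == d || strings.HasSuffix(host, "."+d) {
			return true
		}
	}
	return false
}

func main() {
	// Same comma-separated format as the ALLOWED_DOMAINS variable.
	domains := strings.Split("1fichier.com,mega.nz", ",")
	fmt.Println(allowed("https://1fichier.com/yourfile", domains)) // true
	fmt.Println(allowed("https://evil.example/file", domains))     // false
}
```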


Project Structure

```
crawljob-api/
├── main.go             # Server entrypoint
├── handler/
│   ├── job.go          # HTTP handler
│   ├── download_ui.go  # HTTP handler for /downloads (web interface)
│   ├── validator.go    # URL validation
│   ├── download.go     # File listing, file download, and folder zip download
│   └── ui.go           # HTTP handler for / (web interface)
├── jobs/
│   └── purge.go        # Background purge job (deletes old files)
├── model/
│   ├── crawljob.go     # CrawlJob model + file generation
│   └── utils.go        # Helpers
└── Dockerfile
```

License

MIT
