Set webhook_url in [notifications]. Any HTTP POST endpoint works: Slack incoming webhooks, Discord webhooks, ntfy.sh topic URLs, etc.
```toml
[notifications]
webhook_url = "https://ntfy.sh/my-topic"
on_failure = true
on_recovery = true
on_backoff = false
```

Payload: `{"event": "failure"|"recovery"|"backoff", "mount": "name", "error": "...", "timestamp": "..."}`
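The payload above can be built and delivered with nothing but the standard library. A minimal sketch, assuming the JSON body shown; the function names are illustrative, not the daemon's internal API:

```python
import json
import urllib.request
from datetime import datetime, timezone

def build_payload(event: str, mount: str, error: str = "") -> dict:
    # Mirrors the documented payload shape: event, mount, error, timestamp.
    return {
        "event": event,  # "failure" | "recovery" | "backoff"
        "mount": mount,
        "error": error,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

def send_webhook(url: str, payload: dict, timeout: float = 5.0) -> int:
    # Plain HTTP POST with a JSON body; works for Slack, Discord, ntfy.sh.
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return resp.status
```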
GET /metrics returns plain text in Prometheus exposition format. Metrics include:
- `sshfs_keeper_mount_healthy{name}` — 1 if healthy
- `sshfs_keeper_mount_count{name}` — total successful mounts
- `sshfs_keeper_mount_retry_count{name}` — current retries
- `sshfs_keeper_mount_duration_seconds{name}` — last mount duration
- `sshfs_keeper_sync_run_count{name}` — total rsync runs
- `sshfs_keeper_sync_bytes_sent{name}` — bytes sent last run
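For scripting outside Prometheus, the exposition text is simple enough to parse by hand. A rough sketch that skips `# HELP`/`# TYPE` comment lines (the sample input is illustrative):

```python
def parse_metrics(text: str) -> dict:
    """Parse Prometheus exposition lines like
    sshfs_keeper_mount_healthy{name="nas"} 1 into a flat dict."""
    out = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip HELP/TYPE comments and blank lines
        key, _, value = line.rpartition(" ")
        try:
            out[key] = float(value)
        except ValueError:
            pass  # tolerate malformed lines
    return out

sample = """\
# TYPE sshfs_keeper_mount_healthy gauge
sshfs_keeper_mount_healthy{name="nas"} 1
sshfs_keeper_mount_retry_count{name="nas"} 0
"""
metrics = parse_metrics(sample)
```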
The dashboard subscribes to `GET /api/events` (Server-Sent Events). When a mount's status changes the server pushes an event and the browser reloads the page. The SSE indicator dot in the header turns green when connected.
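Clients other than the browser can consume the same stream. A minimal SSE field parser sketch; the `event`/`data` content below is illustrative, the daemon's actual event payloads may differ:

```python
def parse_sse(stream: str) -> list[dict]:
    """Split raw SSE text into events: each event is a dict of
    field -> value; events are separated by blank lines."""
    events, current = [], {}
    for line in stream.splitlines():
        if not line.strip():  # blank line terminates an event
            if current:
                events.append(current)
                current = {}
            continue
        field, _, value = line.partition(":")
        current[field.strip()] = value.strip()
    if current:
        events.append(current)
    return events

raw = 'event: status\ndata: {"mount": "nas", "healthy": true}\n\n'
events = parse_sse(raw)
```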
Add `identity_passphrase` to the mount config. The daemon calls `ssh-add` before each mount attempt.

```toml
[[mount]]
name = "nas"
remote = "user@host:/path"
local = "/mnt/nas"
identity = "/home/user/.config/sshfs-keeper/keys/id_ed25519"
identity_passphrase = "my passphrase"
```

Note: This stores the passphrase in plaintext in config.toml — use file permissions (600) to restrict access.
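Since the passphrase sits in plaintext, the 600 permissions mentioned above are worth enforcing programmatically. A small sketch; the `ensure_private` helper is illustrative, not part of sshfs-keeper:

```python
import os
import stat
import tempfile

def ensure_private(path: str) -> None:
    """Tighten a secrets file to 0600 if group/other have any access."""
    mode = stat.S_IMODE(os.stat(path).st_mode)
    if mode & (stat.S_IRWXG | stat.S_IRWXO):
        os.chmod(path, 0o600)

# Demo on a throwaway file (the real target would be config.toml):
fd, demo = tempfile.mkstemp()
os.close(fd)
os.chmod(demo, 0o644)
ensure_private(demo)
demo_mode = stat.S_IMODE(os.stat(demo).st_mode)
os.unlink(demo)
```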
Instead of embedding `user@host` in every mount/sync, define hosts once as `[[host]]` entries and reference them by name:

```toml
[[host]]
name = "nas"
hostname = "192.168.1.10"
user = "miro"
port = 22
identity = "/home/miro/.config/sshfs-keeper/keys/nas_rsa"

[[mount]]
name = "photos"
host_name = "nas"  # references the host above
path = "/media/photos"
local = "/mnt/photos"
```

Benefits:

- Single source of truth for each host (hostname, user, port, identity)
- Web UI shows a host dropdown + path picker instead of a free-text `user@host:/path` input
- File browser: click "Browse" next to the path input to navigate remote directories
- Share hosts across multiple mounts/syncs without retyping credentials
The web UI Hosts tab lets you add, edit, and delete hosts. Mounts and syncs can only reference a host if they explicitly set `host_name`.
Automatic migration: when the daemon loads an old-format config (embedded `user@host:/path` strings and no explicit hosts), it:

- Parses each `remote`/`source`/`target` string to extract `user@hostname`
- Creates a `HostConfig` entry for each unique host
- Updates mounts/syncs to set the `host_name` and `path` fields
- Saves the migrated config back to disk on the next write
No manual action needed — your old config.toml continues to work. The first time you edit a mount or add a new one via the web UI, the new host-based format is written to disk. Old configs that haven't been edited still load and work fine with the free-text remote string.
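The parsing step boils down to splitting the remote spec. A sketch under that assumption; `split_remote` is an illustrative name, not the daemon's actual parser:

```python
import re

def split_remote(remote: str):
    """Split 'user@hostname:/path' into (user, hostname, path).
    The user part is optional: 'host:/path' yields user=None."""
    m = re.fullmatch(r"(?:(?P<user>[^@]+)@)?(?P<host>[^:]+):(?P<path>.*)", remote)
    if not m:
        raise ValueError(f"not a remote spec: {remote!r}")
    return m["user"], m["host"], m["path"]
```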
Example:

```toml
# Old config (still works, auto-migrates on load)
[[mount]]
name = "nas"
remote = "miro@nas:/media"
local = "/mnt/nas"
```

```toml
# After daemon restart (or web UI edit), config.toml becomes:
[[host]]
name = "nas"
hostname = "nas"
user = "miro"

[[mount]]
name = "nas"
host_name = "nas"
path = "/media"
remote = "miro@nas:/media"  # preserved for reference
local = "/mnt/nas"
```

`sshfs-keeper reload` sends SIGHUP to the running daemon. It adds new mounts and removes deleted ones, while preserving runtime state (retry counts, backoff) for existing mounts.
Two defensive layers protect your config from accidental data loss:
- `save()` guard — if code calls `save()` with 0 mounts but the on-disk config has mounts, the write is refused and an ERROR with a stack trace is logged. This prevents bugs or half-installed package code from overwriting your config.
- `load()` auto-restore — if the config file contains 0 mounts but the backup (`config.toml.bak`) has mounts, the daemon automatically restores from the backup at startup and logs a WARNING.
Why this matters: during a package reinstall, the service could write the config using broken or partially installed code, producing an empty config. The install hook now stops the service before install/reinstall, and these guards catch anything that still slips through.
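The `save()` guard reduces to a single comparison. A sketch of the rule, not the actual implementation:

```python
def safe_to_save(new_mounts: list, on_disk_mounts: list) -> bool:
    """The save() guard in miniature: refuse to write an empty mount
    list over a config that currently has mounts."""
    if not new_mounts and on_disk_mounts:
        return False  # would wipe user data; caller logs an ERROR instead
    return True
```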
Check logs if you see `[ERROR]` in the config load output:

```bash
journalctl --user -u sshfs-keeper -n 20 | grep -i "wipe\|restore\|guard"
```

Set `mount_tool = "rclone"` in the mount config. The `remote` field accepts either SSH format (`user@host:/path`, auto-converted to `:sftp,host=…,user=…:path`) or a pre-configured rclone remote (`myremote:/path`):
```toml
[[mount]]
name = "nas"
remote = "miro@192.168.1.10:/media/hdd"
local = "/mnt/nas"
mount_tool = "rclone"
```

rclone must be installed and `--allow-other` must be permitted in /etc/fuse.conf. This is the recommended backend for macOS and Windows (requires WinFsp).
Set `sync_tool = "lsyncd"` in the sync config. lsyncd is invoked with `--oneshot` so it performs one sync pass and exits, matching the interval-based scheduler:

```toml
[[sync]]
name = "mirror"
source = "/local/data"
target = "user@host:/remote/data"
interval = 300
sync_tool = "lsyncd"
```

lsyncd must be installed. For local-to-local syncs `default.rsync` is used; for remote targets, `default.rsyncssh`.
Set the `targets` array in the sync config to sync the same source to multiple destinations:

```toml
[[sync]]
name = "backup"
source = "/home/user/data"
target = "backup1@host1:/backups/data"
targets = ["backup2@host2:/backups/data", "backup3@host3:/backups/data"]
interval = 3600
sync_tool = "rsync"
```

Each target is synced sequentially. The sync card shows combined metrics (total bytes, files transferred), and `fail_count` tracks failures across all targets. In the web UI, click Edit on a sync card to add more "Additional destinations".
`sshfs-keeper install-service` detects your OS and writes the appropriate service file:

- Linux: `~/.config/systemd/user/sshfs-keeper.service` → `systemctl --user enable --now sshfs-keeper`
- macOS: `~/Library/LaunchAgents/com.sshfs-keeper.plist` → `launchctl load ~/Library/LaunchAgents/com.sshfs-keeper.plist`
- Windows: `%APPDATA%\sshfs-keeper\install-service.bat` (requires NSSM)
If /proc/mounts contains an autofs entry covering the mount point (or any parent directory), sshfs-keeper skips its own remount logic and marks the mount HEALTHY. autofs handles on-demand mounting; keeper and autofs would conflict if both tried to remount.
When a sync job fails, the consecutive fail_count is incremented. Once fail_count >= max_retries (default 3 from [daemon]), the next retry is scheduled at backoff_base * 2^(fail_count - max_retries) seconds instead of the full interval. On success, fail_count resets to 0 and normal scheduling resumes.
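The scheduling rule above can be written down directly. The defaults below mirror those stated in the text; `interval = 300` is an assumed example value:

```python
def next_delay(fail_count: int, max_retries: int = 3,
               backoff_base: int = 60, interval: int = 300) -> int:
    """Delay before the next sync attempt: the normal interval until
    max_retries consecutive failures, then exponential backoff."""
    if fail_count < max_retries:
        return interval
    return backoff_base * 2 ** (fail_count - max_retries)
```

For example, with the defaults, the fourth consecutive failure retries after 60s, the fifth after 120s, the sixth after 240s.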
The fail_count is exposed in GET /api/syncs (snapshot) so the dashboard can show it.
While a sync is running, the sync card shows a live progress bar (0-100%), elapsed time, and the most recent progress line (e.g. "transferred 42% — 123.4M" from rsync).
After the sync completes, click the 📋 Log button on a sync card in the web UI, or:

```bash
curl http://localhost:8765/api/syncs/<name>/log
```

Returns the last 50 lines of stdout+stderr from the most recent run. The Sync tab auto-refreshes every 3 seconds while a sync is active, so progress updates live.
The dashboard automatically shows a usage bar on healthy mounts. It calls os.statvfs() on the local mount point. If the mount point is inaccessible the bar is hidden.
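The usage computation via `os.statvfs()` can be sketched as follows; the dashboard's exact formula may differ (e.g. `f_bavail` vs `f_bfree`):

```python
import os

def usage_percent(path: str) -> float:
    """Percentage of the filesystem at `path` in use, from statvfs."""
    st = os.statvfs(path)
    total = st.f_blocks * st.f_frsize   # total bytes
    free = st.f_bavail * st.f_frsize    # bytes available to non-root
    return 100.0 * (total - free) / total if total else 0.0

pct = usage_percent("/")
```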
~/.config/sshfs-keeper/daemon.pid — written at startup, removed on shutdown.
Set log_file in [daemon]. Uses a 5 MB rotating file handler with 3 backups.
```toml
[daemon]
log_file = "/var/log/sshfs-keeper.log"
```

Yes — the `[notifications]` block is always written to config.toml. Previously it was only written when `webhook_url` was set, silently losing `on_failure = false` etc. on the next save.
```bash
curl http://localhost:8765/api/version
# {"version": "0.1.0"}
```

Dark mode is the default. The 🌙/🌕 button in the header toggles light mode. The preference is saved in localStorage and persists across sessions.
When a mount uses `mount_tool = "rclone"` (instead of the default sshfs), the card header shows an rclone badge in addition to the health status badge.
After max_retries consecutive failures (default: 3), the sync manager applies exponential backoff: backoff_base * 2^(fail_count - max_retries) seconds between retries (default base: 60s). The fail count resets to 0 on the first successful run.
GET /api/syncs returns {"syncs": [...]} (not a bare list). The earlier bare-list variant was a duplicate route that was silently shadowed by FastAPI — it has been removed.
Root cause: `AppConfig.save()` previously used `path.write_text()`, which is not atomic. If systemd sends SIGKILL (after the SIGTERM timeout), the write can be interrupted mid-way, leaving a truncated config.toml with 0 mounts.
Fix (deployed in commit d1caa14): `save()` now writes to a sibling `.tmp` file, calls `os.fsync()`, then `os.replace()`, which is atomic on Linux. It also keeps a `.bak` of the previous config. Combined with `TimeoutStopSec=30` in the systemd unit, this prevents corruption.
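The write-temp/fsync/replace pattern from the fix can be sketched as below (simplified; the real `save()` also serializes TOML):

```python
import os
import tempfile

def atomic_write(path: str, data: str) -> None:
    """Write-to-temp + fsync + rename, keeping a .bak of the previous
    file. os.replace() is atomic on POSIX, so readers never see a
    torn file."""
    if os.path.exists(path):
        # Keep the previous version as a restore point.
        with open(path) as old, open(path + ".bak", "w") as bak:
            bak.write(old.read())
    tmp = path + ".tmp"
    with open(tmp, "w") as fh:
        fh.write(data)
        fh.flush()
        os.fsync(fh.fileno())  # force bytes to disk before the swap
    os.replace(tmp, path)      # atomic swap

# Demo on a throwaway file:
d = tempfile.mkdtemp()
cfg = os.path.join(d, "config.toml")
atomic_write(cfg, "version = 1\n")
atomic_write(cfg, "version = 2\n")
with open(cfg) as fh:
    current = fh.read()
with open(cfg + ".bak") as fh:
    backup = fh.read()
```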
`uv tool install` builds a wheel and copies files to `~/.local/share/uv/tools/sshfs-keeper/`. It may use a cached build. Always run with `--no-cache` after changing source:

```bash
uv tool install --force . --no-cache
```