monitor_aztec_prover.sh installs and removes a Telegram monitoring agent for an Aztec prover stack. It does not install or manage the prover node itself — only monitoring (see Aztec — running a prover). The same script also offers Aztec/Prover version and container status checks.
The agent parses Node, Broker, and Agent logs (from Docker containers prover-node, prover-broker, prover-agent, or from log files you configure). It sends a formatted report to Telegram on a schedule (default: every 5 minutes) and can optionally merge in Prometheus metrics if PROVER_METRICS_URL is set.
- 📨 Install / remove the monitoring agent (interactive menu; EN / RU / TR)
- 🤖 Validates Telegram bot token and chat ID(s) before saving
- 📡 Summaries for Node, Broker, and Agent: errors, warnings, L2/sync hints, job stats, optional metrics block
- 🖥 Hostname, public IP, recent `ERROR:` lines, last L1 publish tx link (mainnet / Sepolia via `NETWORK`)
- ⏱ systemd timer (`prover-monitor-agent.timer`) when installed as root; otherwise only `agent.sh` is created
- 📁 Optional: read logs from files instead of `docker logs` (`PROVER_*_LOG_FILE`)
- 📋 View logs for `prover-node`, `prover-broker`, or `prover-agent` (`docker logs --tail 500 -f` from an interactive submenu; requires Docker)
- 🔢 Check Aztec/Prover version — runs the CLI `--version` inside a running prover container (requires Docker)
- 📦 Check containers — shows whether `prover-node`, `prover-broker`, and `prover-agent` are running; if the node is up, scans recent logs for sync-related lines
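As a rough illustration of the summary step, here is a minimal sketch of counting `ERROR`/`WARN` lines from captured log output. The sample log and the exact match patterns are assumptions for illustration, not the script's actual parsing:

```shell
# Hypothetical sketch of the ERROR/WARN counting behind the per-service summary.
# The sample log below is invented; the real agent reads `docker logs`
# (or the PROVER_*_LOG_FILE files) and its patterns may differ.
LOG=$(cat <<'EOF'
2024-01-01T00:00:00 INFO  prover-node started
2024-01-01T00:00:01 ERROR failed to reach broker
2024-01-01T00:00:02 WARN  slow L1 response
2024-01-01T00:00:03 ERROR job timed out
EOF
)
errors=$(printf '%s\n' "$LOG" | grep -c 'ERROR')
warns=$(printf '%s\n' "$LOG" | grep -c 'WARN')
echo "errors=$errors warns=$warns"
```

With real input you would pipe `docker logs --tail N prover-node` (or `tail -n N "$PROVER_NODE_LOG_FILE"`) into the same counters.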
| Area | Description |
|---|---|
| Config file | $HOME/.env-aztec-prover — token, chat IDs, optional paths and metrics URL |
| Agent directory | PROVER_MONITOR_AGENT_DIR (default $HOME/aztec-prover-agent) — contains agent.sh and agent.log |
| Containers | Detects prover-node, prover-broker, prover-agent (or uses log files if set) |
| Metrics | If PROVER_METRICS_URL points to a Prometheus scrape page, adds L2 height, rewards, balance, etc. |
| Timer | OnBootSec=120, then every 300s — same as in-script reports |
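The timer semantics in the table (`OnBootSec=120`, then every 300 s) correspond to a unit along these lines — a sketch with assumed paths and descriptions, not necessarily the file the script writes:

```ini
; Hypothetical /etc/systemd/system/prover-monitor-agent.timer (sketch)
[Unit]
Description=Periodic Aztec prover monitoring report

[Timer]
OnBootSec=120
OnUnitActiveSec=300
Unit=prover-monitor-agent.service

[Install]
WantedBy=timers.target
```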
- **Telegram (prepare first):** Create a bot via @BotFather and obtain its token. Send `/start` to the bot you created (required). You can find your chat ID with @myidbot.
- **Requirements:** bash, curl, jq (`sudo apt install jq` on Debian/Ubuntu). Docker is typically required so the agent can read container logs and so menu items 3–5 (logs, version, container checks) can run. For the systemd timer, run the install (option 1) with sudo or as root.
- **Run** — one-line command (download from GitHub, chmod, run):

  ```bash
  curl -o monitor_aztec_prover.sh https://raw.githubusercontent.com/pittpv/aztec-prover-monitoring/main/monitor_aztec_prover.sh && chmod +x monitor_aztec_prover.sh && ./monitor_aztec_prover.sh
  ```

  For subsequent runs: `cd $HOME && ./monitor_aztec_prover.sh` (use the directory where you saved the script if it is not `$HOME`).
- Optional: pass the language as the first argument: `en`, `ru`, or `tr`.
- In the menu choose **1** — enter `TELEGRAM_BOT_TOKEN`, `TELEGRAM_CHAT_ID` (one or more), and optionally `MONITOR_CENTRAL_CHAT_ID` for a shared dashboard chat.
- To install the systemd timer, run option 1 with sudo or as root (e.g. after `su -` / `sudo su -`):

  ```bash
  sudo ./monitor_aztec_prover.sh
  # or as root (from the directory with the script):
  ./monitor_aztec_prover.sh
  ```
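Before running the install, the requirements listed above can be checked up front. A minimal sketch (the command list simply mirrors the dependencies named in this section):

```shell
# Check for the tools the script needs. Docker is optional-but-typical,
# so its absence is only warned about rather than treated as fatal.
missing=""
for cmd in bash curl jq; do
  command -v "$cmd" >/dev/null 2>&1 || missing="$missing $cmd"
done
command -v docker >/dev/null 2>&1 || echo "note: docker not found (log/version/container menus will not work)"
if [ -n "$missing" ]; then
  echo "missing:$missing"
else
  echo "dependencies ok"
fi
```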
The agent parses lines that are only emitted when specific debug scopes are enabled (polls to the broker, epoch/slot on the node, broker DB epochs, agent job lifecycle). Set `LOG_LEVEL` per service as shown below to enable the scopes the monitoring relies on.
| Service | Variable (in node/broker .env) | Value |
|---|---|---|
| prover-node | `LOG_LEVEL_NODE` → `LOG_LEVEL` in compose | `info;debug:json-rpc:client;debug:prover-node:epoch-monitor` |
| prover-broker | `LOG_LEVEL_BROKER` → `LOG_LEVEL` in compose | `info;debug:prover-client:proving-broker` |
| prover-agent | `LOG_LEVEL_AGENT` → `LOG_LEVEL` in compose | `info;debug:prover-client:proving-agent;debug:prover-client:proving-agent:job-controller` |
If you run with plain info only (or without these debug scopes), the Telegram report may omit getProvingJob poll counts, epoch / slot, broker epochs, and agent job / proof timings — while ERROR / WARN counts still work.
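For example, the values from the table can be written as plain shell-style assignments in the node/broker `.env`; how your compose file maps these variables onto each container's `LOG_LEVEL` depends on your setup:

```shell
# Values copied from the log-level table; the variable names follow the
# LOG_LEVEL_NODE/BROKER/AGENT convention shown there.
LOG_LEVEL_NODE="info;debug:json-rpc:client;debug:prover-node:epoch-monitor"
LOG_LEVEL_BROKER="info;debug:prover-client:proving-broker"
LOG_LEVEL_AGENT="info;debug:prover-client:proving-agent;debug:prover-client:proving-agent:job-controller"
echo "log levels set"
```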
| Variable | Meaning |
|---|---|
| `TELEGRAM_BOT_TOKEN` | Bot token from @BotFather |
| `TELEGRAM_CHAT_ID` | Comma-separated chat IDs for this host’s alerts |
| `MONITOR_CENTRAL_CHAT_ID` | Optional; if set, reports go here (e.g. central monitoring); defaults to same as `TELEGRAM_CHAT_ID` |
| `PROVER_MONITOR_AGENT_DIR` | Where `agent.sh` and `agent.log` live |
| `PROVER_NODE_LOG_FILE`, `PROVER_BROKER_LOG_FILE`, `PROVER_AGENT_LOG_FILE` | If set and readable, used instead of `docker logs` |
| `PROVER_METRICS_URL` | HTTP(S) URL returning Prometheus text for extra metrics in the message |
| `NETWORK` | `mainnet` or `testnet` — selects Etherscan vs Sepolia links for L1 txs |
| `PROVER_MONITOR_LOG_TAIL`, `PROVER_MONITOR_LOG_MAX_SIZE` | Tail lines per log; rotate the local `agent.log` when it grows too large |
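To illustrate what "Prometheus text" means for `PROVER_METRICS_URL`, here is a hedged sketch that extracts one gauge from a scrape page. The metric name `aztec_l2_block_height` and its value are invented for illustration; the real endpoint's metric names may differ:

```shell
# With a real endpoint you would use: METRICS=$(curl -s "$PROVER_METRICS_URL")
# Here a fabricated scrape page stands in for the HTTP response.
METRICS=$(cat <<'EOF'
# HELP aztec_l2_block_height Current L2 block height (hypothetical name)
# TYPE aztec_l2_block_height gauge
aztec_l2_block_height 12345
EOF
)
# Prometheus text format: one "<name> <value>" sample per non-comment line.
l2_height=$(printf '%s\n' "$METRICS" | awk '$1 == "aztec_l2_block_height" { print $2 }')
echo "L2 height: $l2_height"
```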
1. Install monitoring agent (Telegram) — creates `agent.sh`, writes/updates `.env-aztec-prover`, installs systemd units when run as root
2. Remove monitoring agent — stops/disables the timer (with sudo), removes the agent directory
3. View logs — choose Node, Broker, or Agent; streams container logs (needs Docker and a running container)
4. Check Aztec/Prover version — `docker exec` + CLI `--version` in a prover container (needs Docker)
5. Check containers — status of node/broker/agent; optional sync hints from `prover-node` logs (needs Docker)
0. Exit
- Status: `systemctl status prover-monitor-agent.timer`
- Logs: `journalctl -u prover-monitor-agent.service -n 50`
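Besides the timer, the agent can also be invoked by hand to send one report immediately. This sketch assumes the default `PROVER_MONITOR_AGENT_DIR` layout described above:

```shell
# Run one report manually and show the tail of the local log.
AGENT_DIR="${PROVER_MONITOR_AGENT_DIR:-$HOME/aztec-prover-agent}"
if [ -x "$AGENT_DIR/agent.sh" ]; then
  "$AGENT_DIR/agent.sh"
  tail -n 20 "$AGENT_DIR/agent.log"
else
  echo "agent.sh not found in $AGENT_DIR (run install first)"
fi
```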
This script is not an official Aztec product and is provided as is.
Questions, bugs, or feedback:
https://t.me/+DLsyG6ol3SFjM2Vk
MIT License
