Automated daily AI news research agent that searches the web, compiles a structured briefing, publishes it to Notion and/or Obsidian, and delivers a styled summary to Microsoft Teams and Slack -- powered by Claude Code, Codex, Gemini, or Copilot CLIs with automatic fallback. Supports both macOS (launchd) and Windows (Task Scheduler). Includes an on-demand Custom Brief mode for deep multi-agent research on any topic. Obsidian integration enables graph visualization of topic relationships across briefings. Fully automated pipeline with zero manual intervention required after setup.
Note
Live AI News Notion page: https://hoangsonw.notion.site/9c34d052d9354beda82a3423e2d2f404?v=d43c53fe405c4896bfd95ad0cc22246f
AI News Briefing is a fully automated pipeline that runs every morning on your machine. It supports four AI CLI engines -- Claude Code, Codex (OpenAI), Gemini (Google), and Copilot (GitHub) -- with automatic fallback if your preferred engine is unavailable. The selected engine runs in headless mode as a news research agent: searching the web across 9 AI-related topics, compiling the results into a two-tier briefing (TL;DR + full report), writing the finished page directly to a Notion database, optionally publishing to an Obsidian vault with graph-ready wikilinks, and optionally posting a styled Adaptive Card summary to Microsoft Teams and Slack.
The entire process -- from triggering to publishing -- requires zero human intervention. You wake up, open Notion or Obsidian (or Teams or Slack), and your daily AI briefing is already there.
Custom Brief extends this with on-demand deep research: pick any topic, and 5 parallel research agents investigate it from different angles (breaking news, technical analysis, industry impact, trends, and policy). Results publish to Notion, Obsidian, Teams, and/or Slack -- or just print to your terminal.
Keeping up with AI news across models, tools, policy, funding, and open source is a full-time job. This project compresses that into an automated daily digest that covers 9 topic areas in a consistent format, delivered to your Notion workspace before you start your workday. When you need to go deeper on a specific topic, the custom brief gives you a comprehensive research report on demand.
Beyond the automated daily briefing, this project has evolved into a Unified Research Ops Ecosystem containing 10 deeply integrated intelligence plugins available natively for Claude Code, OpenAI Codex, and Gemini CLI.
The ecosystem includes 10 production-ready agents covering News & Media (ai-news, last30days, podcast-summarizer), Tech & Dev (trend-spotter, paper-reader, repo-auditor), and Business (earnings, competitor-intel, startup-scout, crypto-tracker). These plugins and skills enable AI agents to perform a wide range of research tasks -- from tracking social media trends to analyzing financial filings to summarizing technical papers.
See PLUGINS.md for the complete developer manual.
We leverage the scheduling capabilities of the OS (launchd on macOS, Task Scheduler on Windows) to trigger a shell script at a specific time each day. The script reads a prompt template, selects an AI CLI engine (via the AI_BRIEFING_CLI env var or automatic fallback: claude → codex → gemini → copilot), and invokes it in headless mode. The agentic prompt performs web searches, compiles results, and calls the Notion MCP tool to create a new page in the database.
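The engine-selection logic can be sketched as follows (illustrative only; `select_engine` is a hypothetical helper name, and the real logic lives inline in briefing.sh / briefing.ps1):

```shell
# Honor AI_BRIEFING_CLI if set; otherwise probe each engine on PATH
# in the documented fallback order and use the first one found.
select_engine() {
  if [ -n "${AI_BRIEFING_CLI:-}" ]; then
    echo "$AI_BRIEFING_CLI"
    return 0
  fi
  for engine in claude codex gemini copilot; do
    if command -v "$engine" >/dev/null 2>&1; then
      echo "$engine"
      return 0
    fi
  done
  echo "no AI CLI engine found on PATH" >&2
  return 1
}
```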
flowchart TD
subgraph Schedulers
A1[macOS launchd]
A2[Windows Task Scheduler]
end
A1 -->|8:00 AM daily| B1[briefing.sh]
A2 -->|8:00 AM daily| B2[briefing.ps1]
B1 -->|Reads| C[prompt.md]
B2 -->|Reads| C
B1 -->|Invokes| D[AI Engine - Claude/Codex/Gemini/Copilot]
B2 -->|Invokes| D
D -->|Step 1: Search| E[WebSearch Tool]
E -->|9 topics x multiple queries| F[Web Results]
F -->|Step 2: Compile| G[Two-Tier Briefing]
G -->|Step 3: Write| H[Notion MCP]
H -->|Creates page| I[Notion Database]
D -->|Step 4: Write card| J2[logs/YYYY-MM-DD-card.json]
D -->|Step 5: Write obsidian md| J3[logs/YYYY-MM-DD-obsidian.md]
B1 -->|If Teams webhook set| T[notify-teams.sh]
B2 -->|If Teams webhook set| T2[notify-teams.ps1]
T -->|POST card.json| V[Teams Webhook]
T2 -->|POST card.json| V
V --> W[Teams Channel]
B1 -->|If Slack webhook set| S1[notify-slack.sh]
B2 -->|If Slack webhook set| S2[notify-slack.ps1]
S1 -->|Convert + POST| SV[Slack Webhook]
S2 -->|Convert + POST| SV
SV --> SW[Slack Channel]
B1 -->|If Obsidian vault set| OB1[publish-obsidian.sh]
B2 -->|If Obsidian vault set| OB2[publish-obsidian.ps1]
OB1 -->|Copy + create topics| OBV[Obsidian Vault]
OB2 -->|Copy + create topics| OBV
B1 -->|Logs output| J[logs/YYYY-MM-DD.log]
B2 -->|Logs output| J
J -->|Auto-cleanup| K[Delete logs older than 30 days]
Data flow summary:
- The platform scheduler fires the entry point script (`briefing.sh` on macOS, `briefing.ps1` on Windows) at the configured time each day.
- The script reads the prompt from `prompt.md` and passes it to the selected AI CLI engine in headless mode.
- The AI engine executes the prompt as an agentic task -- performing web searches, compiling results, and calling the Notion MCP tool.
- Notion receives the finished briefing as a new database page.
- Claude also writes an Obsidian-formatted markdown file with `[[wikilinks]]` for graph visualization.
- If the `AI_BRIEFING_TEAMS_WEBHOOK` environment variable is set, the entry point script calls `notify-teams.sh`/`notify-teams.ps1`, which validates and POSTs the pre-built `logs/YYYY-MM-DD-card.json` file (written by Claude in Step 4) to the configured Teams webhook.
- If the `AI_BRIEFING_SLACK_WEBHOOK` environment variable is set, the entry point script calls `notify-slack.sh`/`notify-slack.ps1`, which converts the Teams card to Slack Block Kit format and POSTs it to the configured Slack webhook.
- If the `AI_BRIEFING_OBSIDIAN_VAULT` environment variable is set, the entry point script calls `publish-obsidian.sh`/`publish-obsidian.ps1`, which copies the briefing to the vault and creates topic stub pages for graph nodes.
- Logs are written to a date-stamped file and automatically pruned after 30 days.
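Each optional destination is gated on a single environment variable, so each can be toggled independently. A minimal sketch of that gating (illustrative; `enabled_destinations` is a hypothetical helper, not code from the scripts):

```shell
# Decide which publishers run, purely from environment variables.
# Notion is always written (via MCP); the rest are opt-in.
enabled_destinations() {
  dests="notion"
  [ -n "${AI_BRIEFING_OBSIDIAN_VAULT:-}" ] && dests="$dests obsidian"
  [ -n "${AI_BRIEFING_TEAMS_WEBHOOK:-}" ] && dests="$dests teams"
  [ -n "${AI_BRIEFING_SLACK_WEBHOOK:-}" ] && dests="$dests slack"
  echo "$dests"
}
```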
| Requirement | Details |
|---|---|
| OS | macOS or Windows 10/11 |
| At least one AI CLI | Any of the following: Claude Code (claude), Codex (codex), Gemini (gemini), or Copilot (copilot). If multiple are installed, the system uses AI_BRIEFING_CLI or falls back automatically. |
| Notion MCP | The Notion MCP server must be configured in your AI CLI's MCP settings with access to your workspace |
| WebSearch tool | Available by default in most AI CLIs (no extra setup needed) |
| Python 3.x | Optional; needed only for Slack notifications (the teams-to-slack.py converter). The legacy card builder is no longer used. |
| Make (optional) | GNU Make for using the Makefile task runner (winget install GnuWin32.Make on Windows, pre-installed on macOS) |
git clone https://github.com/hoangsonww/AI-News-Briefing
cd AI-News-Briefing

# Make the shell script executable
chmod +x ~/ai-news-briefing/briefing.sh
# Install the launchd plist
cp ~/ai-news-briefing/com.ainews.briefing.plist ~/Library/LaunchAgents/
launchctl load ~/Library/LaunchAgents/com.ainews.briefing.plist
# Verify the agent is registered
launchctl list | grep ainews

Optionally, set up the manual trigger command:
mkdir -p ~/.local/bin
cat > ~/.local/bin/ai-news << 'EOF'
#!/bin/bash
echo "Starting AI News Briefing..."
launchctl kickstart "gui/$(id -u)/com.ainews.briefing"
echo "Running. Check Notion or: tail -f ~/ai-news-briefing/logs/$(date +%Y-%m-%d).log"
EOF
chmod +x ~/.local/bin/ai-news

Make sure `~/.local/bin` is in your `PATH` (add `export PATH="$HOME/.local/bin:$PATH"` to your `~/.zshrc` if needed).
Open PowerShell and run the installer script:
cd $env:USERPROFILE\ai-news-briefing
.\install-task.ps1

This registers a Task Scheduler task named `AiNewsBriefing` that runs daily at 8:00 AM under the current user account.
To customize the time:
.\install-task.ps1 -Hour 7 -Minute 30

macOS: Edit `com.ainews.briefing.plist` and modify the `StartCalendarInterval` section, then reload:
launchctl unload ~/Library/LaunchAgents/com.ainews.briefing.plist
launchctl load ~/Library/LaunchAgents/com.ainews.briefing.plist

Windows: Re-run the installer with new time parameters:
.\install-task.ps1 -Hour 9 -Minute 0

Set the `AI_BRIEFING_CLI` environment variable to choose which engine to use:
export AI_BRIEFING_CLI=codex  # or: claude, gemini, copilot

If not set, the daily briefing tries engines in fallback order: claude → codex → gemini → copilot, using the first one found. For custom briefs, pass `--cli`:
./custom-brief.sh --cli gemini --topic "AI safety"

Set the `AI_BRIEFING_MODEL` environment variable to override the default model for any engine:
export AI_BRIEFING_MODEL=opus

Or edit the entry point script for your platform:
- macOS: `briefing.sh` -- change the `--model sonnet` flag
- Windows: `briefing.ps1` -- change the `--model sonnet` argument
Model trade-offs (Claude Code):
| Model | Speed | Cost | Quality |
|---|---|---|---|
| `haiku` | Fastest | Lowest | Good for basic summaries |
| `sonnet` | Balanced | Moderate | Recommended default |
| `opus` | Slowest | Highest | Best for deep analysis |
Edit the --max-budget-usd value in briefing.sh (macOS) or briefing.ps1 (Windows). The default is 2.00 (USD per run). This acts as a safety cap -- if the agent's token usage would exceed this amount, the run stops.
Edit prompt.md and modify the "Topics to Search" list. You can add, remove, or rename topics. If you change the number of topics, also update the "Topics": 9 value in the Notion properties section at the bottom of the prompt.
Once installed, the briefing runs automatically every day at the scheduled time (default: 8:00 AM). No action needed.
Using Make (recommended, cross-platform):
make run # Run in foreground (auto-detects engine)
make run-bg # Run in background
make run-scheduled # Trigger via OS scheduler
make run CLI=codex   # Use a specific engine

Platform-native:
# macOS
ai-news
# or: launchctl kickstart "gui/$(id -u)/com.ainews.briefing"
# Windows (PowerShell or cmd)
schtasks /run /tn AiNewsBriefing

make tail   # Cross-platform: tail today's log

Or platform-native:
# macOS
tail -f ~/ai-news-briefing/logs/$(date +%Y-%m-%d).log
# Windows (PowerShell)
Get-Content "$env:USERPROFILE\ai-news-briefing\logs\$(Get-Date -Format 'yyyy-MM-dd').log" -Wait

A typical successful run takes 2-4 minutes and ends with a message like:
2026-03-09 14:08:08 Briefing complete. Check Notion for today's report.
The project includes a cross-platform Makefile that auto-detects your OS and routes commands to the correct platform tools. Requires GNU Make.
| Target | Description |
|---|---|
| `make help` | Show all targets with descriptions |
| `make run` | Run briefing in foreground |
| `make run-bg` | Run briefing in background |
| `make run-scheduled` | Trigger via OS scheduler |
| `make tail` | Tail today's log live |
| `make log` | Print today's log |
| `make logs` | List all log files with sizes |
| `make log-date D=YYYY-MM-DD` | Print log for a specific date |
| `make clean-logs` | Delete logs older than 30 days |
| `make purge-logs` | Delete all logs |
| `make install` | Install platform scheduler |
| `make uninstall` | Remove platform scheduler |
| `make status` | Show scheduler status |
| `make check` | Verify at least one AI CLI is installed |
| `make validate` | Validate all project files and prompt structure |
| `make prompt` | Print the current prompt |
| `make info` | Show config summary (engines, model, paths) |
| `CLI=<engine>` | Parameter: choose engine (claude, codex, gemini, copilot) |
The scripts/ directory contains 13 utility script pairs (.sh for macOS/Linux, .ps1 for Windows) for managing and troubleshooting the system.
| Script | Description | Example Usage |
|---|---|---|
| `health-check` | Verify full setup (CLI, files, prompt structure, scheduler) | `bash scripts/health-check.sh` |
| `log-summary` | Tabular summary of recent runs with status and size | `bash scripts/log-summary.sh 14` |
| `log-search` | Search across all logs by keyword with context | `bash scripts/log-search.sh --search "Anthropic" --context 3` |
| `dry-run` | Run the full pipeline without writing to Notion | `bash scripts/dry-run.sh --model haiku --budget 1.00` |
| `test-notion` | Quick Notion MCP connectivity test | `bash scripts/test-notion.sh` |
| `cost-report` | Estimate API costs from log history | `bash scripts/cost-report.sh --month 2026-03` |
| `export-logs` | Archive logs to tar.gz (Unix) or zip (Windows) | `bash scripts/export-logs.sh --from 2026-03-01 --to 2026-03-09` |
| `backup-prompt` | Version prompt.md with timestamped backups | `bash scripts/backup-prompt.sh --list` |
| `topic-edit` | Add, remove, or list topics in prompt.md | `bash scripts/topic-edit.sh --add "AI Hardware" "GPU news"` |
| `update-schedule` | Change daily run time | `bash scripts/update-schedule.sh --hour 7 --minute 30` |
| `notify` | Send native OS notification for briefing status | `bash scripts/notify.sh` |
| `notify-teams` | Validate and POST pre-built card.json to Microsoft Teams webhooks | `bash scripts/notify-teams.sh` or `--all` for multiple |
| `notify-slack` | Convert Teams card to Slack Block Kit and POST to Slack webhooks | `bash scripts/notify-slack.sh` or `--all` for multiple |
| `uninstall` | Remove scheduler; `--all` also removes logs and backups | `bash scripts/uninstall.sh --all` |
Windows equivalents use the same names with .ps1 extension and PowerShell parameter syntax (e.g., .\scripts\health-check.ps1, .\scripts\topic-edit.ps1 -Action add -Name "AI Hardware" -Description "GPU news").
The prompt expects a Notion database with at least these properties:
| Property | Type | Example Value |
|---|---|---|
| `Date` | Title | 2026-03-09 - AI Daily Briefing |
| `Status` | Select or Text | Complete |
| `Topics` | Number | 9 |
You can add additional properties to the database (tags, priority, etc.), but the three above are what the agent writes to.
The prompt references a specific Notion data source ID:
856794cc-d871-4a95-be2d-2a1600920a19
To use your own database, replace this value in prompt.md (in the Step 3 section). To find your data source ID:
- Open Claude Code and ensure the Notion MCP is connected.
- Ask Claude: "List my Notion data sources" or use the `notion-search` MCP tool.
- Copy the `data_source_id` for the database you want to use.
- Replace the ID in `prompt.md`.
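The swap can be done by hand in an editor, or with a quick substitution like the following (illustrative; `replace_data_source_id` is a hypothetical helper and the new ID below is a placeholder):

```shell
# Replace one ID string with another in a file. Writes to a temp file
# and moves it back, avoiding sed -i (whose syntax differs between
# GNU and BSD sed).
replace_data_source_id() {
  file="$1"; old_id="$2"; new_id="$3"
  sed "s/$old_id/$new_id/g" "$file" > "$file.tmp" && mv "$file.tmp" "$file"
}

# Example (placeholder new ID):
# replace_data_source_id prompt.md \
#   "856794cc-d871-4a95-be2d-2a1600920a19" "your-data-source-id"
```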
Each generated page contains:
- TL;DR -- 10-15 bullet points covering the biggest stories (roughly a 1-minute read)
- Divider
- Full Briefing -- 9 sections (one per topic), each with 3-8 detailed bullet points and source attribution
- Key Takeaways table -- a summary table of major trends and signals
The notification system supports both Microsoft Teams and Slack after a successful briefing run. Both channels are optional and can be enabled independently.
- Shared source artifact: Claude writes one card file, `logs/YYYY-MM-DD-card.json`, in Step 4.
- Teams path: `notify-teams.sh`/`.ps1` validates JSON and POSTs the payload directly to Teams webhook URL(s).
- Slack path: `notify-slack.sh`/`.ps1` converts the same card file to Slack Block Kit using `scripts/teams-to-slack.py`, then POSTs to Slack webhook URL(s).
- Fan-out support: Both channels support semicolon-separated webhook URLs.
flowchart LR
A["Claude writes logs/YYYY-MM-DD-card.json"] --> B{"Channel"}
B --> C["notify-teams.sh / notify-teams.ps1"]
B --> D["notify-slack.sh / notify-slack.ps1"]
C --> E["POST Adaptive Card JSON to Teams webhooks"]
D --> F["teams-to-slack.py conversion"]
F --> G["POST Block Kit JSON to Slack webhooks"]
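The semicolon fan-out described above can be sketched like this (illustrative; the real notify scripts also validate the card JSON and check HTTP status, and `split_webhooks`/`notify_all` are hypothetical names):

```shell
# Turn "url1;url2;..." into one URL per line.
split_webhooks() {
  printf '%s\n' "$1" | tr ';' '\n'
}

# POST the same card payload to every configured webhook URL.
notify_all() {
  card_file="$1"; webhook_list="$2"
  split_webhooks "$webhook_list" | while read -r url; do
    [ -n "$url" ] || continue
    curl -sS -X POST -H 'Content-Type: application/json' \
         --data @"$card_file" "$url"
  done
}
```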
- Create a Teams webhook (Power Automate workflow). Full guide: NOTIFY_TEAMS.md
- Set `AI_BRIEFING_TEAMS_WEBHOOK`.
- Run the briefing or call `notify-teams` directly.
- Create a Slack app incoming webhook at api.slack.com/apps.
- Set `AI_BRIEFING_SLACK_WEBHOOK`.
- Run the briefing or call `notify-slack` directly.
macOS / Linux
# Teams
export AI_BRIEFING_TEAMS_WEBHOOK="https://first-teams-webhook"
# Slack
export AI_BRIEFING_SLACK_WEBHOOK="https://hooks.slack.com/services/T.../B.../..."
# Obsidian vault
export AI_BRIEFING_OBSIDIAN_VAULT="/path/to/your/obsidian/vault"

Windows (PowerShell)
[Environment]::SetEnvironmentVariable("AI_BRIEFING_TEAMS_WEBHOOK", "https://first-teams-webhook", "User")
[Environment]::SetEnvironmentVariable("AI_BRIEFING_SLACK_WEBHOOK", "https://hooks.slack.com/services/T.../B.../...", "User")
[Environment]::SetEnvironmentVariable("AI_BRIEFING_OBSIDIAN_VAULT", "C:\path\to\your\obsidian\vault", "User")

Configure multiple URLs with semicolons:
export AI_BRIEFING_TEAMS_WEBHOOK="https://teams-webhook-1;https://teams-webhook-2"
export AI_BRIEFING_SLACK_WEBHOOK="https://slack-webhook-1;https://slack-webhook-2"

- `notify-teams`/`notify-slack` default: send to first URL only.
- `--all` (bash) / `-All` (PowerShell): send to all configured URLs.
- The current `briefing.sh` and `briefing.ps1` call both notifiers with all URLs.
# First URL only
bash scripts/notify-teams.sh
bash scripts/notify-slack.sh
# All configured URLs
bash scripts/notify-teams.sh --all
bash scripts/notify-slack.sh --all

macOS / Linux
bash scripts/notify-teams.sh --all --card-file logs/2026-03-24-card.json
bash scripts/notify-slack.sh --all --card-file logs/2026-03-24-card.jsonWindows (PowerShell)
.\scripts\notify-teams.ps1 -All -CardFile .\logs\2026-03-24-card.json
.\scripts\notify-slack.ps1 -All -CardFile .\logs\2026-03-24-card.json

For full setup and troubleshooting, see NOTIFY_TEAMS.md and NOTIFY_SLACK.md.
In addition to the daily automated briefing, you can run a deep research briefing on any topic on demand. This spawns 5 parallel research agents that investigate the topic from different angles, then synthesizes findings into a comprehensive news briefing.
flowchart LR
subgraph "Input"
A["--topic 'AI in healthcare'"]
B["--notion --obsidian --teams --slack"]
end
A --> C[custom-brief.sh / .ps1]
B --> C
C -->|5 parallel agents| D[Deep Research]
D --> E[Structured Briefing]
E --> F[Terminal Output]
E -->|optional| G[Notion]
E -->|optional| G2[Obsidian Vault + Graph]
E -->|optional| H[Teams]
E -->|optional| I[Slack]
# Full research with all destinations
./custom-brief.sh --topic "AI in healthcare" --notion --obsidian --teams --slack
# Terminal + Notion + Obsidian
./custom-brief.sh -t "quantum computing" -n -o
# Obsidian only (for graph visualization)
./custom-brief.sh -t "AI coding tools" -o
# Use a specific engine
./custom-brief.sh --cli codex --topic "AI safety" --notion
# Interactive mode (prompts for topic, engine, and destinations)
./custom-brief.sh
# PowerShell
.\custom-brief.ps1 -Topic "AI regulation EU" -Notion -Obsidian -Teams
# PowerShell with engine override
.\custom-brief.ps1 -Cli gemini -Topic "AI regulation EU" -Notion
# Make
make custom-brief T="open source LLMs" NOTION=1 OBSIDIAN=1 TEAMS=1
make custom-brief T="AI safety" CLI=codex NOTION=1

Each custom briefing contains:

- TL;DR -- 5-10 bullet points with key findings
- Thematic sections -- 3-6 sections organized by theme with linked citations and dates
- Key Trends & Outlook -- strategic implications table
- Sources -- numbered list of every URL cited
Every finding includes a clickable source link and publication date. Full details: CUSTOM_BRIEF.md
The prompt (prompt.md) instructs Claude to execute four sequential steps within a single agentic session:
Claude uses the WebSearch tool to perform multiple searches per topic, targeting news from the past 24-48 hours. The search strategy includes date-qualified queries like "[topic] news today 2026-03-09" and company-specific queries.
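The query pattern can be illustrated as follows (a sketch only; the actual searches are issued by the engine's WebSearch tool, and `build_queries` is a hypothetical name):

```shell
# Emit one date-qualified news query per topic, stamped with today's
# date, mirroring the "[topic] news today YYYY-MM-DD" pattern that the
# prompt describes.
build_queries() {
  today=$(date +%Y-%m-%d)
  for topic in "$@"; do
    printf '%s news today %s\n' "$topic" "$today"
  done
}
```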
Search results are synthesized into a two-tier format:
- Tier 1 (TL;DR): 10-15 one-sentence bullet points covering the top stories across all topics. Designed as a quick-scan summary.
- Tier 2 (Full Briefing): 9 sections with detailed coverage, source attribution, and a closing Key Takeaways table.
Claude checks whether a page for today already exists (captured during Step 0b). If one exists, it updates the page in place. If not, it creates a new page in the target database. This prevents duplicate pages when the briefing runs multiple times in a day.
Claude writes a complete Adaptive Card JSON payload to logs/YYYY-MM-DD-card.json. This is the exact file that gets POSTed to the Teams webhook -- no parser or transformation sits between the AI output and Teams. The card includes a styled header, TL;DR bullets, topic sections, and an action button linking to the full Notion page.
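A pre-flight check in the spirit of what notify-teams performs can be sketched as follows (illustrative; `validate_card` is a hypothetical name, and `python3 -m json.tool` stands in as a dependency-free JSON validator):

```shell
# Refuse to send if the card file is missing or is not valid JSON.
validate_card() {
  [ -f "$1" ] || { echo "missing"; return 1; }
  python3 -m json.tool "$1" >/dev/null 2>&1 || { echo "invalid"; return 1; }
  echo "ok"
}

# Usage: validate_card "logs/$(date +%Y-%m-%d)-card.json"
```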
Here is what a card looks like in Teams:
And in Slack:
| # | Topic | What It Covers |
|---|---|---|
| 1 | Claude Code / Anthropic | New features, releases, Anthropic announcements, blog posts |
| 2 | OpenAI / Codex / ChatGPT | Model updates, Codex features, ChatGPT capabilities, API changes |
| 3 | AI Coding IDEs | Cursor, Windsurf, GitHub Copilot, Xcode AI, JetBrains AI, Google Antigravity |
| 4 | Agentic AI Ecosystem | Agent frameworks (LangChain, CrewAI, AutoGen), MCP updates, new agent products |
| 5 | AI Industry | New model releases, benchmarks, major company announcements |
| 6 | Open Source AI | Llama, Mistral, DeepSeek, Hugging Face, open-weight model releases |
| 7 | AI Startups & Funding | Funding rounds, acquisitions, notable startup launches |
| 8 | AI Policy & Regulation | Government policy, EU AI Act, state laws, AI safety developments |
| 9 | Dev Tools & Frameworks | Vercel, Next.js, React Native, TypeScript, AI-related developer tooling |
All logs are stored in the logs/ directory within the project:
| File | Contents |
|---|---|
| `YYYY-MM-DD.log` | Full output from each run (timestamps, Claude output, success/failure) |
| `launchd-stdout.log` | (macOS only) Standard output captured by launchd |
| `launchd-stderr.log` | (macOS only) Standard error captured by launchd |
Additionally, the logs/YYYY-MM-DD-card.json file contains the raw Adaptive Card JSON generated by Claude for Teams notifications.
macOS:
cat ~/ai-news-briefing/logs/$(date +%Y-%m-%d).log
tail -f ~/ai-news-briefing/logs/$(date +%Y-%m-%d).log

Windows:
Get-Content "$env:USERPROFILE\ai-news-briefing\logs\$(Get-Date -Format 'yyyy-MM-dd').log"
Get-Content "$env:USERPROFILE\ai-news-briefing\logs\$(Get-Date -Format 'yyyy-MM-dd').log" -Wait

Logs older than 30 days are automatically deleted at the end of each run on both platforms. The macOS-specific launchd-stdout.log and launchd-stderr.log files are not date-stamped and may need periodic manual cleanup.
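On the Unix side, the 30-day cleanup amounts to a single find invocation (a sketch; `prune_logs` is a hypothetical wrapper around what the entry scripts do at the end of each run):

```shell
# Delete date-stamped logs last modified more than 30 days ago.
# -mtime +30 matches files whose modification time is over 30 days old.
prune_logs() {
  find "$1" -name '*.log' -type f -mtime +30 -delete
}

# Usage: prune_logs ~/ai-news-briefing/logs
```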
We have several test suites, with 201 non-blocking tests that verify syntax, structure, arg handling, template substitution, card JSON, notification error paths, Obsidian publishing, and cross-platform portability. No external services are called.
# All bash tests (macOS / Linux / Git Bash)
bash tests/run-all.sh
# Individual suites
bash tests/test-custom-brief.sh
bash tests/test-daily-brief.sh
bash tests/test-notifications.sh
bash tests/test-portability.sh

# PowerShell (Windows)
powershell -ExecutionPolicy Bypass -File tests\test-all.ps1

| Suite | Tests | Coverage |
|---|---|---|
| `test-custom-brief.sh` | 37 | Args, template substitution, prompt structure, skill |
| `test-daily-brief.sh` | 56 | Prompt steps, 9 topics, 8 changelog URLs, entry scripts, dedup file |
| `test-notifications.sh` | 37 | Card JSON validity, Adaptive Card structure, converter, error handling |
| `test-portability.sh` | 26 | Bash 3.2 compat, awk, date, -f not -x, ANSI color safety |
| `test-all.ps1` | 91 | PowerShell syntax, all prompts, template substitution, cards, docs |
Full documentation: TESTS.md
This error occurs when the CLAUDECODE environment variable is set, which happens if you trigger the script from inside a Claude Code terminal session. Both briefing.sh and briefing.ps1 unset this variable automatically, but if you see this error:
- Make sure you are running the briefing from a regular terminal, not from within Claude Code.
- Verify the entry point script contains the `unset CLAUDECODE` / `$env:CLAUDECODE = $null` line.
macOS (launchd):
- Mac was asleep: launchd will run the job when the Mac wakes up if the scheduled time was missed. If Power Nap is disabled or the lid was closed, the job may not fire until the next login.
- Powered off at scheduled time: The job is skipped entirely for that day.
- Agent not loaded: Verify with `launchctl list | grep ainews`. If missing, reload the plist.
- Path issues: The plist sets a custom `PATH` and `HOME`. If Claude is installed in a non-standard location, update the `PATH` in the plist.
Windows (Task Scheduler):
- Machine was off/asleep: `StartWhenAvailable` is enabled, so the task runs as soon as the machine wakes or the user logs in.
- Task not registered: Verify with `schtasks /query /tn AiNewsBriefing`. If missing, re-run `install-task.ps1`.
- Execution policy: The task action uses `-ExecutionPolicy Bypass`. If this is overridden by group policy, contact your IT admin or run `briefing.ps1` manually.
- Check that the Notion MCP is configured in Claude Code's MCP settings.
- Verify the data source ID in `prompt.md` matches a database your Notion integration has access to.
- Look at the log output -- Claude typically prints a Notion URL on success.
If the log shows the run stopped mid-way, the --max-budget-usd cap may have been reached. Increase the budget in the entry point script or switch to a cheaper model.
Running the briefing multiple times in a day updates the existing Notion page rather than creating a duplicate. The agent checks for an existing page during Step 0b and updates it if found. Logs append to the same date-stamped file, so all runs for a given day are captured in one log.
If the configured engine is unavailable (not installed, quota exceeded, or authentication expired), the system handles it differently depending on the mode:
- Daily briefing (automatic fallback): When `AI_BRIEFING_CLI` is not set, the entry script tries each engine in order: claude → codex → gemini → copilot. If an engine is not found on `PATH`, it is skipped. If all engines fail, the run is logged as a failure.
- Daily briefing (explicit engine): When `AI_BRIEFING_CLI` is set to a specific engine, only that engine is tried. If it fails, no fallback occurs and the run is logged as a failure.
- Custom brief: In interactive mode, the REPL shows which engines are available so you can pick one that works. In non-interactive mode, pass `--cli <engine>` to choose explicitly.
To see which engines are installed on your machine:
make info

With the default configuration (opus model, 9 topics):
| Component | Estimated Cost per Run |
|---|---|
| Input tokens (prompt + search results) | ~$0.30-0.60 |
| Output tokens (briefing + tool calls) | ~$0.20-0.40 |
| WebSearch tool calls (~15-25 searches) | ~$0.15-0.40 |
| Total per run | ~$0.70-1.40 |
| Monthly (daily runs) | ~$21-42 |
| Custom Brief (on-demand) | ~$1.50-3.00 |
Actual costs vary based on the volume of news, number of search queries, and briefing length. The --max-budget-usd 2.00 cap ensures no single run exceeds $2.00.
ai-news-briefing/
├── index.html                  # Landing page / project wiki
├── wiki/                       # Landing page assets
│   ├── style.css               # Styles
│   └── script.js               # Interactions
├── Makefile                    # Cross-platform task runner
├── scripts/                    # Utility scripts (sh + ps1 pairs)
│   ├── health-check.sh/.ps1    # Verify full setup
│   ├── log-summary.sh/.ps1     # Summarize recent runs
│   ├── log-search.sh/.ps1      # Search across all logs
│   ├── dry-run.sh/.ps1         # Test without writing to Notion
│   ├── test-notion.sh/.ps1     # Test Notion MCP connectivity
│   ├── cost-report.sh/.ps1     # Estimate API costs from logs
│   ├── export-logs.sh/.ps1     # Archive logs to tar.gz/zip
│   ├── backup-prompt.sh/.ps1   # Version and restore prompt.md
│   ├── topic-edit.sh/.ps1      # Add/remove/list topics
│   ├── update-schedule.sh/.ps1 # Change daily run time
│   ├── notify.sh/.ps1          # Send native OS notifications
│   ├── notify-teams.sh/.ps1    # Post briefing to Microsoft Teams
│   ├── notify-slack.sh/.ps1    # Post briefing to Slack
│   ├── teams-to-slack.py       # Convert Teams Adaptive Card JSON to Slack Block Kit
│   ├── build-teams-card.py     # Legacy card builder (not used in current flow)
│   └── uninstall.sh/.ps1       # Full cleanup and removal
├── briefing.sh                 # Daily briefing entry point (bash)
├── briefing.ps1                # Daily briefing entry point (PowerShell)
├── prompt.md                   # Daily briefing agent prompt
├── custom-brief.sh             # Custom topic briefing entry point (bash)
├── custom-brief.ps1            # Custom topic briefing entry point (PowerShell)
├── prompt-custom-brief.md      # Custom briefing deep research prompt
├── commands/
│   ├── ai-news-briefing.md     # Claude Code skill: daily briefing
│   └── custom-brief.md         # Claude Code skill: custom topic briefing
├── com.ainews.briefing.plist   # macOS launchd schedule definition
├── install-task.ps1            # Windows Task Scheduler installer
├── tests/                      # 201 non-blocking tests
│   ├── run-all.sh              # Bash test runner
│   ├── test-custom-brief.sh    # Custom brief tests
│   ├── test-daily-brief.sh     # Daily briefing tests
│   ├── test-notifications.sh   # Notification pipeline tests
│   ├── test-portability.sh     # Cross-platform portability tests
│   └── test-all.ps1            # PowerShell test suite
├── logs/                       # Run logs (git-ignored)
├── backups/                    # Prompt backups (git-ignored)
├── .gitignore
├── ARCHITECTURE.md             # Detailed architecture documentation
├── E2E_FLOW.md                 # End-to-end pipeline walkthrough
├── CUSTOM_BRIEF.md             # Custom topic briefing documentation
├── TESTS.md                    # Test suite documentation
├── LOGS.md                     # Log tailing and management guide
├── SETUP.md                    # Full setup guide
├── NOTIFY_TEAMS.md             # Teams integration setup guide
├── NOTIFY_SLACK.md             # Slack integration setup guide
└── README.md                   # This file
Created by Son Nguyen -- an AI researcher and developer focused on building tools that help people harness AI in their daily lives.
Thanks for checking out the project! If you have any questions, suggestions, or want to contribute, feel free to open an issue or submit a pull request.




