This project is centered around a local website interface: a dark multi-chat workspace where you can talk to the AI normally, upload files, paste links, and review results in a persistent chat history.
It supports multiple AI providers through LiteLLM, including OpenAI, Anthropic, Google Gemini, Together AI (Llama), and MiniMax.
People are very welcome to improve this project, remix it, fork it, stress-test it, and point out weak spots. If you find better libraries, cleaner architecture, safer workflows, stronger prompts, better UI ideas, or broken edge cases, I genuinely want that feedback.
Set TEXT_AI_MODEL in your .env to any of these:
| Model | Provider |
|---|---|
| gpt-5.4-pro | OpenAI |
| gpt-5.4-mini | OpenAI |
| gpt-4o | OpenAI |
| gpt-4o-mini | OpenAI |
| claude-opus-4-6 | Anthropic |
| claude-sonnet-4-6 | Anthropic |
| claude-haiku-4-5 | Anthropic |
| claude-3.5-sonnet | Anthropic |
| gemini-3.1-pro | Google Gemini |
| gemini-3-flash | Google Gemini |
| gemini-2.5-flash-lite | Google Gemini |
| llama-4-maverick | Together AI |
| llama-4-scout | Together AI |
| llama-3.3-70b | Together AI |
| minimax-m2.7 | MiniMax |
| minimax-m2.5-lightning | MiniMax |
You can also pass any LiteLLM-compatible model string directly.
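As a rough illustration (not the project's actual summarizer code), the configured model string is ultimately what LiteLLM routes on; some providers may need a prefixed name such as `gemini/...`:

```python
import os

import litellm

# Sketch only: LiteLLM picks the provider from the model string, so anything
# LiteLLM understands can be placed in TEXT_AI_MODEL. The real call lives in
# src/ai_scraper_bot/services/summarizer.py and may look different.
response = litellm.completion(
    model=os.environ.get("TEXT_AI_MODEL", "gpt-4o-mini"),
    api_key=os.environ.get("TEXT_AI_API_KEY"),
    messages=[{"role": "user", "content": "Reply with one word: ready?"}],
)
print(response.choices[0].message.content)
```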
Core features:
- normal AI chat with memory per chat
- multiple chats in a left sidebar
- rename, clear, delete, and clear-all chat controls
- upload files directly in the browser
- summarize websites
- summarize YouTube links
- analyze documents
- analyze images
- analyze audio files
- analyze video files
The website is designed to feel like a normal AI chat screen, but with the project's existing multimodal extraction pipeline behind it.
- multi-chat sidebar
- a limit of 10 chats
- per-chat memory
- per-chat drafts and attachment state
- pause button while the AI is working
- local persistent history through SQLite
- banner hide/show toggle
The website can work with:
- websites
- YouTube links
- text questions
- uploaded documents
- uploaded images
- uploaded audio
- uploaded video
Supported file types include:
- text and markup: `.txt`, `.md`, `.csv`, `.json`, `.html`, `.xml`
- documents: `.pdf`, `.docx`, `.pptx`, `.xlsx`, `.rtf`
- images: `.png`, `.jpg`, `.jpeg`, `.avif`
- audio: `.mp3`, `.wav`, `.m4a`, `.aac`, `.flac`, `.ogg`
- video: `.mp4`, `.mov`
The project tries to avoid dead-end failures by using layered extraction. For websites, that review typically includes:
- page text extraction
- related useful URL collection
- image review when relevant
- directly downloadable website-video review when possible
- final summary focused on the subject matter, not just page structure
The current YouTube flow is transcript-first:
- optional YouTube Data API metadata
- youtube-transcript-api
- yt-dlp subtitle attempt
- DownSub + Playwright
- SaveSubs + Playwright
- metadata fallback
That means the app can still give something useful even if direct YouTube access is partially blocked.
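As a sketch of that idea only (the stage names below are placeholders; the real stage logic lives in `services/youtube.py`, `downsub.py`, and `savesubs.py`), the chain amounts to trying each source in order and keeping the first non-empty transcript:

```python
from typing import Callable, Iterable, Tuple

def first_successful(url: str, stages: Iterable[Tuple[str, Callable[[str], str]]]) -> Tuple[str, str]:
    """Try each (name, fetcher) in order; return the first non-empty result."""
    for name, fetch in stages:
        try:
            text = fetch(url)
        except Exception:
            continue  # a blocked or failing stage should not end the whole run
        if text:
            return name, text  # remember which path actually succeeded
    return "metadata fallback", ""

# Placeholder stages for illustration only:
stages = [
    ("youtube-transcript-api", lambda url: ""),         # pretend this one is blocked
    ("yt-dlp subtitles", lambda url: "transcript..."),  # pretend this one works
]
print(first_successful("https://youtu.be/example", stages))
```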
The active visual-description path uses your configured AI model.
That means:
- image descriptions come from your AI model
- video key-frame descriptions come from your AI model
- the older BLIP caption path is no longer the normal active description flow
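A minimal sketch of that path, assuming the OpenAI-style multimodal message format that LiteLLM forwards to vision-capable models (the project's actual code in `src/ai_scraper_bot/services/vision.py` may differ):

```python
import base64
import os

import litellm

def describe_image(path: str) -> str:
    # Encode the image as a data URL; LiteLLM passes OpenAI-style image_url
    # content parts through to vision-capable models.
    with open(path, "rb") as f:
        data_url = "data:image/png;base64," + base64.b64encode(f.read()).decode()
    response = litellm.completion(
        model=os.environ.get("TEXT_AI_MODEL", "gpt-4o-mini"),
        api_key=os.environ.get("TEXT_AI_API_KEY"),
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in two sentences."},
                {"type": "image_url", "image_url": {"url": data_url}},
            ],
        }],
    )
    return response.choices[0].message.content
```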
The media pipeline can separate:
- transcript / speech analysis
- visual analysis
- music analysis
So a silent video can still be reviewed visually, and a music-heavy file can still produce music analysis even if speech transcription is weak.
The music layer is free/local-friendly by default:
- Essentia: local music features such as BPM, key, and loudness-like values
- AcoustID: optional song ID using local fingerprinting plus an API key
- MIRFLEX: optional repo hook for future music tagging/classification extensions
Important:
- Essentia is the default main music feature layer
- AcoustID is optional
- MIRFLEX is optional
- if one music stage fails, the others should still continue
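A minimal sketch of that keep-going behaviour (function and key names invented for illustration; the real logic lives in `src/ai_scraper_bot/services/music_analysis.py`):

```python
from typing import Callable, Dict

def analyze_music(audio_path: str, analyzers: Dict[str, Callable[[str], dict]]) -> dict:
    """Run each optional music stage independently; one failure never stops the rest."""
    results: dict = {}
    for name, run in analyzers.items():
        try:
            results[name] = run(audio_path)
        except Exception as exc:
            # Record the failure instead of aborting, so the summary can stay honest
            # about which libraries were attempted vs. actually produced output.
            results[name] = {"error": str(exc)}
    return results
```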
This project tries to stay honest about what actually happened.
It carries extra runtime and extraction context into the summary pipeline, including things like:
- which YouTube path actually succeeded
- which music libraries were attempted
- which music libraries produced output
- recent runtime diary lines
- which media was actually reviewed
That helps reduce fake claims like saying a fallback worked when it did not.
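As a purely hypothetical illustration of the kind of context that travels with a summary request (the actual field names in the project may differ):

```python
# Invented example of runtime/extraction context attached to a summary request.
extraction_context = {
    "youtube_path_used": "yt-dlp subtitles",
    "music_libraries_attempted": ["essentia", "acoustid"],
    "music_libraries_with_output": ["essentia"],
    "media_reviewed": ["transcript", "key frames"],
    "runtime_diary_tail": ["downsub blocked", "savesubs skipped"],
}
```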
Most important files for the website version:
- website entrypoint: `src/ai_scraper_bot/webapp.py`
- web backend: `src/ai_scraper_bot/web/service.py`, `src/ai_scraper_bot/web/store.py`
- web frontend: `src/ai_scraper_bot/web/static/index.html`, `src/ai_scraper_bot/web/static/app.css`, `src/ai_scraper_bot/web/static/app.js`
- shared config: `src/ai_scraper_bot/config.py`
- shared prompts: `src/ai_scraper_bot/prompts.py`
- summarizer / LiteLLM integration: `src/ai_scraper_bot/services/summarizer.py`
- YouTube extraction: `src/ai_scraper_bot/services/youtube.py`
- website extraction: `src/ai_scraper_bot/services/website.py`
- transcript-site fallbacks: `src/ai_scraper_bot/services/downsub.py`, `src/ai_scraper_bot/services/savesubs.py`
- transcription: `src/ai_scraper_bot/services/transcription.py`
- local video analysis: `src/ai_scraper_bot/services/video_analysis.py`
- local vision: `src/ai_scraper_bot/services/vision.py`
- local music analysis: `src/ai_scraper_bot/services/music_analysis.py`
- file parsing: `src/ai_scraper_bot/parsers/file_parser.py`
The full setup guide is in docs/SETUP.md.
At a high level:
- install Python 3.11
- install system tools like `ffmpeg` and `tesseract`
- create and activate `.venv`
- install `requirements.txt`
- install Playwright Chromium
- create `.env` from `.env.example`
- fill in `TEXT_AI_MODEL` and `TEXT_AI_API_KEY` for your chosen AI provider
- optionally add audio transcription, YouTube Data API, AcoustID, and MIRFLEX settings
- run the website:

```bash
cd "/path/to/project"
source .venv/bin/activate
PYTHONPATH=src python -m ai_scraper_bot.webapp
```

Then open:

http://127.0.0.1:8000
Useful website env vars:
- `WEBAPP_HOST`
- `WEBAPP_PORT`
- `WEBAPP_DB_PATH`
Default local values are:
- host: `127.0.0.1`
- port: `8000`
- database: `./.webapp/webapp.sqlite`
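A small sketch of how those defaults can be read and overridden, assuming plain `os.environ` access (the project's config code may organize this differently):

```python
import os

# Defaults match the values listed above; set the env vars to override them.
host = os.environ.get("WEBAPP_HOST", "127.0.0.1")
port = int(os.environ.get("WEBAPP_PORT", "8000"))
db_path = os.environ.get("WEBAPP_DB_PATH", "./.webapp/webapp.sqlite")
print(f"Serving on http://{host}:{port} with chat history in {db_path}")
```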
If you want a solid default setup:
- `ENABLE_LOCAL_VISION=true`
- `ENABLE_MUSIC_DETECTION=true`
- `MUSIC_ESSENTIA_ENABLED=true`
- `MUSIC_ACOUSTID_ENABLED=false` until AcoustID is configured
- `MUSIC_MIRFLEX_ENABLED=false` until MIRFLEX is actually set up
- `YOUTUBE_COOKIE_MODE_ENABLED=false`
- `YOUTUBE_DOWNSUB_ENABLED=true`
- `YOUTUBE_SAVESUBS_ENABLED=true`
- `YOUTUBE_TRANSCRIPT_SITE_HEADLESS=true`
Before sharing or publishing, do not expose:
- `.env`
- `.venv`
- downloaded test media
- cookies files
- browser profile exports
- real API keys
- local machine-specific secrets
The repo is meant to keep those out through .gitignore, but you should still double-check before uploading.
If YouTube extraction fails, that does not always mean the whole app is broken. Check which stage actually failed:
- `youtube-transcript-api`
- `yt-dlp`
- DownSub
- SaveSubs
- metadata fallback
The local Whisper setup is currently intended to run with numpy<2.
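A quick way to check what your environment actually has:

```python
import numpy

print(numpy.__version__)  # the local Whisper path expects a 1.x (numpy<2) release
```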
Some .mp4 or .mov files are silent. In that case:
- transcript-based audio analysis will not run
- visual review can still run
- music analysis can only run if an audio stream actually exists
The current code treats MIRFLEX as an optional repo hook. The rest of the music chain should still continue even if MIRFLEX itself is not fully wired.
For the exact detailed installation and configuration flow, read docs/SETUP.md.
For a complete reference of every environment variable, see ENVREADME.md.