This is an LLM-maintained knowledge base, designed to act as an automated, intelligent repository for your papers, articles, transcripts, and personal notes. Rather than organizing and summarizing everything manually, this wiki relies on autonomous AI workflows to ingest new information, build conceptual connections, and retrieve knowledge upon request.
This repository maintains strict separation between immutable source files and the dynamic knowledge graph generated and managed by the LLM.
- `raw/`: The immutable storage directory. Drop your source files (PDFs, Markdown, transcripts) here. The LLM only reads these files and never modifies or deletes them.
- `wiki/`: Structured directory generated by the LLM, containing:
  - `summaries/`: Detailed summary pages for each ingested raw document.
  - `concepts/`: Granular entities and distinct concepts extracted from sources, each linking back to the relevant documents.
- `index.md`: The central catalog where the LLM records all entities, concepts, and source summaries. It serves as the entry point for navigating the synthesized knowledge.
- `log.md`: The chronological ingestion log that tracks what the LLM has processed and added to the wiki.
- `AGENTS.md`: The foundational schema and configuration rulebook that instructs the LLM on how to manage, maintain, and structure the wiki.
- `.agents/`: Hidden directory containing the workflows and skills (e.g., `ingest`, `query`, `lint`) that power the AI agent's operations.
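Putting the pieces together, the repository layout looks roughly like this (a sketch assembled from the description above, not a verbatim listing):

```
.
├── raw/            # immutable source files (PDFs, Markdown, transcripts)
├── wiki/
│   ├── summaries/  # one summary page per ingested document
│   └── concepts/   # granular concept pages linking back to sources
├── index.md        # central catalog and entry point
├── log.md          # chronological ingestion log
├── AGENTS.md       # schema and rulebook for the LLM
└── .agents/        # workflows and skills (ingest, query, lint)
```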
This project is meant to be used entirely through an LLM. Here are the primary workflows you can trigger:
- Drop a new source file (e.g., a PDF of an article) into the `raw/` folder.
- Ask the LLM to process it by invoking the skill:

  `Use /ingest to process the new file in the raw folder`

- What happens? The AI reads the source, saves a summary into `wiki/summaries/`, extracts concepts into individual Markdown files within `wiki/concepts/`, reviews and updates existing concept files when relevant, updates `index.md` with links, and appends the run to `log.md`.
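The ingest flow above can be sketched in Python. This is a deliberately simplified, hypothetical version: the real work (summarization, concept extraction) is done by the LLM, and the stub logic and helper names here are illustrative assumptions, not the repo's actual implementation.

```python
from datetime import date
from pathlib import Path


def ingest(raw_file: Path, wiki: Path) -> None:
    """Hypothetical sketch of /ingest: summarize a raw source, record
    concept pages, and update index.md and log.md."""
    text = raw_file.read_text()

    # 1. Save a summary page (stubbed as a text excerpt; the real
    #    summary would be written by the LLM).
    summary = wiki / "summaries" / f"{raw_file.stem}.md"
    summary.parent.mkdir(parents=True, exist_ok=True)
    summary.write_text(f"# Summary of {raw_file.name}\n\n{text[:200]}\n")

    # 2. Extract concepts into individual pages (stubbed here as
    #    capitalized words; the LLM would pick real entities).
    concepts_dir = wiki / "concepts"
    concepts_dir.mkdir(exist_ok=True)
    concepts = {w.strip(".,") for w in text.split() if w.istitle()}
    for concept in sorted(concepts):
        (concepts_dir / f"{concept}.md").write_text(
            f"# {concept}\n\nSource: [[summaries/{raw_file.stem}]]\n")

    # 3. Update the central index and append the run to the log.
    with (wiki / "index.md").open("a") as f:
        f.write(f"- [[summaries/{raw_file.stem}]]\n")
    with (wiki / "log.md").open("a") as f:
        f.write(f"- {date.today()}: ingested {raw_file.name}\n")
```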
Ask the LLM a complex question related to your notes using the query skill:

`Use /query to answer: What is the core theme of the articles ingested last week?`
- What happens? The AI reads `index.md` to identify relevant pages, reads those individual Markdown pages, and synthesizes a well-researched answer supported entirely by your knowledge base. It can even generate Derived Outputs like slides or graphs if requested.
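The retrieval half of that flow can be sketched as follows. Again a hypothetical, simplified version: it only gathers the context that the LLM would then synthesize an answer from, and the function name and `[[wikilink]]` convention are assumptions based on the layout described above.

```python
import re
from pathlib import Path


def gather_context(wiki: Path, question: str) -> str:
    """Hypothetical sketch of the /query retrieval step: use index.md as
    the entry point, pull in every page it links to, and return the
    combined text for the LLM to answer from."""
    index_text = (wiki / "index.md").read_text()

    # Collect wiki-style [[links]] from the index; each names a page on disk.
    pages = re.findall(r"\[\[([^\]]+)\]\]", index_text)

    chunks = [f"Question: {question}"]
    for page in pages:
        path = wiki / f"{page}.md"
        if path.exists():
            chunks.append(f"--- {page} ---\n{path.read_text()}")
    return "\n\n".join(chunks)
```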
Ask the LLM to keep the wiki healthy by invoking the linting skill:
`Use /lint to health-check the wiki`
- What happens? The AI scans for orphan pages, flags contradictory information, verifies that cross-references link correctly, and can optionally use web searches to fill gaps or suggest new concept pages.
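Two of those lint checks are mechanical enough to sketch directly (the others, like spotting contradictions, genuinely need the LLM). This is a hypothetical illustration under the same assumed layout and `[[wikilink]]` convention as above:

```python
import re
from pathlib import Path


def lint(wiki: Path) -> list[str]:
    """Hypothetical sketch of two /lint checks: orphan concept pages that
    index.md never mentions, and index links with no file behind them."""
    index_text = (wiki / "index.md").read_text()
    problems = []

    # Orphans: concept pages never referenced anywhere in the index.
    for page in sorted((wiki / "concepts").glob("*.md")):
        if page.stem not in index_text:
            problems.append(f"orphan page: concepts/{page.stem}")

    # Broken links: [[target]] references whose target file is missing.
    for target in re.findall(r"\[\[([^\]]+)\]\]", index_text):
        if not (wiki / f"{target}.md").exists():
            problems.append(f"broken link: {target}")
    return problems
```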
This project relies on AI agent workflows (as documented in AGENTS.md and the existing Skills) and can seamlessly integrate with standard Markdown-based knowledge management systems like Obsidian. Local settings for such external editors are ignored via .gitignore to maintain a clean repository.
This project is inspired by Andrej Karpathy's idea of an LLM-maintained wiki. You can read his original thought process here.