LLM Wiki

This is an LLM-maintained knowledge base, designed to act as an automated, intelligent repository for your papers, articles, transcripts, and personal notes. Rather than organizing and summarizing everything manually, this wiki relies on autonomous AI workflows to ingest new information, build conceptual connections, and retrieve knowledge upon request.

Core Architecture

This repository maintains strict separation between immutable source files and the dynamic knowledge graph generated and managed by the LLM.

  • raw/: The immutable storage directory. Drop your source files (PDFs, Markdown, transcripts) here. The LLM only reads these files; it never modifies or deletes them.
  • wiki/: Structured directory generated by the LLM containing:
    • summaries/: Detailed summary pages for each ingested raw document.
    • concepts/: Extracted granular entities and distinct concepts linking back to relevant sources.
  • index.md: The central catalog where the LLM records all entities, concepts, and source summaries. It serves as the entry point for navigating the synthesized knowledge.
  • log.md: The chronological ingestion log that tracks what the LLM has processed and added to the wiki.
  • AGENTS.md: The foundational schema and configuration rulebook that instructs the LLM on how to manage, maintain, and structure the wiki.
  • .agents/: Hidden directory containing the workflows and skills (e.g., ingest, query, lint) that power the AI agent's operations.
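
The layout above can be summarized as a directory tree (only the paths listed above; annotations paraphrase their roles):

```
.
├── raw/           # immutable source files (PDFs, Markdown, transcripts)
├── wiki/
│   ├── summaries/ # one summary page per ingested document
│   └── concepts/  # one page per extracted concept, linking back to sources
├── index.md       # central catalog and entry point
├── log.md         # chronological ingestion log
├── AGENTS.md      # schema and rulebook for the LLM
└── .agents/       # workflows and skills (ingest, query, lint)
```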

How to Use this Wiki

The entire premise of this project is to interact with it via an LLM. Here are the primary workflows you can trigger:

1. Ingesting New Knowledge (ingest)

  1. Drop a new source file (e.g., a PDF of an article) into the raw/ folder.

  2. Ask the LLM to process it by invoking the skill:

    Use /ingest to process the new file in the raw folder
    
  3. What happens? The AI reads the source, saves a summary into wiki/summaries/, extracts concepts into individual Markdown files within wiki/concepts/, reviews and updates existing concept files when relevant, updates index.md with links, and appends the run to log.md.
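The file-level bookkeeping of an ingest run can be sketched mechanically. The function below is an illustrative stand-in, not the actual skill: `ingest_stub` is a hypothetical helper name, and the real skill additionally extracts concepts and updates index.md.

```python
from datetime import date
from pathlib import Path

def ingest_stub(root: Path, raw_name: str) -> Path:
    """Illustrative bookkeeping for one /ingest run: write a summary
    stub under wiki/summaries/ and append an entry to log.md.
    (The real skill also extracts concepts and updates index.md.)"""
    stem = Path(raw_name).stem
    summary = root / "wiki" / "summaries" / f"{stem}.md"
    summary.parent.mkdir(parents=True, exist_ok=True)
    summary.write_text(f"# Summary: {stem}\n\nSource: raw/{raw_name}\n")
    # Append the run to the chronological ingestion log.
    with (root / "log.md").open("a") as f:
        f.write(f"- {date.today().isoformat()}: ingested raw/{raw_name}\n")
    return summary
```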

2. Querying and Synthesis (query)

Ask the LLM a complex question related to your notes using the query skill:

    Use /query to answer: What is the core theme of the articles ingested last week?

  • What happens? The AI reads index.md to identify relevant pages, reads those individual markdown pages, and synthesizes a well-researched answer supported entirely by your knowledge base. It can even generate Derived Outputs like slides or graphs if requested.
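The first step of a query run, scanning index.md for relevant pages, could look roughly like the sketch below. This is an assumption about how such a lookup might work (the helper name `find_relevant_pages` is hypothetical, and it assumes index.md uses standard `[title](path)` Markdown links):

```python
import re

LINK_RE = re.compile(r"\[([^\]]+)\]\(([^)]+)\)")

def find_relevant_pages(index_text: str, keywords: list[str]) -> list[str]:
    """Return wiki page paths whose link titles in index.md mention
    any of the given keywords (case-insensitive)."""
    hits = []
    for title, target in LINK_RE.findall(index_text):
        if any(k.lower() in title.lower() for k in keywords):
            hits.append(target)
    return hits
```

In practice the LLM does this semantically rather than by keyword match, so this only approximates the lookup step.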

3. Linting and Maintenance (lint)

Ask the LLM to keep the wiki healthy by invoking the linting skill:

    Use /lint to health-check the wiki

  • What happens? The AI actively scans for orphan pages, flags contradictory information, verifies that cross-references are linked correctly, and can optionally use web searches to fill in gaps or suggest new concept pages.
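Of the lint checks, orphan detection is the most mechanical: a page under wiki/ is an orphan if index.md never links to it. A minimal sketch, assuming index.md uses standard `(path.md)` Markdown link targets (`find_orphans` is a hypothetical helper, not part of the actual skill):

```python
import re
from pathlib import Path

def find_orphans(root: Path) -> list[Path]:
    """Return wiki pages that index.md never links to."""
    index_text = (root / "index.md").read_text()
    # Collect every .md link target that appears in index.md.
    linked = set(re.findall(r"\(([^)]+\.md)\)", index_text))
    orphans = []
    for page in (root / "wiki").rglob("*.md"):
        rel = page.relative_to(root).as_posix()
        if rel not in linked:
            orphans.append(page)
    return orphans
```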

Built With

This project relies on AI agent workflows (documented in AGENTS.md and the skills under .agents/) and integrates seamlessly with standard Markdown-based knowledge management systems like Obsidian. Local settings for such external editors are excluded via .gitignore to keep the repository clean.

Inspiration

This project is inspired by Andrej Karpathy's idea of an LLM-maintained wiki.
