
shihwesley-plugins

A personal collection of Claude Code plugins built to sharpen AI-assisted development — better codebase understanding, tighter context windows, structured workflows, and faster research loops.

Seven plugins across three categories, each solving a specific friction point in agent-driven development. Together, they compose into an automated orchestration pipeline that takes a plan from outline to committed code.

Anthropic Skill Guide Compliant

All skills in this repository and across all shihwesley plugins follow Anthropic's Complete Guide to Building Skills for Claude (January 2026). This means:

  • Every skill lives in its own folder with a SKILL.md entry point
  • Frontmatter uses kebab-case name fields matching folder names
  • Descriptions follow the [what it does] + [when to use it] + [key capabilities] formula with explicit trigger phrases
  • Progressive disclosure: core instructions in SKILL.md, detailed docs in references/
  • No README.md inside skill folders
  • A skill-template/ is provided for creating new compliant skills

Skills included: release, create-skill-graph, traverse-template, skill-template, interactive-planning, orchestrator

Table of Contents

  • Install
  • Plugins
  • Skill Graphs
  • Orchestration (preview)
  • How They Work Together
  • Context Window Management
  • Update
  • License

Install

/plugin marketplace add shihwesley/shihwesley-plugins

Then install individual plugins as needed (see tables below).

Plugins

Codebase Intelligence

Help your AI agent understand, map, and efficiently consume codebases.

| Plugin | What it does | Install |
| --- | --- | --- |
| mercator-ai | Merkle-enhanced codebase mapping with O(1) change detection | /plugin install mercator-ai@shihwesley-plugins |
| chronicler | Ambient .tech.md generation with freshness tracking | /plugin install chronicler@shihwesley-plugins |
| code-simplifier-tldr | TLDR-aware codebase simplifier — surveys via AST summaries, edits surgically | /plugin install code-simplifier-tldr@shihwesley-plugins |

  • mercator-ai — Generates CODEBASE_MAP.md with file purposes, architecture layers, and dependency graphs. Uses a merkle manifest (docs/.mercator.json) so re-runs only re-analyze changed files instead of rescanning everything. A sketch of this diff-and-skip pattern follows the list below.
  • chronicler — Watches your source files and auto-generates .tech.md docs alongside them. Tracks freshness per file — flags stale docs when the source changes, so documentation stays current without manual upkeep. Reads mercator-ai's merkle manifest to know which files changed, so it only regenerates docs for what's actually different.
  • code-simplifier-tldr — A TLDR-aware agent that simplifies your codebase. Reads AST summaries to survey file structure, identifies simplification targets (dead code, redundant abstractions, over-engineered patterns), then requests only the specific line ranges it needs to edit. Integrates with mercator-ai's merkle tree so it only considers files whose hash actually changed. Logs every change to a simplification log for auditability.
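
The pattern these three plugins share boils down to "hash everything once, then only touch files whose hash moved." Here is a minimal Python sketch of that idea, using a flat hash manifest rather than a true merkle tree; the docs/.mercator.json layout and field names below are assumptions, not mercator-ai's actual format.

```python
import hashlib
import json
from pathlib import Path

MANIFEST = Path("docs/.mercator.json")  # path from the docs above; key/value layout is assumed

def file_hash(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def changed_files(root: str = ".") -> list[Path]:
    """Return only the source files whose hash differs from the stored manifest."""
    old = json.loads(MANIFEST.read_text()) if MANIFEST.exists() else {}
    return [p for p in Path(root).rglob("*.py")       # illustrative: Python sources only
            if old.get(str(p)) != file_hash(p)]

def update_manifest(root: str = ".") -> None:
    """Re-hash everything once so the next run only re-analyzes files changed after this point."""
    MANIFEST.parent.mkdir(parents=True, exist_ok=True)
    current = {str(p): file_hash(p) for p in Path(root).rglob("*.py")}
    MANIFEST.write_text(json.dumps(current, indent=2))
```

A downstream consumer like chronicler or code-simplifier-tldr would call something like changed_files() and skip everything else.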

Workflow & Environment

Structure your planning process and manage runtime environments without manual switching.

| Plugin | What it does | Install |
| --- | --- | --- |
| interactive-planning | File-based planning with interactive gates and task tracking | /plugin install interactive-planning@shihwesley-plugins |
| orbit | Ambient dev environment management — auto-switches dev/test/staging/prod via Docker | /plugin install orbit@shihwesley-plugins |

  • interactive-planning — Combines Manus-style file-based planning with spec-driven multi-file architecture. In task mode, it creates a single task_plan.md with phases and dependencies. In spec mode, it generates a manifest with separate spec files per component. Uses AskUserQuestion at interactive gates to pause and get user input at key decision points — prevents agents from charging ahead on the wrong path. A sketch of a task-mode plan follows this list.
  • orbit — Classifies what you're doing (running tests, debugging, deploying) and auto-switches the right Docker environment. Manages container lifecycle, sidecars, and port mapping across dev/test/staging/prod so you never run tests against the wrong database.
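
As a rough illustration of task mode (the exact headings and checkbox format interactive-planning emits may differ), a task_plan.md could look like:

```markdown
# task_plan.md  (hypothetical example, not the plugin's literal template)

## Phase 1: Extract settings loader
- [ ] 1.1 Move config parsing into settings.py
- [ ] 1.2 Add unit tests for default values
Gate: AskUserQuestion to confirm the module boundary before starting Phase 2

## Phase 2: Wire loader into the CLI  (depends on Phase 1)
- [ ] 2.1 Replace inline parsing in cli.py
- [ ] 2.2 Update usage docs
```

In spec mode the same plan would instead become a manifest plus separate spec files per component, as described above.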

Research & Extraction

Sandboxed experimentation and capability extraction from external sources.

| Plugin | What it does | Install |
| --- | --- | --- |
| neo-research | Research pipeline — turns any topic into agent expertise via structured fetch, .mv2 indexing, and skill generation (21 MCP tools) | /plugin install neo-research@shihwesley-plugins |
| agent-reverse | Reverse engineer capabilities from repos, configs, articles into your workflow | /plugin install agent-reverse@shihwesley-plugins |

  • neo-research — Research pipeline for Claude Code. One command turns any topic into agent expertise: question tree, zero-context fetch, .mv2 indexing, REPL distillation, skill/subagent generation. Includes coupling assessment that flags domains where sub-topics form a web — when detected, recommends creating a skill graph (see below). Also runs Python in isolated Docker containers with 21 MCP tools — code execution, sub-agent orchestration, session persistence, and research automation, all sandboxed so nothing touches your host machine.
  • agent-reverse — Point it at a GitHub repo, local config, binary, or article and it extracts capabilities, patterns, and skills into your agent workflow. Includes security scanning, manifest tracking, and cross-agent restore so you can port setups between machines.

Skill Graphs

When you accumulate enough skills, docs, and agents for a domain, agents start struggling to find the right one. They grep linearly through directories, load files they don't need, and burn context on irrelevant content.

Skill graphs solve this. They're navigable Maps of Content (MOCs) that give agents a 3-read path to the right knowledge: read the index → pick a MOC → read the target file. Instead of scanning 35+ files, the agent reads 3.
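
As a hypothetical illustration (the file names, paths, and descriptions below are invented, not taken from the shipped graph), the three reads might look like this:

```text
Read 1: skill-graph/index.md                 (pick a MOC from one-line summaries)
  - moc-concurrency.md: async/await, actors, task groups
  - moc-networking.md: URLSession, sockets, background transfer

Read 2: moc-concurrency.md                   (pick the single target file)
  - .claude/docs/swift-concurrency.md: actor isolation rules with examples
  - .claude/skills/async-audit/SKILL.md: command that flags blocking calls

Read 3: .claude/docs/swift-concurrency.md    (the only full file actually loaded)
```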

The concept comes from arscontexta, a Claude Code plugin that builds Zettelkasten-style knowledge systems using Niklas Luhmann's slip-box methodology — MOCs as hubs, wikilinks for connections, progressive disclosure through layers. Their approach is designed for personal knowledge management: you describe how you think, have a conversation, and get a second brain as markdown files.

We took the MOC structure and adapted it for a different problem: agent navigation of domain knowledge. Instead of organizing personal notes, our skill graphs organize skills, docs, agents, and API references so Claude Code agents can find what they need without reading everything. The differences from arscontexta's approach:

  • Agents are the consumers, not humans. MOCs are machine-parseable with file paths and one-line descriptions, not prose paragraphs. An agent reads a MOC and knows exactly which file to open next.
  • TLDR integration. The read hook serves MOC summaries at ~400 tokens. Agents pick their MOC from the summary without reading the full index. Navigation decisions happen at the summary level.
  • Knowledge store as leaf layer. Instead of raw file reads at the bottom, agents query domain-specific .mv2 stores for API details. A search returns scored 500-char snippets instead of dumping an entire framework doc into context.
  • Coupling detection triggers graph creation. neo-research assesses coupling during its question tree phase — if 3+ sub-topics reference each other, it recommends creating a skill graph. The /create-skill-graph command then builds the graph from the research output automatically.
  • Domain-scoped stores. Apple's 300+ framework docs are split into 15 domain stores (spatial-computing, swiftui, ml-ai, etc.) so each MOC searches only its relevant domain.

The skill graph tools ship as two separate skills:

  • skills/create-skill-graph/SKILL.md — The /create-skill-graph command. Works from research output (question-tree.md branches as MOC candidates) or from existing files (inventories .claude/commands/, .claude/skills/, .claude/docs/, .claude/agents/, clusters by topic).
  • skills/traverse-template/SKILL.md — Template for the traversal protocol included in every graph. Teaches agents the 3-read progressive disclosure pattern. Agent-only, not user-invocable.

The release skill at skills/release/SKILL.md handles semver tagging, changelog generation, and GitHub Release creation for any plugin in the collection.

The first graph built with /create-skill-graph covers iOS/visionOS/Swift development — 10 MOCs mapping 35+ files across 8 command skills, 2 agents, 24 docs, and 15 domain knowledge stores.

Orchestration (preview)

/orchestrate is an automated pipeline that chains these plugins into a single execution flow. It takes output from /interactive-planning and runs it through plan review, skill matching, worktree isolation, parallel agent dispatch, testing, code review, and incremental commits — hands-off from plan to merged code.

Not yet packaged as a standalone plugin. Currently runs as a personal workflow on top of the installed plugins above. The plan is to ship it once the remaining dependencies (code-review, commit-split) are also pluginized.

Full pipeline breakdown

How They Work Together

These plugins aren't just a collection — they compose into a pipeline. The orchestrator consumes output from each plugin at different stages:

graph LR
    IP["interactive-planning"] --> O["/orchestrate"]
    MA["mercator-ai"] --> O
    CH["chronicler"] --> O
    TLDR["code-simplifier-tldr"] --> O
    AR["agent-reverse"] --> O
    NR["neo-research"] --> SG["skill graph"]
    SG --> O
    O --> OB["orbit"]
    O --> Ship["commit + merge"]

| Pipeline Stage | What happens | Plugins used |
| --- | --- | --- |
| Plan | User creates phased plan with tasks, specs, and dependencies | interactive-planning |
| Ingest | Reads plan + project context (codebase map, tech docs, AST summaries) | mercator-ai, chronicler, code-simplifier-tldr |
| Match | Finds the right skills and agent types for each phase | agent-reverse |
| Research | Fetches official docs for unfamiliar tech before agents write code | neo-research, Context7 |
| Graph | If coupling detected, agents navigate domain knowledge via skill graph MOCs instead of linear file reads | neo-research → /create-skill-graph |
| Gate | Shows full execution plan, gets user approval before touching code | |
| Execute | Creates git worktree per phase, dispatches 2-3 agents in parallel | |
| Test | Runs tests in an isolated environment per phase | orbit |
| Review | Automated code review, auto-fixes critical issues | code-review (coming soon) |
| Commit | Incremental commits per phase, merge back to feature branch | commit-split (coming soon) |

Stages marked "coming soon" work today as personal skills — they'll become installable plugins in a future release.

Context Window Management

A recurring problem with AI agents: they burn through context reading full source files, then lose track of earlier work when the window fills up. These plugins include a built-in protocol to prevent that.

The TLDR Read Protocol

A PreToolUse hook intercepts every Read tool call. Instead of returning the full file, it returns an AST summary — function signatures, class shapes, imports, key types — at ~200 tokens per file instead of thousands. Agents get enough structure to navigate and decide what to look at. When they need the actual code (to edit a specific function, for example), they request a line range, which bypasses the hook.

How it works (a code sketch follows the list):

  1. Agent calls Read on a file
  2. Hook checks the TLDR cache (keyed by merkle hash from mercator-ai, or MD5 fallback)
  3. Cache hit → returns the AST summary immediately
  4. Cache miss → generates a language-specific summary (Python, TypeScript, Swift, Markdown all have dedicated parsers), caches it, returns the summary
  5. Agent requests offset/limit → hook steps aside, full content returned
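
A hedged Python sketch of that flow follows; the real hook is a shell script at .claude/hooks/tldr-read-enforcer.sh, and the cache layout, hashing, and Read-tool parameter semantics here are assumptions.

```python
import hashlib
from pathlib import Path

CACHE_DIR = Path(".claude/cache/tldr")

def generate_summary(source: Path) -> str:
    # Placeholder dispatch; a Python-specific summarizer is sketched later in this section.
    return f"# TLDR summary of {source.name} (signatures only)"

def handle_read(path: str, offset: int | None = None, limit: int | None = None) -> str:
    source = Path(path)

    # Step 5: an explicit line-range request bypasses summarization entirely
    if offset is not None or limit is not None:
        lines = source.read_text().splitlines()
        start = (offset or 1) - 1
        return "\n".join(lines[start:start + (limit or len(lines))])

    # Step 2: cache key from the file's content hash (merkle hash when available; digest fallback here)
    key = hashlib.md5(source.read_bytes()).hexdigest()
    cached = CACHE_DIR / f"{key}.txt"
    if cached.exists():                      # Step 3: cache hit, return the stored AST summary
        return cached.read_text()

    summary = generate_summary(source)       # Step 4: language-specific summary on a cache miss
    CACHE_DIR.mkdir(parents=True, exist_ok=True)
    cached.write_text(summary)
    return summary
```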

What gets summarized:

  • Python — imports, constants, class/function signatures with type hints
  • TypeScript/JavaScript — exports, classes with methods, functions, types/interfaces, enums
  • Swift — structs/classes/enums/protocols, properties with types, full function signatures
  • Markdown — table of contents, headings, code block languages, key terms, links
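
For the Python case, here is a minimal sketch of what collapsing a file to signatures can look like, using the standard ast module. It is illustrative only; unlike the plugin's parser described above, it drops type hints, decorators, and docstrings for brevity.

```python
import ast

def summarize_python(source: str) -> str:
    """Collapse a Python module to imports plus class/function signatures."""
    tree = ast.parse(source)
    out: list[str] = []
    for node in tree.body:
        if isinstance(node, (ast.Import, ast.ImportFrom)):
            out.append(ast.unparse(node))                       # keep imports verbatim
        elif isinstance(node, ast.ClassDef):
            out.append(f"class {node.name}:")
            for item in node.body:                              # method signatures only
                if isinstance(item, (ast.FunctionDef, ast.AsyncFunctionDef)):
                    args = ", ".join(a.arg for a in item.args.args)
                    out.append(f"    def {item.name}({args}): ...")
        elif isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            args = ", ".join(a.arg for a in node.args.args)
            out.append(f"def {node.name}({args}): ...")
        elif isinstance(node, (ast.Assign, ast.AnnAssign)):     # module-level constants
            out.append(ast.unparse(node))
    return "\n".join(out)
```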

What bypasses the hook:

  • Small files (<100 lines) — not worth summarizing
  • Config files (JSON, YAML, TOML, lock files) — structure is the content
  • Test files — agents need full assertions to verify behavior
  • Line-range requests — the agent already knows what it wants
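
Those bypass rules reduce to a small predicate. A sketch, with the line threshold, extension list, and test-file heuristic all assumed rather than taken from the plugin:

```python
from pathlib import Path

CONFIG_EXTS = {".json", ".yaml", ".yml", ".toml", ".lock"}   # assumed list

def should_bypass(path: str, offset: int | None = None, limit: int | None = None) -> bool:
    p = Path(path)
    if offset is not None or limit is not None:      # line-range request: the agent knows what it wants
        return True
    if p.suffix.lower() in CONFIG_EXTS:              # config files: the structure is the content
        return True
    if "test" in p.name.lower():                     # test files: full assertions are needed
        return True
    try:
        return p.read_bytes().count(b"\n") < 100     # small files are not worth summarizing
    except OSError:
        return True                                  # unreadable here: let the normal Read handle it
```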

Token savings in practice

| Scenario | Without TLDR | With TLDR | Savings |
| --- | --- | --- | --- |
| Read one large file | ~2,500 tokens | ~200 tokens | 92% |
| Survey 100-file codebase | ~250,000 tokens | ~20,000 tokens | 92% |
| Merkle diff + TLDR (3 files changed out of 100) | ~250,000 tokens | ~650 tokens | 99.7% |

The merkle integration is the big win. When mercator-ai's manifest tells code-simplifier-tldr which files actually changed, the agent skips everything else entirely. On a 100-file codebase where 3 files changed, you go from 250k tokens to 650.

Setup

The hook is configured in .claude/settings.json:

{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Read",
        "hooks": [
          {
            "type": "command",
            "command": ".claude/hooks/tldr-read-enforcer.sh"
          }
        ]
      }
    ]
  }
}

Cache lives at .claude/cache/tldr/. Files are named by hash for O(1) lookup. The cache self-populates on first read and invalidates when the merkle hash or file content changes.

Update

/plugin marketplace update shihwesley-plugins

Individual plugins version independently. Push a fix to a plugin's repo and users pick it up on their next marketplace update — no changes needed here.

License

MIT