0xChrisHui/cc-vellum

Language / 语言: English | 中文


cc-vellum

An unofficial Claude Code Stop hook that durably archives conversations and generates AI summaries.

Two kinds of pain this solves:

  1. Crash recovery — if Claude Code or your terminal dies mid-session, the transcript has already been written to disk every turn. You don't lose the conversation, just the live continuation.
  2. Corpus accumulation — over time you build a searchable, summarized archive of every coding conversation. Handy for content creators, for remembering "how did I solve that last month?", or for grepping your own prior art before asking again.

Every turn of every Claude Code session is written to a local Markdown archive. When a session goes idle, an AI summarizer assigns it a title, importance score, and one-paragraph summary, then renames the file and updates a per-project catalog. The summarizer is pluggable — use your Claude Code subscription, OpenAI (or an OpenAI-compatible endpoint like DeepSeek / Groq), a local Ollama model, or a custom command.

  • Zero runtime dependencies (pure Node.js, CommonJS, no build step)
  • Session files are plain Markdown with YAML front-matter
  • Best-effort regex redaction of common secret formats before anything is written to disk (see Security below for exactly what's covered — treat archives as sensitive regardless)
  • Safe to install alongside other Claude Code hooks — setup / uninstall only manage entries they created

cc-vellum is not affiliated with Anthropic or with Vellum AI (vellum.ai / the vellum npm package). The cc- prefix marks it as a third-party Claude Code tool; the vellum part is a nod to the parchment that kept medieval manuscripts legible a thousand years later.

Platform status for v0.1.0: developed and tested on Windows 11 with Git Bash. Linux and macOS should work (the code branches on process.platform where needed) but are not yet verified end-to-end — if you try it on another OS, please open an issue with what you see.

Install

npm install -g cc-vellum
cc-vellum setup

setup is interactive. It will:

  1. Auto-detect your Claude Code CLI and (on Windows) Git Bash paths.
  2. Ask which AI summary provider to use.
  3. Write ~/.cc-vellum.json.
  4. Create a stable wrapper at ~/.cc-vellum/hook-stop.js and register it as a Stop hook in ~/.claude/settings.json.
  5. Create the log directory (default ~/.cc-vellum-logs).
  6. If PM2 is installed, register a cc-vellum-finalize cron job for automatic summarization. PM2 is optional — cc-vellum finalize --now always works.

For a non-interactive install (scripts, CI):

cc-vellum setup --yes --provider claude-cli --model haiku --log-dir ~/.cc-vellum-state --no-pm2

How it works

 Claude Code turn ends
        │
        ▼
 ~/.cc-vellum/hook-stop.js         ← stable wrapper registered in settings.json
        │   require()
        ▼
 lib/chat-logger.js                    ← writes <project>/.claude/logs/NNN_YY_MMddHH.md
        │                                 each turn, plus a pending record
        ▼
    (later, on idle)
        │
        ▼
 cc-vellum finalize   [or PM2 cron]
        │
        ▼
 lib/summarize-session.js              ← calls AI provider, validates result,
                                         renames to NNN_YY_start-end_slug.md,
                                         updates per-project catalog.md + index.md

The hook itself is hot-path code: it only writes the turn to disk. All AI calls happen later, driven by finalize.

The wrapper path never changes even if you reinstall the npm package or switch Node versions with nvm/fnm/volta — only the require(...) inside the wrapper is updated on the next setup. If the wrapper target ever goes missing, the hook writes to a debug log and exits silently so Claude Code is never blocked.

AI providers

Pick one at setup time, or later with cc-vellum config summaryProvider <name>.

claude-cli (default)

Shells out to your already-logged-in Claude Code CLI. No API key needed.

  • Cost: consumes your Claude Code subscription quota
  • Config: summaryModel (haiku / sonnet / opus)

openai

POST {baseUrl}/chat/completions with Authorization: Bearer <key> and response_format: { type: "json_object" }. Works with any backend that speaks this exact shape.

  • Cost: consumes your OpenAI (or compatible provider) API credits
  • Config: openaiApiKey, openaiBaseUrl, summaryModel (e.g. gpt-4o-mini)
  • Tested with OpenAI, DeepSeek, and Groq via openaiBaseUrl. Other OpenAI-compatible gateways (vLLM, LiteLLM, etc.) likely work too as long as they honor the same endpoint, auth header, and json_object response format. Azure OpenAI is not supported in v0.1.0 — its URL template and auth are different; use the custom provider or a LiteLLM-style shim in front.
  • Set the key with cc-vellum config openaiApiKey — this opens an interactive prompt so the value never enters your shell history. The key is masked (sk-****abcd) whenever cc-vellum config prints it.
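The request shape described above can be sketched as a plain payload builder. The model name and prompt are placeholders; only the endpoint shape (messages plus response_format) comes from this README, not from cc-vellum's source:

```javascript
// Build the chat-completions request the openai provider is described as
// sending. Reads OPENAI_API_KEY from the environment as the README allows.
function buildSummaryRequest(model, prompt) {
  return {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${process.env.OPENAI_API_KEY || ''}`,
    },
    body: JSON.stringify({
      model,
      messages: [{ role: 'user', content: prompt }],
      response_format: { type: 'json_object' }, // forces a JSON reply
    }),
  };
}
```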

lmstudio

Calls a local LM Studio server via its OpenAI-compatible endpoint. No API key required.

  • Cost: free (local inference)
  • Config: lmstudioBaseUrl (default http://localhost:1234/v1), summaryModel
  • Requires LM Studio running with a model loaded

ollama

Calls a local Ollama HTTP server.

  • Cost: free (local inference)
  • Config: ollamaHost (default http://localhost:11434), summaryModel (e.g. qwen2.5:7b, llama3, mistral)
  • Requires Ollama running and the model pre-pulled

custom

Runs any command you provide. The prompt is piped to stdin; the command must write a JSON object {"title": "...", "summary": "...", "importance": N} to stdout.

  • Cost: whatever your command does
  • Config: customCommand as an array is recommended (["python", "summarize.py"]); strings are also accepted but go through the platform shell

All five providers go through the same validateResult layer — malformed or empty output is treated as a retryable failure and never written to disk.
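The gist of that layer can be pictured as follows. This is a sketch of the kind of checks described, not the actual code in lib/summarize-session.js, whose rules may differ:

```javascript
// Reject malformed or empty provider output before anything is renamed
// or written. Returning false means "retryable failure".
function validateResult(result) {
  if (!result || typeof result !== 'object') return false;
  if (typeof result.title !== 'string' || !result.title.trim()) return false;
  if (typeof result.summary !== 'string' || !result.summary.trim()) return false;
  if (!Number.isFinite(result.importance)) return false;
  return true;
}
```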

Usage

cc-vellum finalize          # process sessions idle > idleThresholdHours
cc-vellum finalize --now    # process every pending session right now
cc-vellum status            # list pending sessions and their state
cc-vellum config            # print all config (secrets masked)
cc-vellum config <key>      # print one value
cc-vellum config <key> <v>  # set one value
cc-vellum doctor            # 7-point health check with auto-repair
cc-vellum uninstall         # remove hooks and PM2 job (keeps log data)
cc-vellum migrate-from-claude-hooks
                            # move legacy ~/.claude-hooks* state to cc-vellum

cc-vellum doctor checks seven things and auto-repairs them where possible: the config file, the hook wrapper, the wrapper's require target, the settings.json registration, provider availability, the PM2 job, and the log directory.

Log layout

Session archives live inside each project (co-located with the code they describe), while the global logDir holds only cross-project state.

<project-cwd>/.claude/logs/         # per-project, auto-added to .gitignore
  001_26_041210.md                  # while recording: seq + start time
  001_26_041210-041215_slug.md      # after finalize: + end time + slug
  catalog.md                        # per-project table of contents

<logDir>/                           # global, default ~/.cc-vellum-logs
  pending/
    <session-id>.json               # cross-project pending queue
  hook-debug.log                    # debug output
  index.md                          # cross-project index of all projects

Each session Markdown file has a YAML front-matter header with the project id, session id, transcript path, date, importance score, and summary. The body is the rendered transcript at the moment the Stop hook ran.
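A header along those lines might look like the sketch below; the field names and values are inferred from this description, not copied from cc-vellum's actual schema:

```yaml
---
project: my-app                      # project id (illustrative)
session: 3f2b-example-session-id     # Claude Code session id (illustrative)
transcript: ~/.claude/projects/my-app/3f2b-example-session-id.jsonl
date: 2026-04-12
importance: 3
summary: One-paragraph AI summary of the session.
---
```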

The session md is an archive, not the ground truth. Claude Code keeps its own authoritative JSONL transcript at ~/.claude/projects/<proj>/<id>.jsonl; cc-vellum reads that file to build the md snapshot. For very long sessions (>2 MB of transcript), the hot path only reads the last 2 MB to keep per-turn overhead bounded, so the md may show just the tail of the conversation. If a single record ever exceeds the window, the hook falls back to a full read for that turn so nothing gets silently dropped. If you need the complete conversation for any reason, read Claude Code's JSONL file at the path in the front-matter.

One caveat about --now: running cc-vellum finalize --now on a session that is still active will rename its file and split the conversation — any subsequent turns will start a new archive file and be summarized separately on the next finalize run. In normal use you rarely hit this, since finalize (without --now) only processes sessions idle past idleThresholdHours.

Configuration reference

All fields are optional. ~/.cc-vellum.json only stores values that differ from the defaults, so upgrades propagate new defaults automatically.
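For example, a file that overrides only the provider, model, and idle window stays this small (values are illustrative, not recommendations):

```json
{
  "summaryProvider": "ollama",
  "summaryModel": "qwen2.5:7b",
  "idleThresholdHours": 12
}
```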

Field Default Description
logDir ~/.cc-vellum-logs Global state dir: pending/, hook-debug.log, cross-project index.md. Not where session archives themselves live — those stay per-project under <cwd>/.claude/logs/ (see Log layout).
summaryProvider claude-cli claude-cli / openai / lmstudio / ollama / custom
summaryModel haiku Model name (meaning depends on provider)
claudeCliPath auto Path to Claude CLI, or auto to detect at runtime
bashPath auto Path to bash (Windows + claude-cli only)
openaiApiKey "" Also reads OPENAI_API_KEY env var as a fallback
openaiBaseUrl https://api.openai.com/v1 Any OpenAI-compatible endpoint
lmstudioBaseUrl http://localhost:1234/v1 LM Studio server URL
ollamaHost http://localhost:11434 Ollama server URL
customCommand [] Array of [cmd, ...args] for the custom provider
idleThresholdHours 48 How long a session must be idle before auto-summary
pm2CronSchedule 0 0 */2 * * Cron expression for the optional PM2 job
language en Summary language: en or zh
maxRetries 3 Retries on AI call failure
summaryTimeout 90000 AI call timeout in milliseconds
manageGitignore true Auto-append .claude/logs/ to project .gitignore

API keys, environment variables, and PM2

  • For manual cc-vellum finalize, OPENAI_API_KEY takes precedence over the config file.
  • For PM2-triggered runs, only the config file is read. PM2 background processes cannot reliably inherit your shell environment, so if you pick openai + PM2, setup requires the key to be written to the config file (or it will skip PM2 and tell you why).
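That precedence rule can be captured in a few lines. This is a sketch of the rule as stated above, not cc-vellum's code; `triggeredByPm2` is an invented flag:

```javascript
// Interactive runs prefer the environment variable; PM2-triggered runs
// cannot rely on the shell environment, so they read only the config file.
function resolveOpenaiKey(env, config, triggeredByPm2) {
  if (!triggeredByPm2 && env.OPENAI_API_KEY) return env.OPENAI_API_KEY;
  return config.openaiApiKey || '';
}
```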

Optional: PM2 scheduling

If you have PM2 installed, setup can register a cron job (cc-vellum-finalize) that runs finalize on a schedule. PM2 is a convenience — everything cc-vellum does remains fully available without it.

cc-vellum doctor            # will report PM2 status as OK / WARN / FAIL
cc-vellum config pm2CronSchedule "0 3 * * *"   # run daily at 03:00

On Windows you also need pm2 startup for the job to survive reboots.

Security

cc-vellum's redaction is best-effort regex scrubbing applied before any content touches disk. It is defense-in-depth, not a forensic guarantee.

What's caught (see lib/redact.js for the exact patterns):

  • Vendor-prefixed tokens: sk-…, ghp_… / gho_… / ghs_…, glpat-…, xox[abprs]-…, AKIA… / ASIA…
  • JWTs (three base64url segments)
  • PEM private-key blocks (-----BEGIN … PRIVATE KEY----- through the matching -----END marker)
  • Connection strings with inline credentials (postgres://user:pass@…, mongodb+srv://…, mysql://…, redis://…, amqp://…)
  • Authorization: Bearer … and Authorization: Basic … headers
  • Generic token/api_key/password/secret/client_secret key-value pairs in JSON, YAML, and env-style syntax
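A couple of the listed patterns, sketched below. These regexes are illustrative simplifications; lib/redact.js holds the actual (and broader) patterns:

```javascript
// Illustrative subset of best-effort redaction: vendor-prefixed tokens
// and Authorization headers.
const PATTERNS = [
  /\bsk-[A-Za-z0-9_-]{16,}\b/g,              // OpenAI-style keys
  /\bgh[pos]_[A-Za-z0-9]{20,}\b/g,           // GitHub ghp_/gho_/ghs_ tokens
  /\b(AKIA|ASIA)[0-9A-Z]{16}\b/g,            // AWS access key ids
  /Authorization:\s*(Bearer|Basic)\s+\S+/gi, // auth headers
];

function redact(text) {
  return PATTERNS.reduce((out, re) => out.replace(re, '[REDACTED]'), text);
}
```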

What still gets through:

  • Novel vendor-specific token formats with no known prefix
  • Plaintext secrets that don't match any of the patterns above (e.g. user-invented passwords in free-form prose)
  • Credentials inside images, PDFs, or other non-text content

Treat cc-vellum's archive directory like you'd treat your shell history or ~/.bash_history — private, useful, not something you'd publish verbatim. Per-project archives live under <cwd>/.claude/logs/ and the Stop hook auto-adds that path to your .gitignore, so they don't accidentally land in git.

If you hit a pattern that should be redacted but isn't, please open an issue with a sanitized example.

Uninstall

cc-vellum uninstall         # asks for confirmation
cc-vellum uninstall --yes   # no prompt

This removes the settings.json Stop hook entry, the wrapper at ~/.cc-vellum/hook-stop.js, the PM2 job, and the config file. It does not touch logDir — your session archives are kept and the path is printed so you can remove them yourself if you want. Then run npm uninstall -g cc-vellum to remove the CLI.

uninstall only removes hook entries it created — other Claude Code hooks you or another tool have registered are left untouched.

Requirements

  • Node.js 18 or later
  • Claude Code CLI (for logging to work at all, and for the default claude-cli summary provider)
  • On Windows, Git Bash is auto-detected for the claude-cli provider

License

MIT


cc-vellum 中文

一个非官方的 Claude Code Stop 钩子,持久化归档对话并生成 AI 摘要。

它解决两类痛点:

  1. 崩溃恢复 —— 如果 Claude Code 或终端在会话中途崩溃,对话记录已在每轮结束时写入磁盘。你不会丢失对话内容,只是无法继续当前会话。
  2. 语料积累 —— 随着时间推移,你会积累一个可搜索、带摘要的编程对话归档。适合内容创作者、回忆"上个月我是怎么解决那个问题的",或者在重复提问之前先搜索自己的历史记录。

每一轮 Claude Code 会话都会被写入本地 Markdown 归档。当会话进入空闲状态后,AI 摘要器会为其分配标题、重要性评分和一段摘要,然后重命名文件并更新项目目录。摘要器支持多种后端 —— 使用你的 Claude Code 订阅、OpenAI(或 DeepSeek / Groq 等 OpenAI 兼容端点)、本地 Ollama 模型或自定义命令。

  • 零运行时依赖(纯 Node.js、CommonJS,无需构建步骤)
  • 会话文件为带 YAML front-matter 的纯 Markdown
  • 写入磁盘前,对常见密钥格式进行尽力正则脱敏(详见安全性章节 —— 无论如何请将归档视为敏感文件)
  • 可安全地与其他 Claude Code 钩子共存 —— setup / uninstall 只管理自己创建的条目

cc-vellum 与 Anthropic 或 Vellum AI(vellum.ai / vellum npm 包)无关。cc- 前缀表示这是第三方 Claude Code 工具;vellum 取自中世纪羊皮纸 —— 让手稿历经千年依然可读。

v0.1.0 平台状态:在 Windows 11 + Git Bash 上开发和测试。Linux 和 macOS 应当可用(代码中按 process.platform 做了分支处理),但尚未端到端验证 —— 如果你在其他系统上尝试,欢迎提 issue 反馈。

安装

npm install -g cc-vellum
cc-vellum setup

setup 为交互式安装,它会:

  1. 自动检测 Claude Code CLI 路径(Windows 上还会检测 Git Bash 路径)。
  2. 询问使用哪个 AI 摘要后端。
  3. 写入 ~/.cc-vellum.json 配置文件。
  4. 在 ~/.cc-vellum/hook-stop.js 创建稳定的入口包装器,并在 ~/.claude/settings.json 中注册为 Stop 钩子。
  5. 创建日志目录(默认 ~/.cc-vellum-logs)。
  6. 如果安装了 PM2,注册一个 cc-vellum-finalize 定时任务用于自动摘要。PM2 为可选项 —— cc-vellum finalize --now 随时可用。

非交互式安装(脚本 / CI):

cc-vellum setup --yes --provider claude-cli --model haiku --log-dir ~/.cc-vellum-state --no-pm2

工作原理

 Claude Code 回合结束
        │
        ▼
 ~/.cc-vellum/hook-stop.js         ← 注册在 settings.json 中的稳定包装器
        │   require()
        ▼
 lib/chat-logger.js                    ← 写入 <project>/.claude/logs/NNN_YY_MMddHH.md
        │                                 每轮写入,同时创建待处理记录
        ▼
    (稍后,空闲时)
        │
        ▼
 cc-vellum finalize   [或 PM2 定时任务]
        │
        ▼
 lib/summarize-session.js              ← 调用 AI 后端,验证结果,
                                         重命名为 NNN_YY_start-end_slug.md,
                                         更新项目 catalog.md + index.md

钩子本身是热路径代码:它只负责将当前轮次写入磁盘。所有 AI 调用都在后续由 finalize 驱动。

包装器路径永远不变,即使你重装 npm 包或通过 nvm/fnm/volta 切换 Node 版本 —— 下次运行 setup 时只会更新包装器内部的 require(...) 路径。如果包装器目标缺失,钩子会写入调试日志后静默退出,绝不会阻塞 Claude Code。

AI 后端

setup 时选择,或之后通过 cc-vellum config summaryProvider <name> 修改。

claude-cli(默认)

调用你已登录的 Claude Code CLI。无需 API 密钥。

  • 费用:消耗 Claude Code 订阅配额
  • 配置:summaryModel(haiku / sonnet / opus)

openai

POST {baseUrl}/chat/completions,使用 Authorization: Bearer <key> 和 response_format: { type: "json_object" }。兼容任何支持该接口的后端。

  • 费用:消耗 OpenAI(或兼容服务商)API 额度
  • 配置:openaiApiKey、openaiBaseUrl、summaryModel(如 gpt-4o-mini)
  • 已测试:OpenAI、DeepSeek、Groq(通过 openaiBaseUrl)。其他 OpenAI 兼容网关(vLLM、LiteLLM 等)只要支持相同的端点、认证头和 json_object 响应格式,应当也可以正常工作。v0.1.0 不支持 Azure OpenAI —— 其 URL 模板和认证方式不同;请使用 custom 后端或在前面加一层 LiteLLM 代理。
  • 通过 cc-vellum config openaiApiKey 设置密钥 —— 会打开交互式提示,密钥不会进入 shell 历史。cc-vellum config 打印时密钥会被掩码显示(sk-****abcd)。

lmstudio

调用本地 LM Studio 服务,使用其 OpenAI 兼容端点。无需 API 密钥。

  • 费用:免费(本地推理)
  • 配置:lmstudioBaseUrl(默认 http://localhost:1234/v1)、summaryModel
  • 需要 LM Studio 正在运行且已加载模型

ollama

调用本地 Ollama HTTP 服务。

  • 费用:免费(本地推理)
  • 配置:ollamaHost(默认 http://localhost:11434)、summaryModel(如 qwen2.5:7b、llama3、mistral)
  • 需要 Ollama 正在运行且模型已提前拉取

custom

运行你提供的任意命令。提示词通过 stdin 传入;命令需向 stdout 写入 JSON 对象 {"title": "...", "summary": "...", "importance": N}。

  • 费用:取决于你的命令
  • 配置:customCommand 推荐使用数组形式(["python", "summarize.py"]);字符串也可接受,但会经由平台 shell 执行

以上五种后端都经过同一个 validateResult 层 —— 格式错误或空输出被视为可重试失败,绝不会写入磁盘。

使用方法

cc-vellum finalize          # 处理空闲超过 idleThresholdHours 的会话
cc-vellum finalize --now    # 立即处理所有待处理会话
cc-vellum status            # 列出待处理会话及其状态
cc-vellum config            # 打印所有配置(密钥已掩码)
cc-vellum config <key>      # 打印某个配置值
cc-vellum config <key> <v>  # 设置某个配置值
cc-vellum doctor            # 7 项健康检查,支持自动修复
cc-vellum uninstall         # 移除钩子和 PM2 任务(保留日志数据)
cc-vellum migrate-from-claude-hooks
                            # 将旧版 ~/.claude-hooks* 状态迁移到 cc-vellum

cc-vellum doctor 会检查并尽可能自动修复:配置文件、钩子包装器、包装器的 require 目标、settings.json 注册、后端可用性、PM2 任务和日志目录。

日志结构

会话归档存放在每个项目内部(与它们描述的代码放在一起),全局 logDir 只存储跨项目状态。

<project-cwd>/.claude/logs/         # 项目级,自动添加到 .gitignore
  001_26_041210.md                  # 录制中:序号 + 开始时间
  001_26_041210-041215_slug.md      # finalize 后:+ 结束时间 + 简称
  catalog.md                        # 项目级目录

<logDir>/                           # 全局,默认 ~/.cc-vellum-logs
  pending/
    <session-id>.json               # 跨项目待处理队列
  hook-debug.log                    # 调试输出
  index.md                          # 所有项目的跨项目索引

每个会话 Markdown 文件都有 YAML front-matter 头,包含项目 ID、会话 ID、对话记录路径、日期、重要性评分和摘要。正文是 Stop 钩子运行时的对话快照。

会话 md 是归档,不是真实数据源。 Claude Code 的权威 JSONL 对话记录位于 ~/.claude/projects/<proj>/<id>.jsonl;cc-vellum 读取该文件来生成 md 快照。对于超长会话(对话记录 >2 MB),热路径只读取最后 2 MB 以控制每轮开销,因此 md 可能只显示对话的尾部。如果单条记录超过窗口大小,钩子会回退到完整读取以避免静默丢弃数据。如需完整对话,请读取 front-matter 中的 JSONL 文件路径。

关于 --now 的注意事项:对仍在活动的会话运行 cc-vellum finalize --now 会重命名其文件并分割对话 —— 后续轮次将开始新的归档文件,并在下次 finalize 时单独生成摘要。正常使用中很少遇到这种情况,因为不带 --now 的 finalize 只处理空闲超过 idleThresholdHours 的会话。

配置参考

所有字段均为可选。~/.cc-vellum.json 只存储与默认值不同的配置,因此升级会自动继承新的默认值。

字段 默认值 说明
logDir ~/.cc-vellum-logs 全局状态目录:pending/、hook-debug.log、跨项目 index.md。不是会话归档的存放位置 —— 归档在各项目的 <cwd>/.claude/logs/ 下(见日志结构)。
summaryProvider claude-cli claude-cli / openai / lmstudio / ollama / custom
summaryModel haiku 模型名称(含义取决于后端)
claudeCliPath auto Claude CLI 路径,auto 为运行时自动检测
bashPath auto bash 路径(仅 Windows + claude-cli 需要)
openaiApiKey "" 也会读取 OPENAI_API_KEY 环境变量作为回退
openaiBaseUrl https://api.openai.com/v1 任何 OpenAI 兼容端点
lmstudioBaseUrl http://localhost:1234/v1 LM Studio 服务地址
ollamaHost http://localhost:11434 Ollama 服务地址
customCommand [] [cmd, ...args] 数组,用于 custom 后端
idleThresholdHours 48 会话空闲多久后自动摘要
pm2CronSchedule 0 0 */2 * * 可选 PM2 任务的 cron 表达式
language en 摘要语言:en 或 zh
maxRetries 3 AI 调用失败重试次数
summaryTimeout 90000 AI 调用超时(毫秒)
manageGitignore true 自动将 .claude/logs/ 添加到项目 .gitignore

API 密钥、环境变量与 PM2

  • 手动 cc-vellum finalize 时,OPENAI_API_KEY 环境变量优先于配置文件。
  • PM2 触发的运行只读取配置文件。PM2 后台进程无法可靠继承你的 shell 环境变量,因此如果你选择 openai + PM2,setup 会要求将密钥写入配置文件(否则会跳过 PM2 并说明原因)。

可选:PM2 定时任务

如果安装了 PM2,setup 可以注册一个定时任务(cc-vellum-finalize)按计划运行 finalize。PM2 只是便利工具 —— cc-vellum 的所有功能在没有 PM2 的情况下也完全可用。

cc-vellum doctor            # 会报告 PM2 状态为 OK / WARN / FAIL
cc-vellum config pm2CronSchedule "0 3 * * *"   # 每天凌晨 3 点运行

在 Windows 上还需要 pm2 startup 才能让任务在重启后继续运行。

安全性

cc-vellum 的脱敏是尽力而为的正则清洗,在任何内容写入磁盘之前执行。这是纵深防御措施,不是完备的安全保证。

可捕获的内容(详见 lib/redact.js 中的具体模式):

  • 带厂商前缀的令牌:sk-…、ghp_… / gho_… / ghs_…、glpat-…、xox[abprs]-…、AKIA… / ASIA…
  • JWT(三段 base64url 格式)
  • PEM 私钥块(-----BEGIN … PRIVATE KEY----- 至对应的 -----END 标记)
  • 内联凭据的连接字符串(postgres://user:pass@…、mongodb+srv://…、mysql://…、redis://…、amqp://…)
  • Authorization: Bearer … 与 Authorization: Basic … 请求头
  • JSON、YAML 和 env 格式中的通用 token/api_key/password/secret/client_secret 键值对

仍会通过的内容:

  • 没有已知前缀的新厂商令牌格式
  • 不匹配任何上述模式的明文密钥(如用户自创的自由文本密码)
  • 图片、PDF 或其他非文本内容中的凭据

请像对待 shell 历史或 ~/.bash_history 一样对待 cc-vellum 的归档目录 —— 私密、有用,但不要原样发布。项目级归档位于 <cwd>/.claude/logs/ 下,Stop 钩子会自动将该路径添加到 .gitignore,防止意外提交到 git。

如果你遇到应被脱敏但未被捕获的模式,欢迎提 issue 并附上脱敏后的示例。

卸载

cc-vellum uninstall         # 会请求确认
cc-vellum uninstall --yes   # 无确认提示

这会移除 settings.json 中的 Stop 钩子条目、~/.cc-vellum/hook-stop.js 包装器、PM2 任务和配置文件。不会删除 logDir —— 你的会话归档会保留,路径会打印出来以便你自行决定是否删除。之后运行 npm uninstall -g cc-vellum 移除 CLI。

uninstall 只移除自己创建的钩子条目 —— 你或其他工具注册的 Claude Code 钩子不受影响。

系统要求

  • Node.js 18 或更高版本
  • Claude Code CLI(日志功能的前提条件,也是默认 claude-cli 摘要后端的需要)
  • Windows 上,claude-cli 后端会自动检测 Git Bash

许可证

MIT
