Complete reference for all Capstan framework packages. The AI agent package is documented first and in greatest detail; other packages follow in condensed form.
Standalone AI toolkit. Works independently or with the Capstan framework. Includes the smart agent loop, tool validation, token budgets, skills, memory, evolution, compression, harness mode, and utility functions.
Create a fully-configured smart agent with tool validation, token budgets, skills, evolution, and lifecycle hooks.
function createSmartAgent(config: SmartAgentConfig): SmartAgent

SmartAgentConfig — full configuration:
interface SmartAgentConfig {
/** Primary LLM provider (required). */
llm: LLMProvider;
/** Available tools the agent can invoke (required). */
tools: AgentTool[];
/** Background tasks submitted via the task fabric. */
tasks?: AgentTask[] | undefined;
/** Memory backend and scoping configuration. */
memory?: SmartAgentMemoryConfig | undefined;
/** System prompt layering via the prompt composer. */
prompt?: PromptComposerConfig | undefined;
/** Post-response evaluators that can force re-prompting. */
stopHooks?: StopHook[] | undefined;
/** Maximum loop iterations before returning "max_iterations". */
maxIterations?: number | undefined;
/** Context window size in tokens for compression decisions. */
contextWindowSize?: number | undefined;
/** Compaction strategies for managing context length. */
compaction?: Partial<{
snip: SnipConfig;
microcompact: MicrocompactConfig;
autocompact: AutocompactConfig;
}> | undefined;
/** Concurrent tool execution configuration. */
streaming?: StreamingExecutorConfig | undefined;
/** Deferred tool loading when tool count exceeds threshold. */
toolCatalog?: ToolCatalogConfig | undefined;
/** Lifecycle hooks for observability and policy enforcement. */
hooks?: SmartAgentHooks | undefined;
/** Fallback LLM provider used when the primary fails. */
fallbackLlm?: LLMProvider | undefined;
/** Per-turn output token budget. Plain number treated as maxOutputTokensPerTurn. */
tokenBudget?: number | TokenBudgetConfig | undefined;
/** Tool result size limits and overflow persistence. */
toolResultBudget?: ToolResultBudgetConfig | undefined;
/** Registered skills the agent can activate at runtime. */
skills?: AgentSkill[] | undefined;
/** Self-evolution configuration: experience capture, distillation, pruning. */
evolution?: EvolutionConfig | undefined;
/** Timeout and stall detection for LLM calls. */
llmTimeout?: LLMTimeoutConfig | undefined;
}

Usage:
import { createSmartAgent, defineSkill } from "@zauso-ai/capstan-ai";
const agent = createSmartAgent({
llm: myProvider,
tools: [readFile, writeFile],
maxIterations: 20,
fallbackLlm: cheaperProvider,
tokenBudget: { maxOutputTokensPerTurn: 8192, nudgeAtPercent: 85 },
toolResultBudget: { maxChars: 50_000, persistDir: "./overflow" },
llmTimeout: { chatTimeoutMs: 120_000, streamIdleTimeoutMs: 90_000 },
skills: [codeReviewSkill],
evolution: {
store: myEvolutionStore,
capture: "every-run",
distillation: "post-run",
},
});
const result = await agent.run("Refactor the auth module");

The agent interface returned by createSmartAgent.
interface SmartAgent {
/** Execute a goal from scratch. */
run(goal: string): Promise<AgentRunResult>;
/** Resume from a saved checkpoint with a new user message. */
resume(checkpoint: AgentCheckpoint, message: string): Promise<AgentRunResult>;
}

Tool definition with optional input validation and per-tool timeout.
interface AgentTool {
/** Unique tool identifier. */
name: string;
/** LLM-facing description of what this tool does. */
description: string;
/** JSON Schema for tool input arguments. */
parameters?: Record<string, unknown> | undefined;
/** Whether this tool is safe for parallel execution (default: false). */
isConcurrencySafe?: boolean | undefined;
/**
* Failure handling mode.
* - "soft": error is reported to the LLM, loop continues.
* - "hard": error aborts the run immediately.
*/
failureMode?: "soft" | "hard" | undefined;
/** Tool implementation. Receives validated arguments. */
execute(args: Record<string, unknown>): Promise<unknown>;
/**
* Pre-execution validation. Runs before execute().
* If { valid: false }, the tool call is rejected without executing.
*/
validate?: ((args: Record<string, unknown>) => {
valid: boolean;
error?: string;
}) | undefined;
/** Per-tool timeout in milliseconds. Aborts execution if exceeded. */
timeout?: number | undefined;
}

Background task submitted via the task fabric.
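To make the tool contract concrete, here is a self-contained sketch of a tool with validation, a per-tool timeout, and soft failure handling. The AgentTool shape is restated locally so the snippet runs on its own, and the shout tool itself is hypothetical:

```typescript
// Local restatement of the AgentTool shape documented above.
interface AgentTool {
  name: string;
  description: string;
  parameters?: Record<string, unknown>;
  isConcurrencySafe?: boolean;
  failureMode?: "soft" | "hard";
  timeout?: number;
  validate?(args: Record<string, unknown>): { valid: boolean; error?: string };
  execute(args: Record<string, unknown>): Promise<unknown>;
}

// Hypothetical tool: uppercases a string, rejecting empty input in validate().
const shoutTool: AgentTool = {
  name: "shout",
  description: "Uppercase the given text",
  parameters: {
    type: "object",
    required: ["text"],
    properties: { text: { type: "string" } },
  },
  isConcurrencySafe: true, // pure function, safe to run in parallel
  failureMode: "soft",     // errors are reported to the LLM, the loop continues
  timeout: 5_000,
  validate: (args) =>
    typeof args.text === "string" && args.text.length > 0
      ? { valid: true }
      : { valid: false, error: "text must be a non-empty string" },
  async execute(args) {
    return (args.text as string).toUpperCase();
  },
};
```

When validate() returns { valid: false }, the call is rejected before execute() ever runs, so the implementation can assume well-formed input.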
type AgentTaskKind = "shell" | "workflow" | "remote" | "subagent" | "custom";
interface AgentTaskExecutionContext {
signal: AbortSignal;
runId?: string | undefined;
requestId: string;
taskId: string;
order: number;
callStack?: ReadonlySet<string> | undefined;
}
interface AgentTask {
name: string;
description: string;
kind?: AgentTaskKind | undefined;
parameters?: Record<string, unknown> | undefined;
isConcurrencySafe?: boolean | undefined;
failureMode?: "soft" | "hard" | undefined;
execute(
args: Record<string, unknown>,
context: AgentTaskExecutionContext,
): Promise<unknown>;
}

Task factory helpers:
import {
createShellTask,
createWorkflowTask,
createRemoteTask,
createSubagentTask,
} from "@zauso-ai/capstan-ai";

A skill is a high-level strategy that the agent can activate at runtime. Unlike tools (operations with defined inputs/outputs), skills inject strategic guidance into the conversation.
interface AgentSkill {
/** Unique skill identifier. */
name: string;
/** What the skill does. */
description: string;
/** When to use this skill (shown in system prompt). */
trigger: string;
/** Guidance text injected into the conversation on activation. */
prompt: string;
/** Preferred tool names when this skill is active. */
tools?: string[] | undefined;
/** Origin: hand-authored or auto-promoted from evolution. */
source?: "developer" | "evolved" | undefined;
/** Effectiveness score (0.0 - 1.0). */
utility?: number | undefined;
/** Arbitrary extra data. */
metadata?: Record<string, unknown> | undefined;
}

Create a skill with sensible defaults (source: "developer", utility: 1.0).
function defineSkill(def: AgentSkill): AgentSkill

Usage:
import { defineSkill } from "@zauso-ai/capstan-ai";
const codeReviewSkill = defineSkill({
name: "code-review",
description: "Systematic code review with security and performance checks",
trigger: "When reviewing code changes or pull requests",
prompt: "Follow this review checklist: 1) Security vulnerabilities...",
tools: ["read_file", "grep_codebase"],
});

Create a meta-tool named activate_skill that lets the agent activate a skill by name during a run.
function createActivateSkillTool(skills: AgentSkill[]): AgentTool

The returned tool is concurrency-safe and uses soft failure mode. When invoked with { skill_name: "..." }, it returns the skill's description, guidance (prompt text), and preferredTools list.
Format skill descriptions for inclusion in the system prompt. Returns a markdown block listing available skills and their triggers. Returns an empty string when the skills array is empty.
function formatSkillDescriptions(skills: AgentSkill[]): string

The provider interface used by the agent loop, think/generate, and distiller.
interface LLMProvider {
/** Provider name (e.g. "openai", "anthropic"). */
name: string;
/** Send messages and receive a complete response. */
chat(messages: LLMMessage[], options?: LLMOptions): Promise<LLMResponse>;
/** Stream response tokens. Optional — not all providers support streaming. */
stream?(
messages: LLMMessage[],
options?: LLMOptions,
): AsyncIterable<LLMStreamChunk>;
}
interface LLMMessage {
role: "system" | "user" | "assistant";
content: string;
}
interface LLMResponse {
content: string;
model: string;
usage?: {
promptTokens: number;
completionTokens: number;
totalTokens: number;
} | undefined;
finishReason?: string | undefined;
}
interface LLMStreamChunk {
content: string;
done: boolean;
finishReason?: string | undefined;
}

Options passed to LLMProvider.chat() and LLMProvider.stream().
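A deterministic test double is often enough to exercise code built on LLMProvider. The sketch below implements only chat() against a simplified subset of the interfaces above (the echo behavior is purely illustrative):

```typescript
// Simplified local subset of the provider interfaces documented above.
interface LLMMessage { role: "system" | "user" | "assistant"; content: string; }
interface LLMResponse { content: string; model: string; finishReason?: string; }
interface LLMProvider {
  name: string;
  chat(messages: LLMMessage[]): Promise<LLMResponse>;
}

// Echoes the last user message back; useful as a stand-in during tests.
const echoProvider: LLMProvider = {
  name: "echo",
  async chat(messages) {
    const last = [...messages].reverse().find((m) => m.role === "user");
    return { content: last?.content ?? "", model: "echo-1", finishReason: "stop" };
  },
};
```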
interface LLMOptions {
/** Override model for this call. */
model?: string | undefined;
/** Sampling temperature. */
temperature?: number | undefined;
/** Maximum tokens to generate. */
maxTokens?: number | undefined;
/** System prompt (prepended as a system message). */
systemPrompt?: string | undefined;
/** Response format hint (e.g. for JSON mode). */
responseFormat?: Record<string, unknown> | undefined;
/** AbortSignal for cancellation support. */
signal?: AbortSignal | undefined;
}

Timeout and stall detection for LLM calls.
interface LLMTimeoutConfig {
/** Max wait for chat() response. Default: 120_000 (2 minutes). */
chatTimeoutMs?: number | undefined;
/** Max idle gap between stream chunks. Default: 90_000 (90 seconds). */
streamIdleTimeoutMs?: number | undefined;
/** Emit warning after this idle gap. Default: 30_000 (30 seconds). */
stallWarningMs?: number | undefined;
}

Per-turn output token budget with nudge behavior.
interface TokenBudgetConfig {
/** Hard cap on output tokens per LLM call. */
maxOutputTokensPerTurn: number;
/** Inject a "wrapping up" nudge at this percentage of budget. */
nudgeAtPercent?: number | undefined;
}

When SmartAgentConfig.tokenBudget is set to a plain number, it is treated as { maxOutputTokensPerTurn: n }.
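The plain-number shorthand amounts to a one-line normalization, sketched here (the helper name is hypothetical; only the documented shorthand is assumed):

```typescript
interface TokenBudgetConfig {
  maxOutputTokensPerTurn: number;
  nudgeAtPercent?: number;
}

// Hypothetical helper mirroring the documented shorthand:
// a bare number becomes { maxOutputTokensPerTurn: n }.
function normalizeTokenBudget(
  budget: number | TokenBudgetConfig,
): TokenBudgetConfig {
  return typeof budget === "number"
    ? { maxOutputTokensPerTurn: budget }
    : budget;
}
```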
Limits on tool result sizes to prevent context overflow.
interface ToolResultBudgetConfig {
/** Max characters per individual tool result. */
maxChars: number;
/** Attempt to preserve JSON structure when truncating. */
preserveStructure?: boolean | undefined;
/** Directory to persist overflow results to disk. Oversized results are
* written to a file and replaced with a reference in the conversation. */
persistDir?: string | undefined;
/** Total characters across all tool results per iteration. Default: 200_000. */
maxAggregateCharsPerIteration?: number | undefined;
}

Lifecycle hooks for observability, policy enforcement, and post-run processing.
interface SmartAgentHooks {
/** Before each tool execution. Return { allowed: false } to block. */
beforeToolCall?: ((
tool: string,
args: unknown,
) => Promise<{ allowed: boolean; reason?: string | undefined }>) | undefined;
/** After each tool execution. Receives status indicating success or error. */
afterToolCall?: ((
tool: string,
args: unknown,
result: unknown,
status: "success" | "error",
) => Promise<void>) | undefined;
/** Before each background task submission. Return { allowed: false } to block. */
beforeTaskCall?: ((
task: string,
args: unknown,
) => Promise<{ allowed: boolean; reason?: string | undefined }>) | undefined;
/** After each background task completes. */
afterTaskCall?: ((
task: string,
args: unknown,
result: unknown,
) => Promise<void>) | undefined;
/** After each checkpoint is created. May mutate and return it. */
onCheckpoint?: ((
checkpoint: AgentCheckpoint,
) => Promise<AgentCheckpoint | void>) | undefined;
/** When a memory-worthy event occurs. */
onMemoryEvent?: ((content: string) => Promise<void>) | undefined;
/** At each phase boundary. Return "pause" or "cancel" to interrupt the run. */
getControlState?: ((
phase: "before_llm" | "before_tool" | "after_tool" | "during_task_wait",
checkpoint: AgentCheckpoint,
) => Promise<{
action: "continue" | "pause" | "cancel";
reason?: string | undefined;
}>) | undefined;
/** After the run finishes (any status). Useful for logging or evolution. */
onRunComplete?: ((result: AgentRunResult) => Promise<void>) | undefined;
/** After each loop iteration. Receives a snapshot with token estimates. */
afterIteration?: ((snapshot: IterationSnapshot) => Promise<void>) | undefined;
}

| Hook | When it fires |
|---|---|
| beforeToolCall | Before each tool execution. Return allowed: false to block. |
| afterToolCall | After each tool execution. Receives status indicating success or error. |
| beforeTaskCall | Before each background task submission. |
| afterTaskCall | After each background task completes. |
| onCheckpoint | After each checkpoint is created. May mutate and return it. |
| onMemoryEvent | When a memory-worthy event occurs. |
| getControlState | At each phase boundary. Return "pause" or "cancel" to interrupt. |
| onRunComplete | After the run finishes (any status). |
| afterIteration | After each loop iteration with token estimates. |
Result returned by SmartAgent.run() and SmartAgent.resume().
type AgentRunStatus =
| "completed"
| "max_iterations"
| "approval_required"
| "paused"
| "canceled"
| "fatal";
interface AgentRunResult {
/** The agent's final output (text or structured). */
result: unknown;
/** Number of loop iterations executed. */
iterations: number;
/** All tool calls made during the run. */
toolCalls: AgentToolCallRecord[];
/** All task calls made during the run. */
taskCalls: AgentTaskCallRecord[];
/** Terminal status of the run. */
status: AgentRunStatus;
/** Error message when status is "fatal". */
error?: string | undefined;
/** Resumable checkpoint (present for paused/canceled/approval_required). */
checkpoint?: AgentCheckpoint | undefined;
/** Details of the blocked approval (when status is "approval_required"). */
pendingApproval?: {
kind: "tool" | "task";
tool: string;
args: unknown;
reason: string;
} | undefined;
}
interface AgentToolCallRecord {
tool: string;
args: unknown;
result: unknown;
requestId?: string | undefined;
order?: number | undefined;
status?: "success" | "error" | undefined;
}
interface AgentTaskCallRecord {
task: string;
args: unknown;
result: unknown;
requestId?: string | undefined;
taskId?: string | undefined;
order?: number | undefined;
status?: "success" | "error" | "canceled" | undefined;
kind?: AgentTaskKind | undefined;
}

Serializable checkpoint for pause/resume workflows.
interface AgentCheckpoint {
/** Current stage of the agent run. */
stage:
| "initialized"
| "tool_result"
| "task_wait"
| "approval_required"
| "paused"
| "completed"
| "max_iterations"
| "canceled";
/** The original goal. */
goal: string;
/** Full message history at checkpoint time. */
messages: LLMMessage[];
/** Number of iterations completed. */
iterations: number;
/** All tool calls up to this point. */
toolCalls: AgentToolCallRecord[];
/** All task calls up to this point. */
taskCalls: AgentTaskCallRecord[];
/** Current max output tokens setting (may have been escalated). */
maxOutputTokens: number;
/** Compaction state counters. */
compaction: {
autocompactFailures: number;
reactiveCompactRetries: number;
tokenEscalations: number;
};
/** Details of the pending approval, if any. */
pendingApproval?: {
kind: "tool" | "task";
tool: string;
args: unknown;
reason: string;
} | undefined;
}

Three compaction strategies for managing context window usage.
Drop middle messages, keeping the system prompt and a tail window.
interface SnipConfig {
/** Number of recent messages to preserve. */
preserveTail: number;
}

Truncate individual tool results that exceed a character limit.
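The snip strategy above can be pictured in a few lines: keep a leading system message plus the preserveTail most recent messages and drop the middle. This is a simplified sketch of the documented behavior, not the library's actual implementation:

```typescript
interface LLMMessage { role: "system" | "user" | "assistant"; content: string; }

// Simplified snip: preserve the system prompt and a tail window of
// `preserveTail` recent messages; everything in between is dropped.
function snip(messages: LLMMessage[], preserveTail: number): LLMMessage[] {
  if (messages.length <= preserveTail + 1) return messages;
  const head = messages[0].role === "system" ? [messages[0]] : [];
  const tail = messages.slice(messages.length - preserveTail);
  return [...head, ...tail];
}
```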
interface MicrocompactConfig {
/** Max characters per tool result before truncation. */
maxToolResultChars: number;
/** Number of recent messages whose tool results are protected. */
protectedTail: number;
}

LLM-driven summarization when context usage exceeds a threshold.
interface AutocompactConfig {
/** Context usage ratio (0.0 - 1.0) that triggers compaction. */
threshold: number;
/** Max consecutive compaction failures before giving up. */
maxFailures: number;
/** Extra token headroom to maintain after compaction. */
bufferTokens?: number | undefined;
}

interface StreamingExecutorConfig {
/** Max concurrent tool executions per iteration. */
maxConcurrency: number;
}

interface ToolCatalogConfig {
/** When tool count exceeds this, defer non-essential tools. */
deferThreshold: number;
}

Snapshot provided to the afterIteration hook.
interface IterationSnapshot {
iteration: number;
messages: LLMMessage[];
toolCalls: AgentToolCallRecord[];
estimatedTokens: number;
}

type ModelFinishReason =
| "stop"
| "tool_use"
| "max_output_tokens"
| "context_limit"
| "error";

Internal type used by the streaming executor and engine.
interface ToolRequest {
id: string;
name: string;
args: Record<string, unknown>;
order: number;
}

Post-response evaluators that can force re-prompting.
interface StopHook {
name: string;
evaluate(context: StopHookContext): Promise<StopHookResult>;
}
interface StopHookContext {
response: string;
messages: LLMMessage[];
toolCalls: AgentToolCallRecord[];
goal: string;
}
interface StopHookResult {
/** true = response accepted, false = inject feedback and re-prompt. */
pass: boolean;
feedback?: string | undefined;
}

Layered system prompt composition.
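A stop hook is easy to hand-roll against the StopHook contract above. This self-contained sketch (local restatement of a simplified subset; the DONE marker is hypothetical) rejects any response missing a completion marker and feeds the requirement back for a re-prompt:

```typescript
interface StopHookContext {
  response: string;
  messages: { role: string; content: string }[];
  toolCalls: unknown[];
  goal: string;
}
interface StopHookResult { pass: boolean; feedback?: string; }
interface StopHook {
  name: string;
  evaluate(context: StopHookContext): Promise<StopHookResult>;
}

// Fails the response unless it contains a DONE marker; the feedback string
// is injected into the conversation and the agent is re-prompted.
const requireDoneMarker: StopHook = {
  name: "require-done-marker",
  async evaluate(ctx) {
    return ctx.response.includes("DONE")
      ? { pass: true }
      : { pass: false, feedback: "End your final answer with DONE." };
  },
};
```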
interface PromptComposerConfig {
/** Base system prompt text. */
base?: string | undefined;
/** Static prompt layers. */
layers?: PromptLayer[] | undefined;
/** Dynamic layers computed at each iteration. */
dynamicLayers?: ((context: PromptContext) => PromptLayer[]) | undefined;
}
interface PromptLayer {
/** Unique layer identifier. */
id: string;
/** Layer content text. */
content: string;
/** Where to place this layer relative to the base prompt. */
position: "prepend" | "append" | "replace_base";
/** Higher priority layers are placed first within their position group. */
priority?: number | undefined;
}
interface PromptContext {
tools: AgentTool[];
iteration: number;
memories: string[];
tokenBudget: number;
}

Pluggable backend interface for memory storage. Implement for custom backends (Redis, Mem0, etc.).
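To see how the PromptLayer fields above interact, here is a simplified composer: replace_base swaps out the base prompt, then prepend/append layers are ordered by descending priority within their group. This is an illustrative sketch of the documented semantics, not the actual composer:

```typescript
interface PromptLayer {
  id: string;
  content: string;
  position: "prepend" | "append" | "replace_base";
  priority?: number;
}

// Simplified assembly: replace_base wins over the base text; prepend and
// append groups are each sorted so higher-priority layers come first.
function composePrompt(base: string, layers: PromptLayer[]): string {
  const replacement = layers.find((l) => l.position === "replace_base");
  const effectiveBase = replacement ? replacement.content : base;
  const byPriority = (a: PromptLayer, b: PromptLayer) =>
    (b.priority ?? 0) - (a.priority ?? 0);
  const prepends = layers.filter((l) => l.position === "prepend").sort(byPriority);
  const appends = layers.filter((l) => l.position === "append").sort(byPriority);
  return [
    ...prepends.map((l) => l.content),
    effectiveBase,
    ...appends.map((l) => l.content),
  ].join("\n\n");
}
```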
interface MemoryBackend {
store(entry: Omit<MemoryEntry, "id" | "createdAt">): Promise<string>;
query(scope: MemoryScope, text: string, k: number): Promise<MemoryEntry[]>;
remove(id: string): Promise<boolean>;
clear(scope: MemoryScope): Promise<void>;
}

interface MemoryEntry {
id: string;
content: string;
scope: MemoryScope;
embedding?: number[] | undefined;
createdAt: string;
importance?: string | undefined;
type?: string | undefined;
accessCount?: number | undefined;
metadata?: Record<string, unknown> | undefined;
status?: "active" | "superseded" | undefined;
supersededBy?: string | undefined;
}

interface MemoryScope {
type: string;
id: string;
}

Developer-facing memory interface returned by createMemoryAccessor().
interface MemoryAccessor {
/** Store a memory. Returns the memory ID. */
remember(content: string, opts?: RememberOptions): Promise<string>;
/** Retrieve relevant memories via hybrid search. */
recall(query: string, opts?: RecallOptions): Promise<MemoryEntry[]>;
/** Delete a memory by ID. */
forget(entryId: string): Promise<boolean>;
/** Return a new MemoryAccessor scoped to a specific entity. */
about(type: string, id: string): MemoryAccessor;
/** Build an LLM-ready context string from stored memories within a token budget. */
assembleContext(opts: AssembleContextOptions): Promise<string>;
}
interface RememberOptions {
scope?: MemoryScope | undefined;
importance?: string | undefined;
type?: string | undefined;
metadata?: Record<string, unknown> | undefined;
}
interface RecallOptions {
scope?: MemoryScope | undefined;
limit?: number | undefined;
}
interface AssembleContextOptions {
query: string;
maxTokens?: number | undefined;
scopes?: MemoryScope[] | undefined;
}

interface MemoryEmbedder {
embed(texts: string[]): Promise<number[][]>;
dimensions: number;
}

Memory configuration for the smart agent.
interface SmartAgentMemoryConfig {
/** Memory storage backend (required). */
store: MemoryBackend;
/** Default scope for memory operations. */
scope: MemoryScope;
/** Additional scopes to read from during context assembly. */
readScopes?: MemoryScope[] | undefined;
/** Embedding model for vector search. */
embedding?: MemoryEmbedder | undefined;
/** Max tokens for assembled memory context. */
maxMemoryTokens?: number | undefined;
/** Save a session summary after run completion. */
saveSessionSummary?: boolean | undefined;
/** Memory reconciler — "llm" uses the agent's LLM provider, or pass a custom MemoryReconciler. */
reconciler?: "llm" | MemoryReconciler | undefined;
}

LLM-driven memory lifecycle manager. When a new fact is stored, the reconciler sends ALL active memories in scope to the LLM and lets it decide which existing memories to keep, supersede, revise, or remove.
interface MemoryReconciler {
reconcile(
newContent: string,
existingMemories: MemoryEntry[],
): Promise<ReconcileResult>;
}
type MemoryOperationAction = "keep" | "supersede" | "revise" | "remove";
interface MemoryOperation {
id: string;
action: MemoryOperationAction;
reason: string;
revised?: string | undefined;
context?: string | undefined;
}
interface ReconcileResult {
operations: MemoryOperation[];
newMemories: string[];
}

Built-in reconciler that uses the agent's LLM provider. Sends all active (non-superseded) memories plus the new fact to the model and parses the structured response.
class LlmMemoryReconciler implements MemoryReconciler {
constructor(llm: LLMProvider);
}

Usage:
const agent = createSmartAgent({
llm: myProvider,
tools: [...],
memory: {
store: new BuiltinMemoryBackend(),
scope: { type: "agent", id: "my-agent" },
reconciler: "llm", // shorthand — uses the agent's LLM
},
});

Reconcile a new fact against existing memories and store the result. Queries all active memories in scope, lets the reconciler judge relationships, applies operations, then stores the new fact and any derived memories.
function reconcileAndStore(
backend: MemoryBackend,
scope: MemoryScope,
newContent: string,
reconciler: MemoryReconciler,
): Promise<{ storedId: string; operations: MemoryOperation[] }>

Default in-memory backend with optional vector search. Suitable for development and testing.
class BuiltinMemoryBackend implements MemoryBackend {
constructor(opts?: { embedding?: MemoryEmbedder });
}

Features: keyword-only fallback when no embedder is provided, hybrid search (vector + keyword + recency decay) when an embedder is present, auto-dedup at >0.92 cosine similarity.
class SqliteMemoryBackend implements MemoryBackend { ... }
function createSqliteMemoryStore(path: string): SqliteMemoryBackend

function createMemoryAccessor(
backend: MemoryBackend,
scope: MemoryScope,
embedder?: MemoryEmbedder,
): MemoryAccessor

Usage:
const customerMemory = createMemoryAccessor(backend, { type: "customer", id: "cust_123" });
await customerMemory.remember("Prefers email communication", { type: "preference" });
const relevant = await customerMemory.recall("communication preferences");

Self-evolving agent primitives. The evolution engine records run experiences, distills strategies from patterns, and promotes high-utility strategies into skills.
interface EvolutionConfig {
/** Persistence backend for experiences, strategies, and skills (required). */
store: EvolutionStore;
/** When to record run experiences. */
capture?:
| "every-run"
| "on-failure"
| "on-success"
| ((result: AgentRunResult) => boolean)
| undefined;
/** When to distill strategies from experiences. */
distillation?: "post-run" | "manual" | undefined;
/** Custom distiller implementation (default: LlmDistiller). */
distiller?: Distiller | undefined;
/** Strategy pruning rules. */
pruning?: PruningConfig | undefined;
/** Auto-promote high-utility strategies into skills. */
skillPromotion?: SkillPromotionConfig | undefined;
}

Persistence interface for experiences, strategies, and evolved skills.
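The capture modes listed in EvolutionConfig reduce to a simple decision, sketched below. The stand-in result type only inspects status, and treating every non-completed status as a failure is an assumption of this sketch:

```typescript
type CaptureMode =
  | "every-run"
  | "on-failure"
  | "on-success"
  | ((result: { status: string }) => boolean);

// Simplified decision mirroring the documented capture modes.
// Assumption: "failure" here means any status other than "completed".
function captureDecision(mode: CaptureMode, result: { status: string }): boolean {
  if (typeof mode === "function") return mode(result);
  if (mode === "every-run") return true;
  if (mode === "on-failure") return result.status !== "completed";
  return result.status === "completed"; // "on-success"
}
```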
interface EvolutionStore {
recordExperience(
exp: Omit<Experience, "id" | "recordedAt">,
): Promise<string>;
queryExperiences(query: ExperienceQuery): Promise<Experience[]>;
storeStrategy(
strategy: Omit<Strategy, "id" | "createdAt" | "updatedAt">,
): Promise<string>;
queryStrategies(query: string, k: number): Promise<Strategy[]>;
updateStrategyUtility(id: string, delta: number): Promise<void>;
incrementStrategyApplications(id: string): Promise<void>;
storeSkill(skill: AgentSkill): Promise<string>;
querySkills(query: string, k: number): Promise<AgentSkill[]>;
pruneStrategies(config: PruningConfig): Promise<number>;
getStats(): Promise<EvolutionStats>;
}

Two built-in store implementations:
import {
InMemoryEvolutionStore,
SqliteEvolutionStore,
createSqliteEvolutionStore,
} from "@zauso-ai/capstan-ai";
// In-memory (testing / ephemeral)
const memStore = new InMemoryEvolutionStore();
// SQLite (production persistence)
const sqliteStore = createSqliteEvolutionStore("./evolution.db");

Structured run trajectory recorded by the evolution engine.
interface Experience {
id: string;
goal: string;
outcome: "success" | "failure" | "partial";
trajectory: TrajectoryStep[];
iterations: number;
tokenUsage: number;
duration: number; // Milliseconds
skillsUsed: string[];
recordedAt: string; // ISO date
metadata?: Record<string, unknown> | undefined;
}

Distilled insight derived from multiple experiences.
interface Strategy {
id: string;
content: string; // Actionable strategy description
source: string[]; // Which experiences it was derived from
utility: number; // Effectiveness score (higher = better)
applications: number; // Times this strategy has been applied
createdAt: string;
updatedAt: string;
}

interface TrajectoryStep {
tool: string;
args: Record<string, unknown>;
result: unknown;
status: "success" | "error";
iteration: number;
}

interface ExperienceQuery {
goal?: string | undefined;
outcome?: "success" | "failure" | "partial" | undefined;
limit?: number | undefined;
since?: string | undefined; // ISO date string
}

interface PruningConfig {
maxStrategies?: number | undefined; // Cap on total strategies
minUtility?: number | undefined; // Prune below this utility score
maxAgeDays?: number | undefined; // Prune strategies older than this
}

interface SkillPromotionConfig {
enabled?: boolean | undefined; // Enable auto-promotion
minApplications?: number | undefined; // Min times applied before promotion (default: 5)
minUtility?: number | undefined; // Min utility score for promotion (default: 0.7)
}

interface EvolutionStats {
totalExperiences: number;
totalStrategies: number;
totalEvolvedSkills: number;
averageUtility: number;
}

The Distiller interface abstracts strategy extraction from experiences.
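Skill promotion eligibility reduces to a threshold check against the documented defaults (minApplications: 5, minUtility: 0.7). The helper name below is hypothetical:

```typescript
interface SkillPromotionConfig {
  enabled?: boolean;
  minApplications?: number;
  minUtility?: number;
}

// A strategy qualifies for promotion once it has been applied often enough
// and its utility score clears the bar. Defaults match the reference above.
function qualifiesForPromotion(
  strategy: { applications: number; utility: number },
  config: SkillPromotionConfig,
): boolean {
  if (!config.enabled) return false;
  const minApplications = config.minApplications ?? 5; // documented default
  const minUtility = config.minUtility ?? 0.7;         // documented default
  return strategy.applications >= minApplications && strategy.utility >= minUtility;
}
```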
interface Distiller {
/** Extract generalizable strategies from a set of experiences. */
distill(
experiences: Experience[],
): Promise<Omit<Strategy, "id" | "createdAt" | "updatedAt">[]>;
/** Merge overlapping/redundant strategies into a consolidated set (max 10). */
consolidate(
strategies: Strategy[],
): Promise<Omit<Strategy, "id" | "createdAt" | "updatedAt">[]>;
}

LlmDistiller is the built-in implementation that uses an LLM to analyze execution traces and produce strategies.
import { LlmDistiller } from "@zauso-ai/capstan-ai";
const distiller = new LlmDistiller(myLlmProvider);
const strategies = await distiller.distill(experiences);
const consolidated = await distiller.consolidate(existingStrategies);

/** Build a structured experience from a run result. */
function buildExperience(
goal: string,
result: AgentRunResult,
startTime: number,
skillsUsed: string[],
): Omit<Experience, "id" | "recordedAt">
/** Decide whether to capture based on EvolutionConfig.capture. */
function shouldCapture(
config: EvolutionConfig,
result: AgentRunResult,
): boolean
/**
* Full post-run pipeline: record experience, update strategy utilities,
* distill new strategies, prune, and promote to skills.
* Fire-and-forget safe -- evolution failures never crash the agent.
*/
async function runPostRunEvolution(
config: EvolutionConfig,
llm: LLMProvider,
goal: string,
result: AgentRunResult,
startTime: number,
skillsUsed: string[],
retrievedStrategies: Strategy[],
): Promise<void>
/** Create a PromptLayer from learned strategies for injection into the system prompt.
* Returns null when no strategies are provided. */
function buildStrategyLayer(strategies: Strategy[]): PromptLayer | null

Usage:
import {
createSmartAgent,
createSqliteEvolutionStore,
} from "@zauso-ai/capstan-ai";
const agent = createSmartAgent({
llm: myProvider,
tools: [readFile, writeFile, searchCode],
evolution: {
store: createSqliteEvolutionStore("./agent-evolution.db"),
capture: "every-run",
distillation: "post-run",
pruning: { maxStrategies: 50, minUtility: 0.2, maxAgeDays: 90 },
skillPromotion: { enabled: true, minApplications: 5, minUtility: 0.7 },
},
});

Lightweight JSON Schema validator for tool input arguments. Checks required fields, types (string, number, integer, boolean, array, object), and enum constraints. Collects ALL errors rather than failing on the first.
function validateArgs(
args: Record<string, unknown>,
schema: Record<string, unknown> | undefined,
): { valid: boolean; error?: string }

Returns { valid: true } when schema is undefined (no validation). Extra fields not in the schema are permissively ignored.
Usage:
import { validateArgs } from "@zauso-ai/capstan-ai";
const result = validateArgs(
{ path: "/tmp/file.txt", mode: "read" },
{
type: "object",
required: ["path"],
properties: {
path: { type: "string" },
mode: { type: "string", enum: ["read", "write"] },
},
},
);
// result.valid === true

Normalize an LLMMessage[] array before sending to the LLM API.
function normalizeMessages(messages: LLMMessage[]): LLMMessage[]

Invariants enforced:
- No consecutive messages with the same role (merged)
- Empty-content messages filtered out
- System messages after the first are converted to user messages (most APIs only allow one system message at the start)
Rough token estimate: sums all message content lengths and divides by 4.
function estimateTokens(messages: LLMMessage[]): number

Compute memory age in days from a Unix timestamp in milliseconds.
function memoryAgeDays(timestampMs: number): number

Human-readable age string: "today", "yesterday", or "N days ago".
function memoryAge(timestampMs: number): string

Staleness caveat text for the LLM. Returns an empty string for memories one day old or less. For older memories, returns a warning to verify claims against current code before asserting as fact.
function memoryFreshnessText(timestampMs: number): string

Standalone AI primitives. No agent loop -- single LLM call.
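The documented behavior of the utility functions above is simple enough to restate locally. The reimplementation sketch below mirrors the spec (the rounding in the token estimate is a guess; the library may round differently):

```typescript
interface LLMMessage { role: "system" | "user" | "assistant"; content: string; }

// Rough estimate per the spec above: total content length divided by 4.
// Rounding up here is an assumption.
function estimateTokensLocal(messages: LLMMessage[]): number {
  const chars = messages.reduce((sum, m) => sum + m.content.length, 0);
  return Math.ceil(chars / 4);
}

// "today", "yesterday", or "N days ago", per the memoryAge description.
function memoryAgeLocal(timestampMs: number, nowMs = Date.now()): string {
  const days = Math.floor((nowMs - timestampMs) / 86_400_000);
  if (days <= 0) return "today";
  if (days === 1) return "yesterday";
  return `${days} days ago`;
}
```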
Structured reasoning: sends a prompt to the LLM and optionally parses the response against a schema.
function think<T = string>(
llm: LLMProvider,
prompt: string,
opts?: ThinkOptions<T>,
): Promise<T>
interface ThinkOptions<T = unknown> {
schema?: { parse: (data: unknown) => T } | undefined;
model?: string | undefined;
temperature?: number | undefined;
maxTokens?: number | undefined;
systemPrompt?: string | undefined;
}

When schema is provided, the LLM is asked for JSON output and the result is parsed and validated. Without a schema, the raw text is returned.
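Any object exposing parse() works as a schema: a Zod schema fits, as does a hand-rolled validator. The sketch below reproduces the parse-and-validate flow locally with a stub provider (thinkLocal is a stand-in written for this example, not the package's think):

```typescript
// Stub provider that always returns a fixed JSON payload.
const stubLlm = {
  name: "stub",
  async chat(): Promise<{ content: string }> {
    return { content: '{"sentiment":"positive","confidence":0.9}' };
  },
};

// Hand-rolled schema: parse() throws on invalid data, returns typed data otherwise.
const sentimentSchema = {
  parse(data: unknown): { sentiment: string; confidence: number } {
    const obj = data as { sentiment?: unknown; confidence?: unknown };
    if (typeof obj?.sentiment !== "string" || typeof obj?.confidence !== "number") {
      throw new Error("invalid sentiment payload");
    }
    return { sentiment: obj.sentiment, confidence: obj.confidence };
  },
};

// Local stand-in for think(llm, prompt, { schema }): request a response,
// JSON-parse it, then validate through the schema.
async function thinkLocal<T>(
  llm: typeof stubLlm,
  prompt: string,
  schema: { parse: (data: unknown) => T },
): Promise<T> {
  const res = await llm.chat();
  return schema.parse(JSON.parse(res.content));
}
```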
Text generation: sends a prompt and returns the raw text response.
function generate(
llm: LLMProvider,
prompt: string,
opts?: GenerateOptions,
): Promise<string>
interface GenerateOptions {
model?: string | undefined;
temperature?: number | undefined;
maxTokens?: number | undefined;
systemPrompt?: string | undefined;
}

Streaming text generation. Requires the LLM provider to support stream(). Yields text chunks as tokens are generated.
function thinkStream(
llm: LLMProvider,
prompt: string,
opts?: GenerateOptions,
): AsyncIterable<string>

Alias for thinkStream.
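Consuming a token stream is a for await loop over the chunks. The provider below fakes stream() with a local async generator so the snippet is self-contained (a stub, not a real provider):

```typescript
interface LLMStreamChunk { content: string; done: boolean; }

// Fake provider: yields three chunks, then a final done chunk.
const streamingStub = {
  name: "stub-stream",
  async *stream(): AsyncIterable<LLMStreamChunk> {
    for (const piece of ["Hel", "lo ", "world"]) {
      yield { content: piece, done: false };
    }
    yield { content: "", done: true };
  },
};

// Accumulate streamed chunks into the full response text.
async function collect(chunks: AsyncIterable<LLMStreamChunk>): Promise<string> {
  let text = "";
  for await (const chunk of chunks) text += chunk.content;
  return text;
}
```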
function generateStream(
llm: LLMProvider,
prompt: string,
opts?: GenerateOptions,
): AsyncIterable<string>

Durable harness runtime for long-running agents. Adds browser/filesystem sandboxes, verification hooks, persisted runs/events/artifacts/checkpoints, and runtime lifecycle control on top of the smart agent loop. See docs/harness.md for full documentation.
function createHarness(config: HarnessConfig): Promise<Harness>
interface Harness {
startRun(config: AgentRunConfig): Promise<HarnessRunHandle>;
run(config: AgentRunConfig): Promise<HarnessRunResult>;
pauseRun(runId: string): Promise<HarnessRunRecord>;
cancelRun(runId: string): Promise<HarnessRunRecord>;
resumeRun(runId: string, options?: HarnessResumeOptions): Promise<HarnessRunResult>;
getRun(runId: string): Promise<HarnessRunRecord | undefined>;
listRuns(): Promise<HarnessRunRecord[]>;
getEvents(runId?: string): Promise<HarnessRunEventRecord[]>;
getTasks(runId: string): Promise<HarnessTaskRecord[]>;
getArtifacts(runId: string): Promise<HarnessArtifactRecord[]>;
getCheckpoint(runId: string): Promise<AgentCheckpoint | undefined>;
getSessionMemory(runId: string): Promise<HarnessSessionMemoryRecord | undefined>;
getLatestSummary(runId: string): Promise<HarnessSummaryRecord | undefined>;
listSummaries(runId?: string): Promise<HarnessSummaryRecord[]>;
rememberMemory(input: HarnessMemoryInput): Promise<HarnessMemoryRecord>;
recallMemory(query: HarnessMemoryQuery): Promise<HarnessMemoryMatch[]>;
assembleContext(runId: string, options?: HarnessContextAssembleOptions): Promise<HarnessContextPackage>;
replayRun(runId: string): Promise<HarnessReplayReport>;
getPaths(): HarnessRuntimePaths;
destroy(): Promise<void>;
}
Usage:
import { createHarness } from "@zauso-ai/capstan-ai";
const harness = await createHarness({
llm: openaiProvider({ apiKey: process.env.OPENAI_API_KEY! }),
sandbox: {
browser: { engine: "camoufox", platform: "jd", accountId: "price-monitor-01" },
fs: { rootDir: "./workspace" },
},
runtime: { rootDir: process.cwd(), maxConcurrentRuns: 2 },
verify: { enabled: true },
});
const started = await harness.startRun({ goal: "Research and save notes" });
const result = await started.result;
await harness.destroy();
HarnessConfig:
interface HarnessConfig {
llm: LLMProvider;
sandbox?: {
browser?: boolean | BrowserSandboxConfig;
fs?: boolean | FsSandboxConfig;
};
verify?: {
enabled?: boolean;
maxRetries?: number;
verifier?: HarnessVerifierFn;
};
observe?: {
logger?: HarnessLogger;
onEvent?: (event: HarnessEvent) => void;
};
context?: {
enabled?: boolean;
maxPromptTokens?: number;
reserveOutputTokens?: number;
maxMemories?: number;
maxArtifacts?: number;
maxRecentMessages?: number;
maxRecentToolResults?: number;
microcompactToolResultChars?: number;
sessionCompactThreshold?: number;
defaultScopes?: MemoryScope[];
autoPromoteObservations?: boolean;
autoPromoteSummaries?: boolean;
};
runtime?: {
rootDir?: string;
maxConcurrentRuns?: number;
driver?: HarnessSandboxDriver;
beforeToolCall?: HarnessToolPolicyFn;
beforeTaskCall?: HarnessTaskPolicyFn;
};
}
interface BrowserSandboxConfig {
engine?: "playwright" | "camoufox";
platform?: string;
accountId?: string;
guardMode?: "vision" | "hybrid";
headless?: boolean;
proxy?: string;
viewport?: { width: number; height: number };
}
interface FsSandboxConfig {
rootDir: string;
allowWrite?: boolean;
allowDelete?: boolean;
maxFileSize?: number;
}
The runtime store persists under .capstan/harness/: runs/, events/, tasks/, artifacts/, checkpoints/, session-memory/, summaries/, memory/.
Use openHarnessRuntime(rootDir?) for an independent control plane that can inspect paused/completed runs without a live harness instance. Accepts an optional authorize callback for runtime supervision with auth.
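A control-plane sketch, assuming the returned runtime mirrors the Harness read surface (listRuns and friends); treat the method names as assumptions:

```typescript
import { openHarnessRuntime } from "@zauso-ai/capstan-ai";

// Inspect runs persisted under .capstan/harness/ without a live harness.
const runtime = await openHarnessRuntime(process.cwd());
for (const run of await runtime.listRuns()) {
  console.log(run.id, run.status);
}
```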
Additional harness exports: PlaywrightEngine, FsSandboxImpl, LocalHarnessSandboxDriver, FileHarnessRuntimeStore, buildHarnessRuntimePaths, openHarnessRuntime, HarnessContextKernel, HarnessVerifier, HarnessObserver, GuardRegistry, analyzeScreenshot, runVisionLoop. See the harness types export for the complete type surface.
Factory function that creates a standalone AI instance with all capabilities. No Capstan framework required.
function createAI(config: AIConfig): AIContext
interface AIConfig {
llm: LLMProvider;
memory?: {
backend?: MemoryBackend;
embedding?: MemoryEmbedder;
autoExtract?: boolean;
};
defaultScope?: MemoryScope;
}
interface AIContext {
think<T = string>(prompt: string, opts?: ThinkOptions<T>): Promise<T>;
generate(prompt: string, opts?: GenerateOptions): Promise<string>;
thinkStream(prompt: string, opts?: Omit<ThinkOptions, "schema">): AsyncIterable<string>;
generateStream(prompt: string, opts?: GenerateOptions): AsyncIterable<string>;
remember(content: string, opts?: RememberOptions): Promise<string>;
recall(query: string, opts?: RecallOptions): Promise<MemoryEntry[]>;
memory: {
about(type: string, id: string): MemoryAccessor;
forget(entryId: string): Promise<boolean>;
assembleContext(opts: AssembleContextOptions): Promise<string>;
};
agent: {
run(config: AgentRunConfig): Promise<AgentRunResult>;
};
}
The core framework package. Provides the server, routing primitives, policy engine, approval workflow, caching, compliance, and application verifier.
Define a typed API route handler with input/output validation and agent introspection.
function defineAPI<TInput = unknown, TOutput = unknown>(
def: APIDefinition<TInput, TOutput>,
): APIDefinition<TInput, TOutput>
interface APIDefinition<TInput = unknown, TOutput = unknown> {
input?: z.ZodType<TInput>;
output?: z.ZodType<TOutput>;
description?: string;
capability?: "read" | "write" | "external";
resource?: string;
policy?: string;
handler: (args: { input: TInput; ctx: CapstanContext }) => Promise<TOutput>;
}
Identity function that provides type-checking and editor auto-complete for the app configuration.
function defineConfig(config: CapstanConfig): CapstanConfig
interface CapstanConfig {
app?: { name?: string; title?: string; description?: string };
database?: { provider?: "sqlite" | "postgres" | "mysql"; url?: string };
auth?: {
providers?: Array<{ type: string; [key: string]: unknown }>;
session?: { strategy?: "jwt" | "database"; secret?: string; maxAge?: string };
};
agent?: {
manifest?: boolean; mcp?: boolean; openapi?: boolean;
rateLimit?: { default?: { requests: number; window: string }; perAgent?: boolean };
};
server?: { port?: number; host?: string };
}
Define a middleware for the request pipeline.
function defineMiddleware(
def: MiddlewareDefinition | MiddlewareDefinition["handler"],
): MiddlewareDefinition
interface MiddlewareDefinition {
name?: string;
handler: (args: {
request: Request; ctx: CapstanContext; next: () => Promise<Response>;
}) => Promise<Response>;
}
Define a named permission policy.
function definePolicy(def: PolicyDefinition): PolicyDefinition
interface PolicyDefinition {
key: string;
title: string;
effect: "allow" | "deny" | "approve" | "redact";
check: (args: { ctx: CapstanContext; input?: unknown }) => Promise<PolicyCheckResult>;
}
Run all provided policies and return the most restrictive result. Severity order: allow < redact < approve < deny.
function enforcePolicies(
policies: PolicyDefinition[], ctx: CapstanContext, input?: unknown,
): Promise<PolicyCheckResult>
function defineRateLimit(config: RateLimitConfig): RateLimitConfig
interface RateLimitConfig {
default: { requests: number; window: string };
perAuthType?: {
anonymous?: { requests: number; window: string };
human?: { requests: number; window: string };
agent?: { requests: number; window: string };
};
}
Build a fully-wired Capstan application backed by a Hono server.
function createCapstanApp(config: CapstanConfig): CapstanApp
interface CapstanApp {
app: Hono;
routeRegistry: RouteMetadata[];
registerAPI: (method: HttpMethod, path: string, apiDef: APIDefinition, policies?: PolicyDefinition[]) => void;
}
function createApproval(opts: {
method: string;
path: string;
input: unknown;
policy: string;
reason: string;
}): PendingApproval
function getApproval(id: string): PendingApproval | undefined
function listApprovals(
status?: "pending" | "approved" | "denied",
): PendingApproval[]
function resolveApproval(
id: string,
decision: "approved" | "denied",
resolvedBy?: string,
): PendingApproval | undefined
function clearApprovals(): void
function mountApprovalRoutes(app: Hono, handlerRegistry: HandlerRegistry): void
interface PendingApproval {
id: string;
method: string;
path: string;
input: unknown;
policy: string;
reason: string;
status: "pending" | "approved" | "denied";
createdAt: string;
resolvedAt?: string;
resolvedBy?: string;
result?: unknown;
}
function verifyCapstanApp(appRoot: string): Promise<VerifyReport>
interface VerifyReport {
status: "passed" | "failed";
appRoot: string;
timestamp: string;
steps: VerifyStep[];
repairChecklist: Array<{
index: number; step: string; message: string;
file?: string; line?: number; hint?: string;
fixCategory?: string; autoFixable?: boolean;
}>;
summary: { totalSteps: number; passedSteps: number; failedSteps: number; skippedSteps: number; errorCount: number; warningCount: number };
}
function definePlugin(def: PluginDefinition): PluginDefinition
interface PluginDefinition {
name: string;
version?: string;
setup: (ctx: PluginSetupContext) => void;
}
Pluggable key-value store interface used by approvals, rate limiting, and DPoP replay detection.
interface KeyValueStore<T> {
get(key: string): Promise<T | undefined>;
set(key: string, value: T, ttlMs?: number): Promise<void>;
delete(key: string): Promise<void>;
has(key: string): Promise<boolean>;
values(): Promise<T[]>;
clear(): Promise<void>;
}
Implementations: MemoryStore<T> (in-memory), RedisStore<T> (Redis-backed, prefix-namespaced).
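A custom backend only needs to satisfy this interface. The following self-contained sketch shows a minimal in-memory implementation with lazy TTL expiry (illustrative; the package's MemoryStore<T> plays this role):

```typescript
// Minimal in-memory KeyValueStore<T>: entries carry an optional expiry
// timestamp and are purged lazily on read.
class SimpleMemoryStore<T> {
  private entries = new Map<string, { value: T; expiresAt?: number }>();

  async get(key: string): Promise<T | undefined> {
    const entry = this.entries.get(key);
    if (!entry) return undefined;
    if (entry.expiresAt !== undefined && Date.now() > entry.expiresAt) {
      this.entries.delete(key); // lazily expire stale entries
      return undefined;
    }
    return entry.value;
  }

  async set(key: string, value: T, ttlMs?: number): Promise<void> {
    const expiresAt = ttlMs !== undefined ? Date.now() + ttlMs : undefined;
    this.entries.set(key, { value, expiresAt });
  }

  async delete(key: string): Promise<void> {
    this.entries.delete(key);
  }

  async has(key: string): Promise<boolean> {
    return (await this.get(key)) !== undefined;
  }

  async values(): Promise<T[]> {
    const out: T[] = [];
    for (const key of this.entries.keys()) {
      const value = await this.get(key); // skips and purges expired entries
      if (value !== undefined) out.push(value);
    }
    return out;
  }

  async clear(): Promise<void> {
    this.entries.clear();
  }
}
```

An instance of such a class can then be handed to the setter functions below to swap the backend for a subsystem.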
function setApprovalStore(store: KeyValueStore<PendingApproval>): void
function setRateLimitStore(store: KeyValueStore<RateLimitEntry>): void
function setDpopReplayStore(store: KeyValueStore<boolean>): void
function setAuditStore(store: KeyValueStore<AuditEntry>): void
function setCacheStore(store: KeyValueStore<CacheEntry<unknown>>): void
function setResponseCacheStore(store: KeyValueStore<ResponseCacheEntry>): void
function defineCompliance(config: ComplianceConfig): void
interface ComplianceConfig {
riskLevel: "minimal" | "limited" | "high" | "unacceptable";
auditLog?: boolean;
transparency?: { description?: string; provider?: string; contact?: string };
}
function recordAuditEntry(entry: { action: string; authType?: string; userId?: string; resource?: string; detail?: unknown }): void
function getAuditLog(filter?: { action?: string; authType?: string; since?: string }): AuditEntry[]
function clearAuditLog(): void
function defineWebSocket(path: string, handler: WebSocketHandler): WebSocketRoute
interface WebSocketHandler {
onOpen?: (ws: WebSocketClient) => void;
onMessage?: (ws: WebSocketClient, message: string | ArrayBuffer) => void;
onClose?: (ws: WebSocketClient, code: number, reason: string) => void;
onError?: (ws: WebSocketClient, error: Error) => void;
}
interface WebSocketClient {
send(data: string | ArrayBuffer): void;
close(code?: number, reason?: string): void;
readonly readyState: number;
}
class WebSocketRoom {
join(client: WebSocketClient): void;
leave(client: WebSocketClient): void;
broadcast(message: string, exclude?: WebSocketClient): void;
get size(): number;
close(): void;
}
Usage:
import { defineWebSocket, WebSocketRoom } from "@zauso-ai/capstan-core";
const lobby = new WebSocketRoom();
export const ws = defineWebSocket("/ws/lobby", {
onOpen(ws) { lobby.join(ws); },
onMessage(ws, msg) { lobby.broadcast(String(msg), ws); },
onClose(ws) { lobby.leave(ws); },
});
function cacheSet<T>(key: string, data: T, opts?: CacheOptions): Promise<void>
function cacheGet<T>(key: string): Promise<T | undefined>
function cacheInvalidateTag(tag: string): Promise<void>
function cached<T>(fn: () => Promise<T>, opts?: CacheOptions & { key?: string }): () => Promise<T>
interface CacheOptions {
ttl?: number; // Time-to-live in seconds
tags?: string[]; // Cache tags for bulk invalidation
revalidate?: number; // Revalidate interval in seconds (ISR)
}
Response cache (used by ISR render strategies):
function responseCacheGet(key: string): Promise<{ entry: ResponseCacheEntry; stale: boolean } | undefined>
function responseCacheSet(key: string, entry: ResponseCacheEntry, opts?: { ttlMs?: number }): Promise<void>
function responseCacheInvalidateTag(tag: string): Promise<number>
function responseCacheInvalidate(key: string): Promise<boolean>
function responseCacheClear(): Promise<void>
interface ResponseCacheEntry {
html: string;
headers: Record<string, string>;
statusCode: number;
createdAt: number;
revalidateAfter: number | null;
tags: string[];
}
Cache utilities:
function cacheInvalidate(key: string): void
function cacheInvalidatePath(urlPath: string): void
function cacheClear(): void
function normalizeCacheTag(tag: string): string | undefined
function normalizeCacheTags(tags: string[]): string[]
function createPageCacheKey(urlPath: string): string
function csrfProtection(): MiddlewareHandler
function createRequestLogger(): MiddlewareHandler
function createCapstanOpsContext(config?: {
enabled?: boolean;
appName?: string;
source?: string;
recentWindowMs?: number;
retentionLimit?: number;
sink?: {
recordEvent(event: CapstanOpsEvent): Promise<void> | void;
close?(): Promise<void> | void;
};
}): CapstanOpsContext | undefined
When present on CapstanContext, the ops context records request, capability, policy, approval, and health lifecycle events.
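For example, lifecycle events can be forwarded to a custom sink (a sketch; the event payload shape comes from CapstanOpsEvent):

```typescript
import { createCapstanOpsContext } from "@zauso-ai/capstan-core";

const ops = createCapstanOpsContext({
  enabled: true,
  appName: "support-desk",
  sink: {
    // Receives each recorded lifecycle event; could forward to a log
    // pipeline or metrics backend instead of the console.
    async recordEvent(event) {
      console.log("[ops]", event);
    },
  },
});
```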
function env(key: string): string
function clearAPIRegistry(): void
function getAPIRegistry(): ReadonlyArray<APIDefinition>
function createContext(honoCtx: HonoContext): CapstanContext
function renderRuntimeVerifyText(report: VerifyReport): string
type HttpMethod = "GET" | "POST" | "PUT" | "DELETE" | "PATCH";
interface CapstanAuthContext {
isAuthenticated: boolean;
type: "human" | "agent" | "anonymous";
userId?: string;
role?: string;
email?: string;
agentId?: string;
agentName?: string;
permissions?: string[];
}
interface CapstanContext {
auth: CapstanAuthContext;
request: Request;
env: Record<string, string | undefined>;
honoCtx: HonoContext;
}
interface RouteMetadata {
method: HttpMethod;
path: string;
description?: string;
capability?: "read" | "write" | "external";
resource?: string;
policy?: string;
inputSchema?: Record<string, unknown>;
outputSchema?: Record<string, unknown>;
}
Semantic operations kernel used by the runtime and CLI.
function createCapstanOpsRuntime(options: {
store: OpsStore;
serviceName?: string;
environment?: string;
}): {
recordEvent(input: OpsRecordEventInput): Promise<OpsEventRecord>;
recordIncident(input: OpsRecordIncidentInput): Promise<OpsIncidentRecord>;
captureSnapshot(input: OpsCaptureSnapshotInput): Promise<OpsSnapshotRecord>;
captureDerivedSnapshot(timestamp?: string): Promise<OpsSnapshotRecord>;
createOverview(): OpsOverview;
}
class InMemoryOpsStore implements OpsStore {
constructor(options?: { retention?: OpsRetentionConfig });
}
class SqliteOpsStore implements OpsStore {
constructor(options: { path: string; retention?: OpsRetentionConfig });
}
function createOpsQuery(store: OpsStore): {
events(filter?: OpsEventFilter): OpsEventRecord[];
incidents(filter?: OpsIncidentFilter): OpsIncidentRecord[];
snapshots(filter?: OpsSnapshotFilter): OpsSnapshotRecord[];
}
function createOpsQueryIndex(store: OpsStore): OpsQueryIndex
function createOpsOverview(query: ..., index: OpsQueryIndex): OpsOverview
function deriveOpsHealthStatus(store: OpsStore, options?: { windowMs?: number }): {
status: "healthy" | "degraded" | "unhealthy";
summary: string;
signals: OpsHealthSignal[];
}
type OpsSeverity = "debug" | "info" | "warning" | "error" | "critical";
type OpsIncidentStatus = "open" | "acknowledged" | "suppressed" | "resolved";
type OpsTarget = "runtime" | "release" | "approval" | "policy" | "capability" | "cron" | "ops" | "cli";
interface OpsStore {
addEvent(record: OpsEventRecord): OpsEventRecord;
getEvent(id: string): OpsEventRecord | undefined;
listEvents(filter?: OpsEventFilter): OpsEventRecord[];
addIncident(record: OpsIncidentRecord): OpsIncidentRecord;
getIncident(id: string): OpsIncidentRecord | undefined;
listIncidents(filter?: OpsIncidentFilter): OpsIncidentRecord[];
addSnapshot(record: OpsSnapshotRecord): OpsSnapshotRecord;
listSnapshots(filter?: OpsSnapshotFilter): OpsSnapshotRecord[];
compact(options?: OpsCompactionOptions): OpsCompactionResult;
close(): void | Promise<void>;
}
File-based routing: directory scanning, URL matching, and manifest generation.
function scanRoutes(routesDir: string): Promise<RouteManifest>
interface RouteManifest {
routes: RouteEntry[];
scannedAt: string;
rootDir: string;
}
interface RouteEntry {
filePath: string;
type: RouteType;
urlPattern: string;
methods?: string[];
layouts: string[];
middlewares: string[];
params: string[];
isCatchAll: boolean;
}
type RouteType = "page" | "api" | "layout" | "middleware" | "loading" | "error" | "not-found";
function matchRoute(manifest: RouteManifest, method: string, urlPath: string): MatchedRoute | null
interface MatchedRoute {
route: RouteEntry;
params: Record<string, string>;
}
Priority: static segments > dynamic segments > catch-all.
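Scanning and matching compose like this (a sketch; it assumes both functions are exported from @zauso-ai/capstan-core and that a dynamic route such as /posts/:id exists under the scanned directory):

```typescript
import { scanRoutes, matchRoute } from "@zauso-ai/capstan-core";

const manifest = await scanRoutes("./app/routes");

// A static /posts/new route would win over /posts/:id, which in turn
// wins over a /posts/[...slug] catch-all.
const match = matchRoute(manifest, "GET", "/posts/42");
if (match) {
  console.log(match.route.urlPattern); // e.g. "/posts/:id"
  console.log(match.params); // e.g. { id: "42" }
}
```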
Extract API route information for the agent surface layer.
function generateRouteManifest(manifest: RouteManifest): { apiRoutes: AgentApiRoute[] }
interface AgentApiRoute {
method: string;
path: string;
filePath: string;
}
Canonicalize and validate route entries: detect conflicts, sort by specificity, generate diagnostics.
function canonicalizeRouteManifest(
routes: RouteEntry[],
rootDir: string,
): CanonicalizedRouteManifest
interface CanonicalizedRouteManifest {
routes: RouteEntry[];
diagnostics: RouteDiagnostic[];
}
interface RouteDiagnostic {
code: RouteConflictReason;
severity: "error" | "warning";
message: string;
routeType: RouteType;
urlPattern: string;
canonicalPattern: string;
filePaths: string[];
directoryDepth?: number;
}
function validateRouteManifest(routes: RouteEntry[], rootDir: string): CanonicalizedRouteManifest
function createRouteScanCache(): RouteScanCache
function createRouteConflictError(diagnostics: RouteDiagnostic[]): RouteConflictError
class RouteConflictError extends Error {
code: "ROUTE_CONFLICT";
conflicts: RouteConflict[];
diagnostics: RouteDiagnostic[];
}
interface RouteStaticInfo {
exportNames: string[];
hasMetadata?: boolean;
renderMode?: "ssr" | "ssg" | "isr" | "streaming";
revalidate?: number;
hasGenerateStaticParams?: boolean;
}
Database layer with model definitions, schema generation, migrations, and CRUD route scaffolding.
function defineModel(name: string, config: {
fields: Record<string, FieldDefinition>;
relations?: Record<string, RelationDefinition>;
indexes?: IndexDefinition[];
}): ModelDefinition
const field: {
id(): FieldDefinition;
string(opts?: FieldOptions): FieldDefinition;
text(opts?: FieldOptions): FieldDefinition;
integer(opts?: FieldOptions): FieldDefinition;
number(opts?: FieldOptions): FieldDefinition;
boolean(opts?: FieldOptions): FieldDefinition;
date(opts?: FieldOptions): FieldDefinition;
datetime(opts?: FieldOptions): FieldDefinition;
json<T = unknown>(opts?: FieldOptions): FieldDefinition;
enum(values: readonly string[], opts?: FieldOptions): FieldDefinition;
vector(dimensions: number): FieldDefinition;
}
const relation: {
belongsTo(model: string, opts?: { foreignKey?: string; through?: string }): RelationDefinition;
hasMany(model: string, opts?: { foreignKey?: string; through?: string }): RelationDefinition;
hasOne(model: string, opts?: { foreignKey?: string; through?: string }): RelationDefinition;
manyToMany(model: string, opts?: { foreignKey?: string; through?: string }): RelationDefinition;
}
function createDatabase(config: { provider: "sqlite" | "postgres" | "mysql"; url: string }): Promise<DatabaseInstance>
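Putting the model helpers together, a definition might look like this sketch (model and field names are illustrative; the import path is assumed):

```typescript
import { defineModel, field, relation } from "@zauso-ai/capstan-core";

const Post = defineModel("post", {
  fields: {
    id: field.id(),
    title: field.string({ required: true }),
    body: field.text(),
    published: field.boolean({ default: false }),
  },
  relations: {
    // Foreign keys are inferred unless overridden via { foreignKey }.
    author: relation.belongsTo("user"),
    comments: relation.hasMany("comment"),
  },
  indexes: [{ fields: ["title"], unique: true }],
});
```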
function generateMigration(fromModels: ModelDefinition[], toModels: ModelDefinition[]): string[]
function applyMigration(db: { $client: { exec: (sql: string) => void } }, sql: string[]): void
function ensureTrackingTable(client: MigrationDbClient, provider?: DbProvider): void
function getAppliedMigrations(client: MigrationDbClient): string[]
function getMigrationStatus(client: MigrationDbClient, allMigrationNames: string[], provider?: DbProvider): MigrationStatus
function applyTrackedMigrations(client: MigrationDbClient, migrations: Array<{ name: string; sql: string }>, provider?: DbProvider): string[]
function generateCrudRoutes(model: ModelDefinition): CrudRouteFiles[]
function generateDrizzleSchema(models: ModelDefinition[], provider: DbProvider): Record<string, DrizzleTable>
function defineEmbedding(modelName: string, config: { dimensions: number; adapter: EmbeddingAdapter }): EmbeddingInstance
function openaiEmbeddings(opts: { apiKey: string; model?: string; baseUrl?: string }): EmbeddingAdapter
function cosineDistance(a: number[], b: number[]): number
function findNearest(items: { id: string; vector: number[] }[], query: number[], k?: number): { id: string; score: number }[]
function hybridSearch(items: { id: string; vector: number[]; text: string }[], query: { vector: number[]; text: string }, k?: number): { id: string; score: number }[]
function prepareCreateData(model: ModelDefinition, input: Record<string, unknown>): Record<string, unknown>
function prepareUpdateData(model: ModelDefinition, input: Record<string, unknown>): Record<string, unknown>
function createDatabaseRuntime(db: DrizzleClient, schema: Record<string, DrizzleTable>): DatabaseRuntime
function createCrudRepository(db: DrizzleClient, model: ModelDefinition, table: DrizzleTable): CrudRepository
function createCrudRuntime(db: DrizzleClient, models: ModelDefinition[], schema: Record<string, DrizzleTable>): CrudRuntime
type ScalarType = "string" | "integer" | "number" | "boolean" | "date" | "datetime" | "text" | "json";
type DbProvider = "sqlite" | "postgres" | "mysql";
type RelationKind = "belongsTo" | "hasMany" | "hasOne" | "manyToMany";
interface FieldDefinition {
type: ScalarType;
required?: boolean;
unique?: boolean;
default?: unknown;
min?: number;
max?: number;
enum?: readonly string[];
updatedAt?: boolean;
autoId?: boolean;
references?: string;
}
interface RelationDefinition {
kind: RelationKind;
model: string;
foreignKey?: string;
through?: string;
}
interface IndexDefinition {
fields: string[];
unique?: boolean;
order?: "asc" | "desc";
}
interface ModelDefinition {
name: string;
fields: Record<string, FieldDefinition>;
relations: Record<string, RelationDefinition>;
indexes: IndexDefinition[];
}
Authentication and authorization: JWT sessions, API keys, OAuth, DPoP, SPIFFE/mTLS, grants.
function signSession(payload: Omit<SessionPayload, "iat" | "exp">, secret: string, maxAge?: string): string
function verifySession(token: string, secret: string): SessionPayload | null
interface SessionPayload {
userId: string; email?: string; role?: string; iat: number; exp: number;
}
function generateApiKey(prefix?: string): { key: string; hash: string; prefix: string }
function verifyApiKey(key: string, storedHash: string): Promise<boolean>
function extractApiKeyPrefix(key: string): string
function createAuthMiddleware(config: AuthConfig, deps: AuthResolverDeps): (request: Request) => Promise<AuthContext>
interface AuthConfig {
session: { secret: string; maxAge?: string };
apiKeys?: { prefix?: string; headerName?: string };
}
function checkPermission(required: { resource: string; action: "read" | "write" | "delete" }, granted: string[]): boolean
function derivePermission(capability: "read" | "write" | "external", resource?: string): { resource: string; action: string }
function googleProvider(opts: { clientId: string; clientSecret: string }): OAuthProvider
function githubProvider(opts: { clientId: string; clientSecret: string }): OAuthProvider
function createOAuthHandlers(config: OAuthConfig, fetchFn?: typeof fetch): OAuthHandlers
interface OAuthHandlers {
login: (request: Request, providerName: string) => Response;
callback: (request: Request) => Promise<Response>;
}
function authorizeGrant(required: AuthGrant, granted: AuthGrant[]): AuthorizationDecision
function checkGrant(required: AuthGrant, granted: AuthGrant[]): boolean
function normalizePermissionsToGrants(permissions: (string | AuthGrant)[]): AuthGrant[]
function createGrant(resource: string, action: string, scope?: Record<string, string>): AuthGrant
interface AuthGrant { resource: string; action: string; scope?: Record<string, string> }
Runtime grant helpers:
function grantRunActions(actions?: string[], runId?: string): AuthGrant[]
function grantRunCollectionActions(actions?: string[]): AuthGrant[]
function grantApprovalActions(actions?: string[], approvalId?: string): AuthGrant[]
function grantApprovalCollectionActions(actions?: string[]): AuthGrant[]
function grantEventActions(actions?: string[]): AuthGrant[]
function grantArtifactActions(actions?: string[]): AuthGrant[]
function grantCheckpointActions(actions?: string[]): AuthGrant[]
function grantTaskActions(actions?: string[]): AuthGrant[]
function grantSummaryActions(actions?: string[]): AuthGrant[]
function grantMemoryActions(actions?: string[]): AuthGrant[]
function grantContextActions(actions?: string[]): AuthGrant[]
function grantRuntimePathsActions(actions?: string[]): AuthGrant[]
Runtime authorizer:
function deriveRuntimeGrantRequirements(request: RuntimeActionRequest): AuthGrant[]
function authorizeRuntimeAction(request: RuntimeActionRequest, granted: AuthGrant[]): AuthorizationResult
function createRuntimeGrantAuthorizer(granted: AuthGrant[]): RuntimeGrantAuthorizer
function createHarnessGrantAuthorizer(granted: AuthGrant[]): HarnessGrantAuthorizer
Execution identity:
function createExecutionIdentity(kind: string, source: string): ExecutionIdentity
function createRequestExecution(request: Request): ExecutionIdentity
function createDelegationLink(from: Identity, to: Identity): DelegationLink
function validateDpopProof(proof: string, options: DpopValidationOptions): Promise<DpopResult>
function extractWorkloadIdentity(certOrClaim: string): WorkloadIdentity | null
function isValidSpiffeId(uri: string): boolean
Multi-protocol adapter layer: CapabilityRegistry, MCP server, A2A handler, OpenAPI spec, LangChain tools.
class CapabilityRegistry {
constructor(config: AgentConfig);
register(route: RouteRegistryEntry): void;
registerAll(routes: RouteRegistryEntry[]): void;
getRoutes(): readonly RouteRegistryEntry[];
getConfig(): Readonly<AgentConfig>;
toManifest(): AgentManifest;
toOpenApi(): Record<string, unknown>;
toMcp(executeRoute: RouteExecutor): { server: McpServer; getToolDefinitions: () => ToolDef[] };
toA2A(executeRoute: RouteExecutor): { handleRequest: (body: unknown) => Promise<unknown>; getAgentCard: () => A2AAgentCard };
}
Built-in LLM provider adapters for @zauso-ai/capstan-ai.
function openaiProvider(config: {
apiKey: string;
baseUrl?: string; // default: "https://api.openai.com/v1"
model?: string; // default: "gpt-4o"
}): LLMProvider
function anthropicProvider(config: {
apiKey: string;
model?: string; // default: "claude-sonnet-4-20250514"
baseUrl?: string; // default: "https://api.anthropic.com/v1"
}): LLMProvider
openaiProvider works with any OpenAI-compatible API (OpenAI, Azure OpenAI, Ollama, etc.) by setting baseUrl. Supports both chat() and stream().
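For instance, pointing the adapter at a local Ollama server might look like this sketch (import path, model name, and port are assumptions):

```typescript
import { openaiProvider } from "@zauso-ai/capstan-ai";

const llm = openaiProvider({
  // Ollama ignores the key, but the field is required by the adapter.
  apiKey: "ollama",
  baseUrl: "http://localhost:11434/v1",
  model: "llama3.1",
});
```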
function createMcpServer(
config: AgentConfig,
routes: RouteRegistryEntry[],
executeRoute: RouteExecutor,
): { server: McpServer; getToolDefinitions: () => ToolDef[] }
function serveMcpStdio(server: McpServer): Promise<void>
function routeToToolName(method: string, path: string): string
// GET /tickets -> get_tickets, GET /tickets/:id -> get_tickets_by_id
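The mapping shown in the comment can be pictured with a few lines (a self-contained sketch for intuition only; the package ships its own routeToToolName):

```typescript
// Lowercase the method, split the path, and rewrite ":param" segments
// as "by_param", then join everything with underscores.
function toToolName(method: string, path: string): string {
  const segments = path
    .split("/")
    .filter(Boolean)
    .map((seg) => (seg.startsWith(":") ? `by_${seg.slice(1)}` : seg));
  return [method.toLowerCase(), ...segments].join("_");
}
```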
function createMcpClient(options: McpClientOptions): McpClient
interface McpClientOptions {
url?: string; // Streamable HTTP endpoint
command?: string; // stdio command (alternative to url)
args?: string[]; // stdio command args
transport?: "streamable-http" | "stdio";
}
interface McpClient {
listTools(): Promise<Array<{ name: string; description?: string; inputSchema: unknown }>>;
callTool(name: string, args?: unknown): Promise<unknown>;
close(): Promise<void>;
}
class McpTestHarness {
constructor(registry: CapabilityRegistry);
listTools(): ToolDef[];
callTool(name: string, args?: unknown): Promise<unknown>;
}
function createA2AHandler(config: AgentConfig, routes: RouteRegistryEntry[], executeRoute: RouteExecutor): { handleRequest: (body: unknown) => Promise<A2AJsonRpcResponse>; getAgentCard: () => A2AAgentCard }
function generateA2AAgentCard(config: AgentConfig, routes: RouteRegistryEntry[]): A2AAgentCard
function generateOpenApiSpec(config: AgentConfig, routes: RouteRegistryEntry[]): Record<string, unknown>
function toLangChainTools(registry: CapabilityRegistry, options?: { filter?: (route: RouteRegistryEntry) => boolean }): StructuredTool[]
interface AgentManifest {
capstan: string; name: string; description?: string; baseUrl?: string;
authentication: { schemes: Array<{ type: "bearer"; name: string; header: string; description: string }> };
resources: Array<{ key: string; title: string; description?: string; fields: Record<string, { type: string; required?: boolean; enum?: string[] }> }>;
capabilities: Array<{ key: string; title: string; description?: string; mode: "read" | "write" | "external"; resource?: string; endpoint: { method: string; path: string; inputSchema?: Record<string, unknown>; outputSchema?: Record<string, unknown> }; policy?: string }>;
mcp?: { endpoint: string; transport: string };
}
interface RouteRegistryEntry {
method: string; path: string; description?: string;
capability?: "read" | "write" | "external"; resource?: string; policy?: string;
inputSchema?: Record<string, unknown>; outputSchema?: Record<string, unknown>;
}
interface AgentConfig {
name: string; description?: string; baseUrl?: string;
resources?: Array<{ key: string; title: string; description?: string; fields: Record<string, { type: string; required?: boolean; enum?: string[] }> }>;
}
React SSR with loaders, layouts, hydration, Image, Font, Metadata, and ErrorBoundary.
function renderPage(options: RenderPageOptions): Promise<RenderResult>
function renderPartialStream(options: RenderPageOptions): Promise<RenderStreamResult>
function defineLoader(loader: LoaderFunction): LoaderFunction
function useLoaderData<T = unknown>(): T
type LoaderFunction = (args: { params: Record<string, string>; request: Request }) => Promise<unknown>;
function Outlet(): JSX.Element
function OutletProvider(props: { children: React.ReactNode }): JSX.Element
function ServerOnly(props: { children: React.ReactNode }): JSX.Element | null
function ClientOnly(props: { children: React.ReactNode; fallback?: React.ReactNode }): JSX.Element
function serverOnly(): void
function useAuth(): CapstanAuthContext
function useParams(): Record<string, string>
function hydrateCapstanPage(): void
Usage:
import { ClientOnly, ServerOnly } from "@zauso-ai/capstan-react";
export default function Page() {
return (
<div>
<ServerOnly><AnalyticsTag /></ServerOnly>
<ClientOnly fallback={<p>Loading...</p>}>
<RichTextEditor />
</ClientOnly>
</div>
);
}
function Image(props: ImageProps): ReactElement
interface ImageProps {
src: string; alt: string; width?: number; height?: number;
priority?: boolean; quality?: number; placeholder?: "blur" | "empty";
blurDataURL?: string; sizes?: string; loading?: "lazy" | "eager";
className?: string; style?: Record<string, string | number>;
}
function defineFont(config: FontConfig): FontResult
interface FontConfig {
family: string; src?: string; weight?: string | number; style?: string;
display?: "auto" | "block" | "swap" | "fallback" | "optional";
preload?: boolean; subsets?: string[]; variable?: string;
}
interface FontResult { className: string; style: { fontFamily: string }; variable?: string }
function defineMetadata(metadata: Metadata): Metadata
function generateMetadataElements(metadata: Metadata): ReactElement[]
function mergeMetadata(parent: Metadata, child: Metadata): Metadata
interface Metadata {
title?: string | { default: string; template?: string };
description?: string; keywords?: string[];
robots?: string | { index?: boolean; follow?: boolean };
openGraph?: { title?: string; description?: string; type?: string; url?: string; image?: string; siteName?: string };
twitter?: { card?: "summary" | "summary_large_image"; title?: string; description?: string; image?: string };
icons?: { icon?: string; apple?: string };
canonical?: string; alternates?: Record<string, string>;
}
class ErrorBoundary extends Component<ErrorBoundaryProps> {}
interface ErrorBoundaryProps {
fallback: ReactElement | ((error: Error, reset: () => void) => ReactElement);
children?: ReactNode;
onError?: (error: Error, errorInfo: ErrorInfo) => void;
}
function NotFound(): ReactElement
Usage:
import { ErrorBoundary } from "@zauso-ai/capstan-react";
<ErrorBoundary fallback={(error, reset) => (
<div>
<p>Something went wrong: {error.message}</p>
<button onClick={reset}>Try again</button>
</div>
)}>
<MyComponent />
</ErrorBoundary>
type RenderMode = "ssr" | "ssg" | "isr" | "streaming"
class SSRStrategy implements RenderStrategy {}
class ISRStrategy implements RenderStrategy {}
class SSGStrategy implements RenderStrategy { constructor(staticDir?: string) }
function createStrategy(mode: RenderMode, opts?: { staticDir?: string }): RenderStrategy
function Link(props: LinkProps): ReactElement
function useNavigate(): (url: string, opts?: { replace?: boolean; scroll?: boolean }) => void
function useRouterState(): RouterState
function bootstrapClient(): void
class CapstanRouter {
readonly state: RouterState;
navigate(url: string, opts?: NavigateOptions): Promise<void>;
prefetch(url: string): Promise<void>;
subscribe(listener: (state: RouterState) => void): () => void;
destroy(): void;
}
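The `subscribe` contract above (listener in, unsubscribe function out) can be sketched with a minimal store. `Store` here is an illustrative stand-in for the pattern, not the actual CapstanRouter implementation:

```typescript
// Minimal sketch of the subscribe() contract: each listener receives
// every state change, and the function returned by subscribe()
// removes that listener. Names here are hypothetical.
type Listener<S> = (state: S) => void;

class Store<S> {
  private listeners = new Set<Listener<S>>();
  constructor(public state: S) {}

  subscribe(listener: Listener<S>): () => void {
    this.listeners.add(listener);
    return () => {
      this.listeners.delete(listener);
    };
  }

  setState(next: S): void {
    this.state = next;
    for (const l of this.listeners) l(next);
  }
}
```

Calling the returned function is the only cleanup a consumer needs, which is why `CapstanRouter.subscribe` returns `() => void` rather than exposing a separate `unsubscribe` method.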
function NavigationProvider(props: { children: ReactNode; initialLoaderData?: unknown; initialParams?: Record<string, string> }): ReactElement
class NavigationCache {
constructor(maxSize?: number, ttlMs?: number); // defaults: 50, 5min
get(url: string): NavigationPayload | undefined;
set(url: string, payload: NavigationPayload): void;
has(url: string): boolean;
delete(url: string): boolean;
clear(): void;
readonly size: number;
}
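The cache semantics described above (bounded size, per-entry TTL) can be sketched as follows. `TtlCache` is a hypothetical illustration of the documented defaults (50 entries, 5-minute TTL), not the actual NavigationCache source:

```typescript
// Sketch of a size-bounded cache with per-entry expiry. Insertion
// order doubles as eviction order: when full, the oldest entry is
// dropped. Expired entries are removed lazily on read.
class TtlCache<V> {
  private entries = new Map<string, { value: V; expires: number }>();
  constructor(private maxSize = 50, private ttlMs = 5 * 60_000) {}

  set(url: string, value: V): void {
    // Evict the oldest entry when inserting a new key into a full cache.
    if (!this.entries.has(url) && this.entries.size >= this.maxSize) {
      const oldest = this.entries.keys().next().value;
      if (oldest !== undefined) this.entries.delete(oldest);
    }
    this.entries.set(url, { value, expires: Date.now() + this.ttlMs });
  }

  get(url: string): V | undefined {
    const entry = this.entries.get(url);
    if (!entry) return undefined;
    if (Date.now() > entry.expires) {
      this.entries.delete(url); // stale: drop and report a miss
      return undefined;
    }
    return entry.value;
  }

  get size(): number {
    return this.entries.size;
  }
}
```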
class PrefetchManager {
observe(element: Element, strategy: PrefetchStrategy): void;
unobserve(element: Element): void;
destroy(): void;
}
function withViewTransition(fn: () => void | Promise<void>): Promise<void>
function getManifest(): ClientRouteManifest | null
function initRouter(manifest: ClientRouteManifest): CapstanRouter
type PrefetchStrategy = "none" | "hover" | "viewport";
interface RouterState { url: string; status: "idle" | "loading" | "error"; error?: Error }
interface NavigateOptions { replace?: boolean; state?: unknown; scroll?: boolean; noCache?: boolean }
Prefetch strategies: "viewport" (IntersectionObserver, 200px margin), "hover" (80ms delay), "none".
Usage:
import { Link, useNavigate } from "@zauso-ai/capstan-react/client";
<Link href="/about">About</Link>
<Link href="/posts" prefetch="viewport">Posts</Link>
Recurring job scheduler for Capstan AI workflows.
function defineCron(config: CronJobConfig): CronJobConfig
function createCronRunner(): CronRunner
function createBunCronRunner(): CronRunner
function createAgentCron(config: AgentCronConfig): CronJobConfig
interface CronJobConfig {
name: string;
pattern: string;
handler: () => Promise<void>;
timezone?: string;
maxConcurrent?: number;
onError?: (err: Error) => void;
enabled?: boolean;
}
interface CronRunner {
add(config: CronJobConfig): string;
remove(id: string): boolean;
start(): void;
stop(): void;
getJobs(): CronJobInfo[];
}
interface AgentCronConfig {
cron: string;
name: string;
goal: string | (() => string);
timezone?: string;
llm?: unknown;
harnessConfig?: Record<string, unknown>;
run?: {
about?: [string, string];
maxIterations?: number;
memory?: boolean;
systemPrompt?: string;
excludeRoutes?: string[];
};
triggerMetadata?: Record<string, unknown>;
runtime?: {
harness?: { startRun(config: unknown, options?: unknown): Promise<{ runId: string; result: Promise<unknown> }> };
createHarness?: () => Promise<{ startRun(config: unknown, options?: unknown): Promise<{ runId: string; result: Promise<unknown> }> }>;
reuseHarness?: boolean;
};
onQueued?: (meta: { runId: string; trigger: unknown }) => void;
onResult?: (result: unknown, meta: { runId: string; trigger: unknown }) => void;
onError?: (err: Error) => void;
}
createCronRunner() approximates cron patterns as intervals, suitable for */N minute/hour schedules. createBunCronRunner() uses Bun's native cron when available, falling back to the interval runner.
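The interval approximation can be sketched like this. `patternToIntervalMs` is a hypothetical helper shown only to illustrate why */N minute and hour schedules map cleanly onto fixed intervals (and why other patterns do not):

```typescript
// Illustrative sketch, not the @zauso-ai/capstan-cron implementation:
// read a step value ("*/N") out of the minute or hour field and turn
// it into a polling interval in milliseconds.
function patternToIntervalMs(pattern: string): number {
  const parts = pattern.trim().split(/\s+/);
  const minute = parts[0] ?? "";
  const hour = parts[1] ?? "";

  const step = (field: string): number | null => {
    const m = /^\*\/(\d+)$/.exec(field);
    return m ? Number(m[1]) : null;
  };

  const minuteStep = step(minute);
  if (minuteStep !== null) return minuteStep * 60_000; // "*/5 * * * *" -> every 5 minutes

  const hourStep = step(hour);
  if (hourStep !== null) return hourStep * 3_600_000; // "0 */2 * * *" -> every 2 hours

  return 3_600_000; // anything else cannot be expressed as an interval; poll hourly
}
```

Patterns with fixed fields (e.g. "15 3 * * *") lose their wall-clock alignment under this scheme, which is why Bun's native cron is preferred when present.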
Usage:
import { createCronRunner, createAgentCron } from "@zauso-ai/capstan-cron";
const runner = createCronRunner();
runner.add(createAgentCron({
cron: "0 */2 * * *",
name: "price-monitor",
goal: "Check storefront and refresh report",
runtime: { harness },
}));
runner.start();
Development server, Vite build pipeline, CSS processing, and deployment adapters.
function createDevServer(config: DevServerConfig): Promise<DevServerInstance>
function watchRoutes(routesDir: string, callback: () => void): void
function loadRouteModule(filePath: string): Promise<unknown>
function loadApiHandlers(filePath: string): Promise<Record<string, APIDefinition>>
function loadPageModule(filePath: string): Promise<PageModule>
function printStartupBanner(config: { port: number; routes: number }): void
function createViteConfig(config: CapstanViteConfig): Record<string, unknown>
function createViteDevMiddleware(config: CapstanViteConfig): Promise<{ middleware: unknown; close: () => Promise<void> } | null>
function buildClient(config: CapstanViteConfig): Promise<void>
function buildCSS(entryFile: string, outFile: string, isDev?: boolean): Promise<void>
function detectCSSMode(rootDir: string): "tailwind" | "lightningcss" | "none"
function buildTailwind(entryFile: string, outFile: string): Promise<void>
function startTailwindWatch(entryFile: string, outFile: string): ChildProcess
function buildStaticPages(options: BuildStaticOptions): Promise<BuildStaticResult>
function createCloudflareHandler(app: { fetch: (req: Request) => Promise<Response> }): ExportedHandler
function createVercelHandler(app: { fetch: (req: Request) => Promise<Response> }): (req: Request) => Promise<Response>
function createVercelNodeHandler(app: { fetch: (req: Request) => Promise<Response> }): (req: IncomingMessage, res: ServerResponse) => Promise<void>
function createFlyAdapter(config?: FlyConfig): ServerAdapter
function createNodeAdapter(): ServerAdapter
function createBunAdapter(): ServerAdapter
function generateVercelConfig(): object
function generateFlyToml(config?: FlyConfig): string
function generateWranglerConfig(name: string): string
In-process fetch client for page loaders (avoids HTTP round-trips).
function createPageFetch(request: Request, options?: PageFetchOptions): PageFetchClient
interface PageFetchClient {
get(path: string, init?: RequestInit): Promise<Response>;
post(path: string, body?: unknown, init?: RequestInit): Promise<Response>;
put(path: string, body?: unknown, init?: RequestInit): Promise<Response>;
delete(path: string, init?: RequestInit): Promise<Response>;
}
function loadRouteMiddleware(filePath: string): Promise<MiddlewareHandler>
function loadRouteMiddlewares(filePaths: string[]): Promise<MiddlewareHandler[]>
function composeRouteMiddlewares(middlewares: MiddlewareHandler[], handler: RouteHandler): RouteHandler
function runRouteMiddlewares(filePaths: string[], args: RouteHandlerArgs, handler: RouteHandler): Promise<Response>
function buildPortableRuntimeApp(config: PortableRuntimeConfig): RuntimeAppBuild
function registerVirtualRouteModule(filePath: string, mod: unknown): void
function registerVirtualRouteModules(modules: Map<string, unknown>): void
function clearVirtualRouteModules(filePath?: string): void
Command-line interface for development, building, deployment, verification, and operations.
capstan dev [--port <number>] [--host <string>] # Start dev server with live reload
capstan build [--static] [--target <target>] # Build for production
capstan start [--from <dir>] [--port <n>] # Start production server
Build targets: node-standalone, docker, vercel-node, vercel-edge, cloudflare, fly.
capstan add model <name> # app/models/<name>.model.ts
capstan add api <name> # app/routes/<name>/index.api.ts
capstan add page <name> # app/routes/<name>/index.page.tsx
capstan add policy <name> # app/policies/index.ts (appends)
capstan db:migrate --name <migration-name> # Generate migration SQL
capstan db:push # Apply pending migrations
capstan db:status # Show migration status
capstan verify [<path>] [--json] [--deployment] [--target <target>]
8-step cascade: structure, config, routes, models, typecheck, contracts, manifest, cross-protocol. Output includes repairChecklist with fixCategory and autoFixable for AI consumption.
capstan mcp # Start MCP server via stdio
capstan agent:manifest # Print agent manifest JSON
capstan agent:openapi # Print OpenAPI 3.1 spec JSON
capstan ops:events [<path>] [--kind <kind>] [--severity <s>] [--limit <n>] [--json]
capstan ops:incidents [<path>] [--status <status>] [--severity <s>] [--json]
capstan ops:health [<path>] [--json]
capstan ops:tail [<path>] [--limit <n>] [--follow] [--json]
capstan harness:list # List persisted runs
capstan harness:get <runId> # Read one run record
capstan harness:events [<runId>] # Read runtime events
capstan harness:artifacts <runId> # List artifacts
capstan harness:checkpoint <runId> # Read loop checkpoint
capstan harness:approval <approvalId> # Read one approval
capstan harness:approvals [<runId>] # List approvals
capstan harness:approve <runId> [--note <t>] # Approve blocked run
capstan harness:deny <runId> [--note <t>] # Deny and cancel
capstan harness:pause <runId> # Request cooperative pause
capstan harness:cancel <runId> # Request cancellation
capstan harness:replay <runId> # Replay events and verify
capstan harness:paths # Print filesystem paths
All harness commands accept --root <dir>, --grants <json>, --subject <json>, --json.
capstan deploy:init [--target <target>] [--force]
Targets: docker, vercel-node, vercel-edge, cloudflare, fly.
Project scaffolder CLI.
npx create-capstan-app # Interactive mode
npx create-capstan-app my-app --template blank # Non-interactive
npx create-capstan-app my-app --template tickets --deploy docker
| Template | Includes |
|---|---|
| blank | Health check API, home page, root layout, requireAuth policy, AGENTS.md |
| tickets | Everything in blank + Ticket model, CRUD routes, auth config, database config |
function scaffoldProject(config: {
projectName: string;
template: "blank" | "tickets";
outputDir: string;
}): Promise<void>
type DeployTarget = "none" | "docker" | "vercel-node" | "vercel-edge" | "cloudflare" | "fly"
function packageJson(projectName: string, template?: Template): string
function tsconfig(): string
function capstanConfig(projectName: string, title: string, template?: Template): string
function rootLayout(title: string): string
function indexPage(title: string, projectName: string, template?: Template): string
function healthApi(): string
function policiesIndex(): string
function gitignore(): string
function dockerfile(): string
function envExample(): string
function mainCss(): string
function agentsMd(projectName: string, template: Template): string
Template-specific (tickets):
function ticketModel(): string
function ticketsIndexApi(): string
function ticketByIdApi(): string
Deployment config generators:
function flyDockerfile(): string
function flyToml(appName: string): string
function vercelConfig(target: "vercel-node" | "vercel-edge"): string
function wranglerConfig(appName: string): string
function runPrompts(): Promise<{ projectName: string; template: Template; deploy: DeployTarget }>
function detectPackageManagerRuntime(isBun?: boolean): PackageManagerRuntime
interface PackageManagerRuntime {
installCommand: string;
runCommand: string;
devCommand: string;
}
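A sketch of how the detection result's shape is used. The command strings below are assumptions for illustration and `detectRuntime` is a hypothetical stand-in, not the actual create-capstan-app mapping:

```typescript
// Illustrative sketch: pick the install/run/dev command triple for the
// detected runtime. Real detection would inspect the environment; here
// the caller passes the flag directly, as in detectPackageManagerRuntime.
interface PackageManagerRuntime {
  installCommand: string;
  runCommand: string;
  devCommand: string;
}

function detectRuntime(isBun = false): PackageManagerRuntime {
  return isBun
    ? { installCommand: "bun install", runCommand: "bun run", devCommand: "bun run dev" }
    : { installCommand: "npm install", runCommand: "npm run", devCommand: "npm run dev" };
}
```

The scaffolder can then interpolate these strings into generated files (for example, the README's getting-started commands) without branching on the runtime elsewhere.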