Unified Common Lisp interface for multiple LLM providers with tool calling support. Works with Anthropic, OpenAI, Gemini, Ollama, OpenRouter, and any OpenAI-compatible API. Extensive use of conditions and restarts gives agents more options to recover from errors.
This library was designed by me and implemented by Claude under my direction. It is designed for consumption by agents as well as humans. If you have a problem with agent-written code, then this library is not for you.
Please test and report issues. Feedback and comments welcome.
You want to use LLMs in your Common Lisp code, but you're tired of rewriting the same request/response handling for each provider's different API format.
cl-llm-provider solves this by:
- Single interface - One `complete` and `embedding` call works across all providers (Anthropic, OpenAI, Gemini, Ollama, OpenRouter, Groq, etc.)
- Provider-agnostic messages - Define conversations once, run them on any LLM
- Tool calling - Define tools once, they work across Anthropic, OpenAI, Ollama formats automatically
- Smart error recovery - Rate limits, auth failures, and API errors handled gracefully with Lisp restarts
- Accurate token counting - Track usage across all providers with consistent metrics
- Performance profiling - Optional timing breakdown (encode/API/decode) for optimization
- Configuration as Lisp - Not YAML. Set up providers in actual Lisp code with full power.
- Thread-safe - Safe for concurrent requests
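The restart-based recovery mentioned above might look like the following sketch. Note the condition name `rate-limit-error` and restart name `retry-request` are illustrative assumptions, not confirmed API; see the error-handling guide for the library's actual condition hierarchy.

```lisp
;; Hedged sketch of restart-based recovery.
;; The names rate-limit-error and retry-request are assumptions for
;; illustration -- consult docs/how-to/error-handling.md for the
;; library's real condition and restart names.
(handler-bind ((rate-limit-error
                 (lambda (c)
                   (declare (ignore c))
                   (sleep 5)                          ; back off briefly
                   (invoke-restart 'retry-request)))) ; then retry the call
  (complete '((:role "user" :content "Hello"))))
```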
1. Install & set API key:

```sh
# Via Quicklisp (when available)
sbcl --eval '(ql:quickload :cl-llm-provider)'

# Or clone and load locally
sbcl --eval '(asdf:load-system :cl-llm-provider)'

# Set your API key
export ANTHROPIC_API_KEY="sk-ant-..."
```

2. Your first completion (3 lines):
```lisp
(use-package :cl-llm-provider)
(let ((response (complete '((:role "user" :content "What is Lisp?")))))
  (format t "~A~%" (response-content response)))
```

Expected output:

```
Lisp is a functional programming language known for...
```
That's it. You now have LLM completions working. Ready to switch to OpenAI? Change `:anthropic` to `:openai` when creating your provider. Same code.
Chat with multiple turns:
```lisp
(let ((messages (list (list :role "user" :content "What is 2+2?"))))
  (let ((response (complete messages)))
    (push (response-message response) messages)
    (push (list :role "user" :content "Add 3 to that?") messages)
    (complete (reverse messages))))
```

Use tool calling:
```lisp
(let* ((tools (list (define-tool "get_weather" "Get weather for a location"
                      '((:name "city" :type :string)))))
       (response (complete '((:role "user" :content "What's the weather in Paris?"))
                           :tools tools)))
  (when (response-tool-calls response)
    ;; Handle tool calls...
    ))
```

Switch providers dynamically:
```lisp
(complete messages :provider (make-provider :openai :model "gpt-4"))
;; Same code, different provider
```

Check provider capabilities:
```lisp
;; Check if provider supports tools before using them
(let ((provider (make-provider :anthropic :model "claude-3-5-sonnet-20241022")))
  (if (provider-supports-p provider :tools)
      (complete messages :tools my-tools)
      (complete messages)))

;; Get model context window and pricing
(let* ((provider (make-provider :openai))
       (meta (model-metadata provider "gpt-4o")))
  (format t "Context: ~D tokens~%" (getf meta :context-window))
  (format t "Cost: $~,2F per 1M input tokens~%"
          (getf meta :input-cost-per-1m-tokens)))
```

I want to...
| Goal | Start Here |
|---|---|
| Get working in 5 minutes | Quick Start |
| Learn how to use this library | Tutorials - Progressive learning |
| Query provider capabilities | Metadata API Guide - Introspection and model metadata |
| Solve a specific problem | How-To Guides - Task-oriented |
| Understand the design | Explanation - Conceptual |
| Look up an API | Reference - Complete API |
| Upgrade from old code | Migration Guide |
Beginner (0 to first working code):
- Quick Start (5 min)
- Tutorial: Basics (15 min)
Building Features (using tools, error handling):
Mastering (performance, custom providers):
Testing & Quality:
```
docs/
├── quickstart.md              # Get started in 5 minutes
├── metadata-api.md            # Provider introspection and model metadata
├── tutorials/                 # Progressive learning
│   ├── 01-basics.md           # Messages and conversations
│   ├── 02-tool-calling.md     # Using tools with LLMs
│   └── 03-advanced.md         # Profiling, embeddings, error recovery
├── how-to/                    # Task-oriented guides
│   ├── tools.md               # Advanced tool features
│   ├── add-provider.md        # Implement a new provider
│   ├── error-handling.md      # Error patterns and retry logic
│   └── testing.md             # Testing tools and providers
├── explanation/               # Conceptual understanding
│   ├── architecture.md        # How the system works
│   └── providers.md           # Understanding each provider
├── reference/                 # API documentation
│   ├── api.md                 # Complete API reference
│   └── migration.md           # Upgrading existing code
├── examples/                  # Complete working examples
│   └── CHAT_WITH_TOOLS.md     # Interactive chat with tools
└── agent/                     # For LLM agents and code assistants
    ├── SPEC.agent.md          # Formal specification
    ├── PATTERNS.agent.md      # Runnable patterns
    ├── API-SPEC.agent.md      # Formal API specification
    └── METADATA-API.agent.md  # Metadata/introspection specification
```
For LLM agents and automated code assistants - Machine-optimized specifications:
Start here: AGENT.md - Entry point with workflows, quick reference, and navigation
| Document | Purpose |
|---|---|
| docs/agent/core-SPEC.agent.md | 15 normative rules, 7 invariants, verification checklist |
| docs/agent/core-PATTERNS.agent.md | 14 complete, runnable patterns |
| docs/agent/core-API-SPEC.agent.md | Formal signatures and state machines |
| docs/agent/metadata-API-SPEC.agent.md | 10 normative rules, 5 invariants, 10 complete patterns for metadata/introspection API |
| docs/agent/streaming-observability-PATTERNS.agent.md | Streaming and observability patterns |
| docs/agent/streaming-observability-API-SPEC.agent.md | Streaming and observability API specification |
See docs/agent/README.md for agent documentation index.
| Provider | Text Completion | Embeddings | Tools | Streaming |
|---|---|---|---|---|
| Anthropic (Claude) | ✓ | ✗ | ✓ (native) | ✓ |
| OpenAI (GPT-4, etc.) | ✓ | ✓ | ✓ (function calling) | ✓ |
| Google Gemini | ✓ | ✓ | ✓ (function calling) | ✓ |
| Ollama (local models) | ✓ | ✓ | ✓ (OpenAI-compatible) | ✓ |
| OpenRouter | ✓ | ✓ | ✓ (multi-provider) | ✓ |
| OpenAI-compatible (Groq, Together, vLLM) | ✓ | ✓ | ✓ | ✓ |
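The last row covers any endpoint speaking the OpenAI wire format. A hedged sketch of pointing the library at Groq, assuming `make-provider` accepts base-URL and API-key keywords (the exact keyword names here are assumptions; check the API reference for the real signature):

```lisp
;; Sketch: using an OpenAI-compatible endpoint (Groq).
;; The :base-url and :api-key keywords are assumptions for illustration;
;; see docs/reference/api.md for the actual make-provider signature.
(let ((groq (make-provider :openai-compatible
                           :base-url "https://api.groq.com/openai/v1"
                           :api-key (uiop:getenv "GROQ_API_KEY")
                           :model "llama-3.1-70b-versatile")))
  (complete '((:role "user" :content "Hello")) :provider groq))
```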
- Message Normalization - Convert between provider formats automatically
- Streaming Responses - Real-time token-by-token output with callbacks
- Provider Introspection - Query capabilities, model metadata, and configuration without trial-and-error
- Token Counting - Accurate usage tracking for cost estimation
- Performance Profiling - Optional timing breakdown for optimization
- Observability Hooks - Before/after request callbacks for logging, metrics, debugging
- Comprehensive Error Handling - Restarts for rate limits, auth failures, API errors
- Configuration via Lisp - Full power of Lisp for provider setup
- Thread-Safe - Safe for concurrent requests across threads
- Opt-in Design - Load config only when you want it; defaults are sensible
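Streaming with a callback might look like the following sketch. The `:stream` and `:on-token` keyword names are illustrative assumptions, not confirmed API; the streaming API spec documents the real interface.

```lisp
;; Sketch: streaming tokens to stdout as they arrive.
;; The :stream and :on-token keywords are assumptions for illustration --
;; see docs/agent/streaming-observability-API-SPEC.agent.md for the
;; library's real streaming interface.
(complete '((:role "user" :content "Tell me a story"))
          :stream t
          :on-token (lambda (token)
                      (write-string token)
                      (force-output)))
```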
Comprehensive test suite included: 423 tests, 100% passing.
Test categories:
- Provider protocols and request/response handling
- Token counting and metadata extraction
- Tool definition and tool calling workflows
- Error handling and recovery
- Configuration and defaults
Run tests:
```sh
sbcl --noinform --non-interactive --load tests/test-tools-support.lisp
sbcl --noinform --non-interactive --load tests/test-provider-protocols.lisp
sbcl --noinform --non-interactive --load tests/test-token-metadata-comprehensive.lisp
```

See tests/README.md for complete test documentation.
These features are intentionally deferred to future versions:
- Audio/video/image processing
- Automatic tool execution loops
- Cost tracking and billing
- Built-in conversation memory management
- alexandria - General utilities
- serapeum - Additional utilities
- dexador - HTTP client
- yason - JSON parsing
- uiop - OS interface
- bordeaux-threads - Thread safety
- cl-ppcre - Regular expressions
All are standard, well-maintained libraries available via Quicklisp.
Contributions welcome! Please ensure:
- Code follows existing style conventions
- All 423 tests pass
- New features include tests
- Documentation is updated
MIT License - see LICENSE file for details.
quasi / quasiLabs
Design inspired by Python's LiteLLM and aisuite libraries, adapted for idiomatic Common Lisp.