Evercore is an orchestration engine for managing long-running LLM agents.
It makes it easy to define an agent as a combination of prompt, memory, and tools, and it manages ticket lifecycles, task dependencies, and distributed worker loops for you.
By decoupling state management and declarative YAML workflows from your core application logic, Evercore provides a robust, pluggable foundation that simplifies the development and scaling of asynchronous, multi-agent systems.
| Area | Evercore | Temporal | UseWorkflow.dev | Prefect |
|---|---|---|---|---|
| Main value | Embed a workflow engine directly in your app with minimal infrastructure | Maximum durability and correctness for long-running workflows | Durable TypeScript workflows with agent-friendly DX | Fast path to production orchestration for Python/data teams |
| Workflow functionality | Tickets/tasks, stage transitions, approvals, pause/resume, retries, event inbox | Full workflow model with signals, queries, updates, child workflows | Durable function-style workflows with resume/event patterns | Flows/tasks, retries, state tracking, deployments, automations |
| Scheduling | Built-in interval/one-shot schedule runner | First-class schedules with advanced policies | Code-level timing/sleep primitives | Strong deployment scheduling and automation features |
| Operational complexity | Lowest: lightweight and app-embedded | Highest: most powerful, more platform overhead | Medium: modern and fast-moving ecosystem | Medium: managed experience with solid operational tooling |
Evercore is embeddable, lightweight, fully Python, and comes with batteries included for AI agent flows.
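To make the declarative-workflow idea concrete, here is a rough sketch of what a workflow definition could look like. Every field name below (`key`, `stages`, `requires_approval`, `on_complete`) is an illustrative assumption, not Evercore's actual schema:

```yaml
# Illustrative sketch only — field names are assumptions, not the real schema.
key: default_ticket
stages:
  - name: triage
    on_complete: in_progress
  - name: in_progress
    requires_approval: true   # hypothetical approval gate before advancing
    on_complete: done
  - name: done
```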
- Configure environment:

```bash
cp .env.example .env 2>/dev/null || true
export EVERCORE_DATABASE_URL="sqlite:///./evercore.db"
export EVERCORE_WORKFLOW_DIR="./workflows"
export EVERCORE_DEFAULT_WORKFLOW_KEY="default_ticket"
```

- Configure lemlem models:

```bash
export LEMLEM_MODELS_CONFIG_PATH="/abs/path/to/models_config.yaml"
```

Or configure a database-backed model config source for dynamic model/preset updates.
- Install and run the API:

```bash
uv sync --project evercore
uv run --project evercore evercore-api
```

- Run the worker in a second terminal:

```bash
uv run --project evercore evercore-worker
```

`EVERCORE_WORKER_ID` is optional. If unset, Evercore generates a process-unique default (`evercore-worker-<hostname>-<pid>`). For multi-container deployments, explicitly setting a stable, unique worker id per container is still recommended.
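For example, a hypothetical docker-compose fragment might pin a distinct id per worker container (the service names and image are illustrative assumptions):

```yaml
# Hypothetical compose fragment — service layout and image name are illustrative.
services:
  worker-a:
    image: my-evercore-image        # assumption: an image with the project installed
    command: uv run --project evercore evercore-worker
    environment:
      EVERCORE_WORKER_ID: evercore-worker-a   # stable, unique per container
  worker-b:
    image: my-evercore-image
    command: uv run --project evercore evercore-worker
    environment:
      EVERCORE_WORKER_ID: evercore-worker-b
```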
- Create a ticket:

```bash
curl -s -X POST http://localhost:8010/tickets \
  -H 'content-type: application/json' \
  -d '{"title":"Hello","workflow_key":"default_ticket"}'
```

- Enqueue a lemlem task:

```bash
curl -s -X POST http://localhost:8010/tickets/<ticket_id>/tasks \
  -H 'content-type: application/json' \
  -d '{
    "task_key":"lemlem_prompt",
    "payload":{
      "model":"openrouter:gemini-2.5-flash",
      "prompt":"Write a short test summary for this ticket"
    }
  }'
```

Run the full standalone Evercore library test suite:

```bash
uv run --project libs/evercore evercore-test
```

Run a subset by pattern:

```bash
uv run --project libs/evercore evercore-test --pattern "test_worker*.py"
```

If you run Evercore outside this monorepo, ensure the lemlem dependency is resolvable (for example, from your package index or by adding a local/path source in your own pyproject.toml).
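The two curl calls above can also be issued from Python. The sketch below only builds the requests with the standard library (send them with `urllib.request.urlopen`); it assumes nothing about the API beyond the endpoints and payload shapes shown above.

```python
import json
import urllib.request

BASE = "http://localhost:8010"  # same host/port as the curl examples

def create_ticket(title: str, workflow_key: str) -> urllib.request.Request:
    """Build the POST /tickets request; pass it to urllib.request.urlopen to send."""
    body = json.dumps({"title": title, "workflow_key": workflow_key}).encode()
    return urllib.request.Request(
        f"{BASE}/tickets",
        data=body,
        headers={"content-type": "application/json"},
        method="POST",
    )

def enqueue_lemlem_task(ticket_id: str, model: str, prompt: str) -> urllib.request.Request:
    """Build the POST /tickets/<ticket_id>/tasks request for a lemlem_prompt task."""
    body = json.dumps({
        "task_key": "lemlem_prompt",
        "payload": {"model": model, "prompt": prompt},
    }).encode()
    return urllib.request.Request(
        f"{BASE}/tickets/{ticket_id}/tasks",
        data=body,
        headers={"content-type": "application/json"},
        method="POST",
    )

req = create_ticket("Hello", "default_ticket")
print(req.full_url, req.get_method())  # http://localhost:8010/tickets POST
```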
- Add a workflow YAML in `workflows/`
- Define your own task keys in your app layer
- Register custom executors in `evercore/executors/registry.py`
- Create tickets/tasks through the API
- For long-running custom executors, add `execute_with_control(ticket, task, control)` and check `control.should_stop()` periodically for cooperative pause/cancel support
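As a sketch of that cooperative pattern: only the `execute_with_control(ticket, task, control)` signature and `control.should_stop()` come from the list above; the `ExecutionControl` stub and the executor body below are assumptions for illustration.

```python
class ExecutionControl:
    """Stand-in stub for Evercore's control object; requests a stop after a few checks."""
    def __init__(self, stop_after: int = 3):
        self._checks = 0
        self._stop_after = stop_after

    def should_stop(self) -> bool:
        self._checks += 1
        return self._checks >= self._stop_after

def execute_with_control(ticket, task, control) -> str:
    """Process work in small chunks, checking for pause/cancel between them."""
    processed = 0
    for _chunk in range(100):        # pretend each iteration is one unit of work
        if control.should_stop():    # cooperative pause/cancel point
            return f"stopped after {processed} chunks"
        processed += 1
    return f"completed {processed} chunks"

result = execute_with_control({"id": "t1"}, {"task_key": "demo"}, ExecutionControl())
print(result)  # → stopped after 2 chunks
```

The key design point is that the worker never forcibly kills the executor; the executor volunteers to stop at safe boundaries, so partial work can be persisted before a pause or cancel.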
Evercore uses lemlem for model routing and LLM calls.
lemlem supports two model/preset configuration sources:
- a YAML/JSON file (for example via `LEMLEM_MODELS_CONFIG_PATH`)
- a database-backed model config service (dynamic runtime loading)
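For orientation, a file-based config might look roughly like this. The key names are purely illustrative assumptions; consult the lemlem documentation for the actual schema. Only the model id `openrouter:gemini-2.5-flash` is taken from the example above.

```yaml
# Illustrative only — not lemlem's real schema.
models:
  "openrouter:gemini-2.5-flash":
    provider: openrouter            # hypothetical provider field
    model: google/gemini-2.5-flash  # hypothetical upstream model slug
presets:
  default:
    model: "openrouter:gemini-2.5-flash"
    temperature: 0.2
```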