Commit b2cfc59: Updated documentation
1 parent c5e4798 · 2 files changed · 277 additions & 36 deletions

CHANGELOG.md (127 additions & 0 deletions)

All notable changes to this project will be documented in this file.

The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## [1.1.0] - 2026-02-18

### Added

#### Custom Content Policy Enforcement

You can now enforce your own content rules on top of LockLLM's built-in security. Create custom policies in the [dashboard](https://www.lockllm.com/policies), and the SDK will automatically check prompts against them. When a policy is violated, you'll get a `PolicyViolationError` with the exact policy name, violated categories, and details.

```python
import os

from lockllm import create_openai, PolicyViolationError

# Wrap the OpenAI client so prompts are scanned before they are sent
openai = create_openai(api_key=os.getenv("LOCKLLM_API_KEY"))

try:
    openai.chat.completions.create(...)
except PolicyViolationError as e:
    print(e.violated_policies)
    # [{"policy_name": "No competitor mentions", "violated_categories": [...]}]
```

#### AI Abuse Detection

Protect your endpoints from automated misuse. When enabled, LockLLM detects bot-generated content, repetitive prompts, and resource exhaustion attacks. If abuse is detected, you'll get an `AbuseDetectedError` with confidence scores and detailed indicator breakdowns.

```python
import os

from lockllm import create_openai, ProxyOptions

openai = create_openai(
    api_key=os.getenv("LOCKLLM_API_KEY"),
    proxy_options=ProxyOptions(abuse_action="block"),
)
```

#### Credit Balance Awareness

The SDK now returns a dedicated `InsufficientCreditsError` when your balance is too low for a request. The error includes your `current_balance` and the `estimated_cost`, so you can handle billing gracefully in your application.
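
A minimal sketch of handling it, assuming `InsufficientCreditsError` is exported from the top-level `lockllm` package like `PolicyViolationError`, and reusing a wrapped `openai` client from the examples above:

```python
from lockllm import InsufficientCreditsError

try:
    openai.chat.completions.create(...)
except InsufficientCreditsError as e:
    # current_balance and estimated_cost ship with the error, per the note above
    print(f"Balance {e.current_balance} is below the estimated cost {e.estimated_cost}")
    # e.g. pause traffic or prompt the user to top up credits before retrying
```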

#### Scan Modes and Actions

Control exactly what gets checked and what happens when threats are found:

- **Scan modes** - Choose `normal` (core security only), `policy_only` (custom policies only), or `combined` (both)
- **Actions per detection type** - Set `block` or `allow_with_warning` independently for core scans, custom policies, and abuse detection
- **Abuse detection** is opt-in - disabled by default, enable it with `abuse_action`

```python
# `lockllm` here is an initialized LockLLM client instance
result = lockllm.scan(
    input=user_prompt,
    scan_mode="combined",
    sensitivity="high",
    scan_action="block",
    policy_action="allow_with_warning",
    abuse_action="block",
)
```

You can also use the `ScanOptions` dataclass for reusable configurations:

```python
from lockllm import ScanOptions

opts = ScanOptions(scan_mode="combined", scan_action="block")
result = lockllm.scan(input=user_prompt, scan_options=opts)
```

#### Proxy Options on All Wrappers

All wrapper functions (`create_openai`, `create_anthropic`, `create_groq`, etc. and their `create_async_*` variants) now accept a `proxy_options` parameter so you can configure security behavior at initialization time:

```python
import os

from lockllm import create_openai, ProxyOptions

openai = create_openai(
    api_key=os.getenv("LOCKLLM_API_KEY"),
    proxy_options=ProxyOptions(
        scan_mode="combined",
        scan_action="block",
        policy_action="block",
        route_action="auto",
        cache_response=True,
        cache_ttl=3600,
    ),
)
```

#### Intelligent Routing

Let LockLLM automatically select the best model for each request based on task type and complexity. Set `route_action="auto"` to enable, or `route_action="custom"` to use your own routing rules from the dashboard.
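
As a sketch, routing can be switched on through the same `route_action` parameter shown in the `ProxyOptions` example above:

```python
import os

from lockllm import create_openai, ProxyOptions

# route_action="auto" lets LockLLM pick the model; "custom" would apply
# the routing rules configured in the dashboard instead.
openai = create_openai(
    api_key=os.getenv("LOCKLLM_API_KEY"),
    proxy_options=ProxyOptions(route_action="auto"),
)
```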

#### Response Caching

Reduce costs by caching identical LLM responses. Enabled by default in proxy mode - disable it with `cache_response=False` or customize the TTL with `cache_ttl`.
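
For example (a sketch, assuming `cache_ttl` is in seconds, as the `3600` in the wrapper example above suggests):

```python
import os

from lockllm import create_openai, ProxyOptions

# Shorten the cache TTL to five minutes; cache_response=False would
# disable response caching entirely.
openai = create_openai(
    api_key=os.getenv("LOCKLLM_API_KEY"),
    proxy_options=ProxyOptions(cache_response=True, cache_ttl=300),
)
```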

#### Universal Proxy Mode

Access 200+ models without configuring individual provider API keys using `get_universal_proxy_url()`. Uses LockLLM credits instead of BYOK.

```python
from lockllm import get_universal_proxy_url

url = get_universal_proxy_url()
# 'https://api.lockllm.com/v1/proxy'
```

#### Proxy Response Metadata

New utilities to read detailed metadata from proxy responses - scan results, routing decisions, cache status, and credit usage:

```python
from lockllm import parse_proxy_metadata

metadata = parse_proxy_metadata(response_headers)
# metadata.safe, metadata.routing, metadata.cache_status, metadata.credits_deducted, etc.
```

#### Expanded Scan Response

Scan responses now include richer data when using advanced features, as the sketch after this list shows:

- `policy_warnings` - Which custom policies were violated and why
- `scan_warning` - Injection details when using `allow_with_warning`
- `abuse_warnings` - Abuse indicators when abuse detection is enabled
- `routing` - Task type, complexity score, and selected model when routing is enabled
- `policy_confidence` - Separate confidence score for policy checks
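
A minimal sketch of reading the new fields (assuming each is an attribute on the scan result that is empty or `None` when the corresponding feature is off):

```python
result = lockllm.scan(input=user_prompt, scan_mode="combined")

if result.policy_warnings:
    for warning in result.policy_warnings:  # one entry per violated policy
        print("Policy warning:", warning)
if result.routing:
    print("Routing:", result.routing)  # task type, complexity, selected model
print("Policy confidence:", result.policy_confidence)
```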

#### New Type Exports

- `ProxyOptions` - Configuration dataclass for proxy wrapper options
- `ProxyResponseMetadata` - Typed dataclass for parsed proxy response metadata
- `ScanOptions`, `ScanMode`, `ScanAction`, `RouteAction` - Scan configuration types
- `PolicyViolation`, `ViolatedCategory`, `ScanWarning`, `AbuseWarning`, `RoutingInfo` - Scan response types
- `TaskType`, `ComplexityTier` - Routing type aliases

### Changed

- The scan API is fully backward compatible - existing code works without changes. Internally, scan configuration is now sent via HTTP headers for better compatibility and caching behavior.
- `ScanResult.confidence` and `ScanResult.injection` are now `Optional[float]` (they are `None` in `policy_only` mode where core injection scanning is skipped).

### Notes

- All new features are opt-in. Existing integrations continue to work without any changes.
- Custom policies, abuse detection, and routing are configured in the [LockLLM dashboard](https://www.lockllm.com/dashboard).

---

## [1.0.0] - 2025-01-17

### Added