All notable changes to this project will be documented in this file.

The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [1.1.0] - 2026-02-18
### Added
#### Custom Content Policy Enforcement
You can now enforce your own content rules on top of LockLLM's built-in security. Create custom policies in the [dashboard](https://www.lockllm.com/policies), and the SDK will automatically check prompts against them. When a policy is violated, you'll get a `PolicyViolationError` with the exact policy name, violated categories, and details.
```python
import os

from lockllm import create_openai, PolicyViolationError

openai = create_openai(api_key=os.getenv("LOCKLLM_API_KEY"))

try:
    openai.chat.completions.create(...)
except PolicyViolationError as e:
    print(e.violated_policies)
    # [{"policy_name": "No competitor mentions", "violated_categories": [...]}]
```
#### AI Abuse Detection
Protect your endpoints from automated misuse. When enabled, LockLLM detects bot-generated content, repetitive prompts, and resource exhaustion attacks. If abuse is detected, you'll get an `AbuseDetectedError` with confidence scores and detailed indicator breakdowns.
```python
import os

from lockllm import create_openai, ProxyOptions

openai = create_openai(
    api_key=os.getenv("LOCKLLM_API_KEY"),
    proxy_options=ProxyOptions(abuse_action="block"),
)
```
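
With `abuse_action="block"`, the resulting error can be handled much like the policy error above; a minimal sketch (only the `AbuseDetectedError` name comes from this entry, and the request call is illustrative):

```python
from lockllm import AbuseDetectedError

try:
    openai.chat.completions.create(...)
except AbuseDetectedError as e:
    # Carries confidence scores and detailed indicator breakdowns.
    print(e)
```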
#### Credit Balance Awareness
The SDK now returns a dedicated `InsufficientCreditsError` when your balance is too low for a request. The error includes your `current_balance` and the `estimated_cost`, so you can handle billing gracefully in your application.
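
A minimal sketch of catching it, assuming the `openai` client from the snippets above (the `current_balance` and `estimated_cost` attributes are as described; the request call is illustrative):

```python
from lockllm import InsufficientCreditsError

try:
    openai.chat.completions.create(...)
except InsufficientCreditsError as e:
    print(f"Balance {e.current_balance} is below the estimated cost {e.estimated_cost}")
```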
#### Scan Modes and Actions
Control exactly what gets checked and what happens when threats are found:

```python
result = lockllm.scan(input=user_prompt, scan_options=opts)
```
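
A hypothetical sketch of building the `opts` object above; the `ScanOptions` name and its fields are assumptions (not confirmed by this entry), with values reusing the modes and actions named elsewhere in this changelog:

```python
from lockllm import ScanOptions  # hypothetical import; class and field names assumed

opts = ScanOptions(
    scan_mode="combined",              # what gets checked (assumed field)
    scan_action="allow_with_warning",  # what happens on a detection (assumed field)
)
# Then scan as above: lockllm.scan(input=user_prompt, scan_options=opts)
```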
#### Proxy Options on All Wrappers
All wrapper functions (`create_openai`, `create_anthropic`, `create_groq`, etc., and their `create_async_*` variants) now accept a `proxy_options` parameter so you can configure security behavior at initialization time:
```python
import os

from lockllm import create_openai, ProxyOptions

openai = create_openai(
    api_key=os.getenv("LOCKLLM_API_KEY"),
    proxy_options=ProxyOptions(
        scan_mode="combined",
        scan_action="block",
        policy_action="block",
        route_action="auto",
        cache_response=True,
        cache_ttl=3600,
    ),
)
```
#### Intelligent Routing
Let LockLLM automatically select the best model for each request based on task type and complexity. Set `route_action="auto"` to enable, or `route_action="custom"` to use your own routing rules from the dashboard.
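
For example, a sketch enabling automatic routing with the same `ProxyOptions` wrapper configuration shown above:

```python
import os

from lockllm import create_openai, ProxyOptions

openai = create_openai(
    api_key=os.getenv("LOCKLLM_API_KEY"),
    proxy_options=ProxyOptions(route_action="auto"),  # or "custom" for dashboard rules
)
```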
#### Response Caching
Reduce costs by caching identical LLM responses. Enabled by default in proxy mode - disable it with `cache_response=False` or customize the TTL with `cache_ttl`.
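
For instance, a sketch of opting out, using the same `ProxyOptions` fields shown earlier:

```python
import os

from lockllm import create_openai, ProxyOptions

openai = create_openai(
    api_key=os.getenv("LOCKLLM_API_KEY"),
    # Opt out of caching; alternatively pass cache_ttl to adjust the lifetime.
    proxy_options=ProxyOptions(cache_response=False),
)
```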
#### Universal Proxy Mode
Access 200+ models without configuring individual provider API keys using `get_universal_proxy_url()`. Uses LockLLM credits instead of bring-your-own-key (BYOK) billing.
```python
from lockllm import get_universal_proxy_url

url = get_universal_proxy_url()
# 'https://api.lockllm.com/v1/proxy'
```
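
One hypothetical way to use the URL, assuming the endpoint speaks the OpenAI wire protocol and accepts your LockLLM API key (neither is stated in this entry):

```python
import os

from openai import OpenAI
from lockllm import get_universal_proxy_url

# Assumption: OpenAI-compatible endpoint authenticated with the LockLLM key.
client = OpenAI(
    base_url=get_universal_proxy_url(),
    api_key=os.getenv("LOCKLLM_API_KEY"),
)
```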
#### Proxy Response Metadata
New utilities to read detailed metadata from proxy responses - scan results, routing decisions, cache status, and credit usage:
```python
from lockllm import parse_proxy_metadata

# response_headers: the HTTP headers returned by a proxied LLM call
metadata = parse_proxy_metadata(response_headers)
# metadata.safe, metadata.routing, metadata.cache_status, metadata.credits_deducted, etc.
```
#### Expanded Scan Response
Scan responses now include richer data when using advanced features (see the reading sketch after this list):
- `policy_warnings` - Which custom policies were violated and why
- `scan_warning` - Injection details when using `allow_with_warning`
- `abuse_warnings` - Abuse indicators when abuse detection is enabled
- `routing` - Task type, complexity score, and selected model when routing is enabled
- `policy_confidence` - Separate confidence score for policy checks
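
A sketch of reading these fields from a scan result (attribute-style access and the `or []` default are assumptions; the field names are as listed above):

```python
result = lockllm.scan(input=user_prompt)

if result.routing:                       # populated when routing is enabled
    print(result.routing)
for warning in result.policy_warnings or []:
    print(warning)                       # which policy was violated and why
```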
#### New Type Exports
- `ProxyOptions` - Configuration dataclass for proxy wrapper options
- `ProxyResponseMetadata` - Typed dataclass for parsed proxy response metadata
- `TaskType`, `ComplexityTier` - Routing type aliases
### Changed
- The scan API is fully backward compatible - existing code works without changes. Internally, scan configuration is now sent via HTTP headers for better compatibility and caching behavior.
- `ScanResult.confidence` and `ScanResult.injection` are now `Optional[float]` (they are `None` in `policy_only` mode where core injection scanning is skipped).
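
A minimal guard for code that previously treated these as plain floats (a sketch; the scan call mirrors the earlier snippets):

```python
result = lockllm.scan(input=user_prompt)

# In policy_only mode these fields are None, so guard before using them.
if result.confidence is not None and result.injection is not None:
    print(result.confidence, result.injection)
```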
### Notes
- All new features are opt-in. Existing integrations continue to work without any changes.
- Custom policies, abuse detection, and routing are configured in the [LockLLM dashboard](https://www.lockllm.com/dashboard).