Dynamic authentication with client-provided client_id and client_secret
This project implements a clean, production-ready OAuth2 proxy in front of Keycloak.
The backend does not store client credentials for login/refresh/logout.
Instead, the client sends them in each authentication request, making the system flexible, multi-tenant, and secure.
For opaque token introspection, the resource server uses service credentials (env vars) to validate tokens and enable immediate logout.
Supported features:
- 🔑 Username/password login
- 🔄 Token refresh
- 🚪 Logout (refresh token revocation)
- 🔐 DPoP support (proof forwarding to Keycloak + proof validation for protected endpoints)
- 🛡 Opaque token validation via Spring Security introspection
- 🎭 Role-based authorization (ADMIN, CLIENT_CREATE, CLIENT_GET, CLIENT_SEARCH, UPDATE_BALANCE)
- 🔎 Client search endpoint with dedicated role `CLIENT_SEARCH`
- 🚦 Configurable rate limiting (Bucket4j)
- ⏱ Persistent Quartz scheduler for asynchronous client creation requests
- 💳 Account balance endpoints with pessimistic/optimistic locking
- 📬 Request-status endpoint for asynchronous operations
- 🧪 Full integration test suite
- 📦 Automatic Keycloak realm import (users, roles, mappers)
- 🧪 WireMock for negative testing (network failures, timeouts, error responses)
- 📊 OpenTelemetry tracing and JSON logging
- 📈 Prometheus metrics via Actuator
- 📚 Swagger/OpenAPI documentation
- Java 25
- Spring Boot 4.0.3
- Spring Security (Resource Server)
- Spring Web (REST)
- Keycloak 26+
- Bucket4j core + custom servlet filter
- Quartz Scheduler (JDBC job store)
- JUnit Jupiter 6 + RestTemplate-based integration tests
- Testcontainers (Keycloak, PostgreSQL)
- WireMock (Negative testing)
- OpenTelemetry + JSON logging (logstash-logback-encoder)
- Micrometer + Prometheus
- Docker Compose
Baseline verified on JDK 25 with Lombok 1.18.44 and JaCoCo 0.8.14.
Create a `.env` file from the template and set real values:

```
copy .env.example .env
```

Required variables:

- `KEYCLOAK_DB_PASSWORD`
- `APP_DB_PASSWORD`
- `KEYCLOAK_ADMIN_PASSWORD`
- `GRAFANA_ADMIN_PASSWORD`

Resource server introspection credentials (confidential client in Keycloak):

- `KEYCLOAK_RESOURCE_CLIENT_ID`
- `KEYCLOAK_RESOURCE_CLIENT_SECRET`
- `docker-compose.yaml` - shared stack (databases, Keycloak, observability, common `jwt-demo` settings)
- `docker-compose.dev.yaml` - dev override for `jwt-demo` (Maven container + source mount + watch)
- `docker-compose.prod.yaml` - prod override for `jwt-demo` (image build from `Dockerfile`)
Start services:

```
docker compose -f docker-compose.yaml -f docker-compose.dev.yaml up --build
```

Enable code watching (run in another terminal):

```
docker compose -f docker-compose.yaml -f docker-compose.dev.yaml watch
```

If watch is not supported by your Docker Compose version, keep `up --build` running and restart only the app container manually after code changes:

```
docker compose -f docker-compose.yaml -f docker-compose.dev.yaml restart jwt-demo
```

Keycloak automatically imports:

- realm `my-realm`
- users (`user`, `admin`)
- roles (`ADMIN`, `CLIENT_CREATE`, `CLIENT_GET`, `CLIENT_SEARCH`, `UPDATE_BALANCE`, `offline_access`)
- client `spring-app`
- protocol mappers (roles → access_token)
PROD mode:

```
docker compose -f docker-compose.yaml -f docker-compose.prod.yaml up -d --build
```

Run locally (infrastructure in Docker, application via Maven):

```
docker compose up -d postgres postgres-app keycloak
mvn spring-boot:run
```

Stop DEV mode:

```
docker compose -f docker-compose.yaml -f docker-compose.dev.yaml down
docker compose -f docker-compose.yaml -f docker-compose.dev.yaml down -v
```

Stop PROD mode:

```
docker compose -f docker-compose.yaml -f docker-compose.prod.yaml down
docker compose -f docker-compose.yaml -f docker-compose.prod.yaml down -v
```

Keycloak UI:
http://localhost:8080
Application runs at:
http://localhost:8081
Swagger UI:
http://localhost:8081/swagger-ui/index.html
OpenAPI JSON:
http://localhost:8081/v3/api-docs
The OpenAPI spec is generated automatically at runtime. Use Swagger UI to explore and try endpoints.
Helpful links:

- Swagger UI: http://localhost:8081/swagger-ui/index.html
- OpenAPI JSON: http://localhost:8081/v3/api-docs
```properties
server.port=8081
keycloak.realm=my-realm
keycloak.auth-server-url=http://localhost:8080
# Service credentials for introspection (resource server)
keycloak.resource-client-id=${KEYCLOAK_RESOURCE_CLIENT_ID}
keycloak.resource-client-secret=${KEYCLOAK_RESOURCE_CLIENT_SECRET}
keycloak.token-url=${keycloak.auth-server-url}/realms/${keycloak.realm}/protocol/openid-connect/token
keycloak.logout-url=${keycloak.auth-server-url}/realms/${keycloak.realm}/protocol/openid-connect/logout
keycloak.introspection-url=${keycloak.auth-server-url}/realms/${keycloak.realm}/protocol/openid-connect/token/introspect
spring.security.oauth2.resourceserver.opaque-token.introspection-uri=${keycloak.introspection-url}
spring.security.oauth2.resourceserver.opaque-token.client-id=${keycloak.resource-client-id}
spring.security.oauth2.resourceserver.opaque-token.client-secret=${keycloak.resource-client-secret}
# DPoP validation
security.dpop.enabled=true
security.dpop.max-proof-age=5m
security.dpop.clock-skew=30s
security.dpop.replay-cache-size=100000
# Persistent Quartz jobs in PostgreSQL
spring.quartz.job-store-type=jdbc
spring.quartz.jdbc.initialize-schema=never
spring.quartz.properties.org.quartz.scheduler.instanceName=jwt-demo-scheduler
spring.quartz.properties.org.quartz.jobStore.class=org.springframework.scheduling.quartz.LocalDataSourceJobStore
spring.quartz.properties.org.quartz.jobStore.driverDelegateClass=org.quartz.impl.jdbcjobstore.PostgreSQLDelegate
spring.quartz.properties.org.quartz.jobStore.tablePrefix=QRTZ_
spring.quartz.properties.org.quartz.jobStore.isClustered=true
# Rate limiting
app.rate-limit.login-path=/api/auth/login
app.rate-limit.clients-path-prefix=/api/clients
app.rate-limit.rate-limited-client-id=spring-app
app.rate-limit.login.capacity=5
app.rate-limit.login.window-seconds=60
app.rate-limit.clients.capacity=20
app.rate-limit.clients.window-seconds=60
```

The backend does not store client credentials for login/refresh/logout.
The resource server uses service credentials (env vars) for introspection.
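Conceptually this is an RFC 7662 token introspection call: the backend POSTs the opaque token to Keycloak's introspection endpoint, authenticating with the service credentials over HTTP Basic. A plain-Java sketch of the two pieces of that request (Spring Security builds this internally from the opaque-token properties; the class and values below are illustrative):

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Sketch of an RFC 7662 introspection request: the opaque token goes in a
// form-encoded body, and the confidential client authenticates with HTTP
// Basic using the service credentials. Illustrative only: Spring Security's
// opaque-token support does this for you.
public class IntrospectionRequestDemo {
    static String basicAuth(String clientId, String clientSecret) {
        String raw = clientId + ":" + clientSecret;
        return "Basic " + Base64.getEncoder().encodeToString(raw.getBytes(StandardCharsets.UTF_8));
    }

    static String formBody(String opaqueToken) {
        return "token=" + URLEncoder.encode(opaqueToken, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        // POST {keycloak.introspection-url} with this header and body:
        System.out.println("Authorization: " + basicAuth("resource-client", "secret"));
        System.out.println(formBody("opaque-token-value"));
    }
}
```

Keycloak answers with `active: true/false` plus token claims, which is what makes logout take effect immediately: a revoked session flips `active` to `false` on the very next request.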
POST /api/clients accepts a request for background processing instead of creating the client row synchronously.
- The API validates the request body.
- A new `Request` row is persisted with `type=CLIENT_CREATE` and `status=PENDING`.
- The original payload is stored in PostgreSQL `jsonb` (`request_data`).
- A persistent Quartz job is created in PostgreSQL (`QRTZ_*` tables).
- A Quartz worker changes the request status to `PROCESSING` and executes the existing client creation business logic.
- On success, the request becomes `COMPLETED` and `response_data` stores the final JSON response.
- On failure, the request becomes `FAILED` and `response_data` stores the error JSON response.
- The caller polls `GET /api/requests/{requestId}` until processing finishes.
This makes processing durable across application restarts while keeping the public API responsive.
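The status transitions described above can be sketched as a tiny state machine (plain Java; the names are illustrative and the project's actual entity/enum may differ):

```java
import java.util.Map;
import java.util.Set;

// Illustrative sketch of the request lifecycle described above:
// PENDING -> PROCESSING -> COMPLETED | FAILED (terminal).
public class RequestLifecycleDemo {

    enum RequestStatus {
        PENDING, PROCESSING, COMPLETED, FAILED;

        private static final Map<RequestStatus, Set<RequestStatus>> ALLOWED = Map.of(
                PENDING, Set.of(PROCESSING),
                PROCESSING, Set.of(COMPLETED, FAILED),
                COMPLETED, Set.of(),   // terminal
                FAILED, Set.of());     // terminal

        boolean canTransitionTo(RequestStatus next) {
            return ALLOWED.get(this).contains(next);
        }
    }

    public static void main(String[] args) {
        System.out.println(RequestStatus.PENDING.canTransitionTo(RequestStatus.PROCESSING));  // true
        System.out.println(RequestStatus.COMPLETED.canTransitionTo(RequestStatus.PENDING));   // false
    }
}
```

Because both the `Request` row and the Quartz job live in PostgreSQL, a restart mid-`PROCESSING` loses neither the payload nor the scheduled work.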
Use the prepared SQL script to populate 1000 demo clients and matching zero-balance accounts:
```
\i src/main/resources/db/seed/clients_1000_fake.sql
```

The script is idempotent and can be re-run safely.
For best /api/clients/search performance, install the PostgreSQL pg_trgm extension at the infrastructure level.
The Flyway migration creates trigram indexes only when pg_trgm is already available.
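For intuition, pg_trgm works by comparing sets of three-character substrings. A rough plain-Java illustration of trigram extraction (PostgreSQL's real algorithm also lower-cases input and pads word boundaries with spaces, so its trigram sets differ slightly):

```java
import java.util.LinkedHashSet;
import java.util.Set;

// Rough illustration of trigram extraction, the idea behind pg_trgm
// similarity search. PostgreSQL additionally lower-cases input and pads
// words with spaces, so actual pg_trgm output differs slightly.
public class TrigramDemo {
    static Set<String> trigrams(String s) {
        Set<String> out = new LinkedHashSet<>();
        for (int i = 0; i + 3 <= s.length(); i++) {
            out.add(s.substring(i, i + 3));
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(trigrams("john")); // [joh, ohn]
    }
}
```

A trigram index lets `LIKE '%ohn%'` and similarity searches use the index instead of a sequential scan, which is why the search endpoint benefits from it.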
Unit test coverage report:
```
mvn test
```

Integration test coverage report:

```
mvn verify
```

Reports are generated at:

- `target/site/jacoco/index.html`
- `target/site/jacoco-it/index.html`
```mermaid
flowchart LR
    Client[Client<br/>Frontend] -->|username/password<br/>clientId/clientSecret| Proxy[Spring Boot Proxy<br/>This project]
    Proxy -->|/token, /logout| Keycloak[Keycloak<br/>Auth Server]
    Keycloak -->|tokens / logout result| Proxy
    Proxy -->|AppResponse| Client
```
The project uses a clean, layered security architecture combining:
- Keycloak for authentication and role assignment
- Spring Security for opaque token introspection
- DPoP proof validation (`Authorization: DPoP <token>` + `DPoP` header)
- Method-level authorization via `@PreAuthorize`
- A custom role converter for mapping Keycloak roles to Spring authorities
This ensures a clear separation of responsibilities:
| Layer | Responsibility |
|---|---|
| Keycloak | Authentication, issuing tokens, storing users, roles, and mappers |
| Spring Security | Introspecting tokens, extracting authorities, enforcing access rules |
| Controllers | Declaring authorization rules via annotations |
- Client sends username/password + clientId/clientSecret to `/api/auth/login`
- Backend forwards credentials to Keycloak `/token`
- Keycloak returns:
  - access_token
  - refresh_token
- Backend returns tokens to the client
- Client uses access_token for all protected endpoints
- For `/api/auth/login`, `/api/auth/refresh`, and `/api/auth/logout` you can pass `DPoP: <proof-jwt>`; the backend forwards it to Keycloak.
- Protected endpoints accept:
  - `Authorization: Bearer <access_token>` (existing flow)
  - `Authorization: DPoP <access_token>` + `DPoP: <proof-jwt>` (DPoP flow)
- If introspection returns `cnf.jkt`, protected endpoints require a valid DPoP proof (`htm`, `htu`, `iat`, `jti`, `ath`, signature, thumbprint binding).
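Two of those proof checks, the `iat` freshness window and the `jti` replay cache, can be illustrated in isolation (a simplified plain-Java sketch with hypothetical names; the real filter also verifies the signature, `htm`/`htu`, `ath`, and thumbprint binding):

```java
import java.time.Duration;
import java.time.Instant;
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

// Simplified sketch of two DPoP checks: the iat freshness window
// (security.dpop.max-proof-age / clock-skew) and the jti replay cache
// (security.dpop.replay-cache-size). Names are illustrative.
public class DpopDemo {

    static class DpopReplayChecker {
        private final Duration maxProofAge;
        private final Duration clockSkew;
        private final Set<String> seenJti;

        DpopReplayChecker(Duration maxProofAge, Duration clockSkew, int cacheSize) {
            this.maxProofAge = maxProofAge;
            this.clockSkew = clockSkew;
            // Bounded LRU jti cache, mirroring security.dpop.replay-cache-size
            this.seenJti = Collections.newSetFromMap(
                    new LinkedHashMap<String, Boolean>(16, 0.75f, true) {
                        @Override
                        protected boolean removeEldestEntry(Map.Entry<String, Boolean> eldest) {
                            return size() > cacheSize;
                        }
                    });
        }

        boolean accept(String jti, Instant iat, Instant now) {
            boolean notFromFuture = !iat.isAfter(now.plus(clockSkew));
            boolean notTooOld = !iat.isBefore(now.minus(maxProofAge).minus(clockSkew));
            // Set.add() returns false when the jti was already seen (replay)
            return notFromFuture && notTooOld && seenJti.add(jti);
        }
    }

    public static void main(String[] args) {
        DpopReplayChecker checker =
                new DpopReplayChecker(Duration.ofMinutes(5), Duration.ofSeconds(30), 100_000);
        Instant now = Instant.now();
        System.out.println(checker.accept("jti-1", now, now)); // true: first use
        System.out.println(checker.accept("jti-1", now, now)); // false: replayed jti
    }
}
```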
```mermaid
sequenceDiagram
    autonumber
    actor Client
    participant SpringBoot as Spring Boot (AuthController)
    participant Keycloak
    Client->>SpringBoot: POST /api/auth/login\n{username, password, clientId, clientSecret}
    SpringBoot->>Keycloak: KeycloakAuthService.login()
    Keycloak->>Keycloak: POST /realms/my-realm/protocol/openid-connect/token\ngrant_type=password\nusername, password\nclient_id, client_secret
    Keycloak-->>SpringBoot: 200 OK\n{access_token, refresh_token}
    SpringBoot-->>Client: AppResponse<AuthResponse>
```
```mermaid
sequenceDiagram
    autonumber
    actor Client
    participant SpringBoot as Spring Boot (AuthController)
    participant Keycloak
    Client->>SpringBoot: POST /api/auth/refresh\n{refreshToken, clientId, clientSecret}
    SpringBoot->>Keycloak: KeycloakAuthService.refresh()
    Keycloak->>Keycloak: POST /realms/my-realm/protocol/openid-connect/token\ngrant_type=refresh_token\nrefresh_token\nclient_id, client_secret
    Keycloak-->>SpringBoot: 200 OK\n{new_access_token, new_refresh_token}
    SpringBoot-->>Client: AppResponse<AuthResponse>
```
```mermaid
sequenceDiagram
    autonumber
    actor Client
    participant SpringBoot as Spring Boot (AuthController)
    participant Keycloak
    Client->>SpringBoot: POST /api/auth/logout\n{refreshToken, clientId, clientSecret}
    SpringBoot->>Keycloak: KeycloakAuthService.logout()
    Keycloak->>Keycloak: POST /realms/my-realm/protocol/openid-connect/logout\nclient_id, client_secret\nrefresh_token
    Keycloak-->>SpringBoot: 200 OK (always)
    SpringBoot-->>Client: AppResponse(success=true)
```
| Role | Description |
|---|---|
| ADMIN | Administrative user |
| CLIENT_CREATE | Allows creating clients |
| CLIENT_GET | Allows reading client data |
| UPDATE_BALANCE | Allows account balance updates |

Assignments:

- `user` → `CLIENT_CREATE`, `CLIENT_GET`, `UPDATE_BALANCE`
- `admin` → `ADMIN`
Keycloak → Spring Security:

```
ADMIN          -> ROLE_ADMIN
CLIENT_CREATE  -> ROLE_CLIENT_CREATE  (checked via @PreAuthorize("hasRole('CLIENT_CREATE')"))
CLIENT_GET     -> ROLE_CLIENT_GET     (checked via @PreAuthorize("hasRole('CLIENT_GET')"))
UPDATE_BALANCE -> ROLE_UPDATE_BALANCE (checked via @PreAuthorize("hasRole('UPDATE_BALANCE')"))
```
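The mapping itself is mechanical: each realm role becomes a `ROLE_`-prefixed authority, which is exactly what `hasRole('X')` checks against. A minimal stdlib-only illustration (not the project's actual converter, which plugs into Spring Security's introspection result):

```java
import java.util.List;
import java.util.stream.Collectors;

// Illustrative sketch of the Keycloak -> Spring mapping above: realm roles
// from realm_access.roles become ROLE_-prefixed Spring authorities.
public class RoleMappingDemo {
    static List<String> toAuthorities(List<String> realmRoles) {
        return realmRoles.stream()
                .map(role -> "ROLE_" + role)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(toAuthorities(List.of("ADMIN", "CLIENT_GET")));
        // [ROLE_ADMIN, ROLE_CLIENT_GET]
    }
}
```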
| User | POST /api/clients | GET /api/requests/{id} | GET /api/clients/{id} | GET /api/accounts/client/{clientId} | POST /api/accounts/balance/pessimistic | POST /api/accounts/balance/optimistic |
|---|---|---|---|---|---|---|
| user | ✅ CLIENT_CREATE | ✅ CLIENT_CREATE | ✅ CLIENT_GET | ✅ CLIENT_GET | ✅ UPDATE_BALANCE | ✅ UPDATE_BALANCE |
| admin | ❌ Forbidden | ❌ Forbidden | ❌ Forbidden | ❌ Forbidden | ❌ Forbidden | ❌ Forbidden |
To add new roles:
- Create a realm role in Keycloak
- Assign it to users
- Protect endpoints:
```java
@RestController
class ExampleController {

    @PreAuthorize("hasRole('MANAGER')")
    @GetMapping("/api/example") // illustrative mapping
    public AppResponse<Void> example() {
        return AppResponse.ok(null);
    }
}
```

No changes are required in `SecurityConfig`.
POST /api/auth/login
Optional header for DPoP token issuance:
DPoP: <proof-jwt>
```json
{
  "username": "user",
  "password": "password",
  "clientId": "spring-app",
  "clientSecret": "CHANGE_ME"
}
```

POST /api/auth/refresh
Optional header:
DPoP: <proof-jwt>
```json
{
  "refreshToken": "...",
  "clientId": "spring-app",
  "clientSecret": "CHANGE_ME"
}
```

POST /api/auth/logout
Optional header:
DPoP: <proof-jwt>
```json
{
  "refreshToken": "...",
  "clientId": "spring-app",
  "clientSecret": "CHANGE_ME"
}
```

POST /api/accounts/balance/pessimistic
Requires bearer token role: UPDATE_BALANCE
```json
{
  "clientId": 1,
  "amount": 100.50
}
```

POST /api/accounts/balance/optimistic
Requires bearer token role: UPDATE_BALANCE
```json
{
  "clientId": 1,
  "amount": -50.00
}
```

GET /api/requests/{requestId}
Requires bearer token role: CLIENT_CREATE
GET /api/accounts/client/{clientId}
Requires bearer token role: CLIENT_GET
Authorization options:
- `Authorization: Bearer <access_token>`
- `Authorization: DPoP <access_token>` and `DPoP: <proof-jwt>`
| Endpoint | Method | Required role |
|---|---|---|
| /api/clients | POST | CLIENT_CREATE |
| /api/requests/{id} | GET | CLIENT_CREATE |
| /api/clients/{id} | GET | CLIENT_GET |
| /api/accounts/client/{clientId} | GET | CLIENT_GET |
| /api/accounts/balance/pessimistic | POST | UPDATE_BALANCE |
| /api/accounts/balance/optimistic | POST | UPDATE_BALANCE |
Rate limits are implemented with Bucket4j core and a custom servlet filter compatible with Spring Boot 4.
They are configured in `application.properties` via `app.rate-limit.*` properties.
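The semantics behind those properties are a fixed window of `capacity` requests per `window-seconds`. A stand-alone plain-Java sketch of that behaviour (an illustration of the idea, not Bucket4j's actual API):

```java
import java.util.HashMap;
import java.util.Map;

// Plain-Java sketch of fixed-window rate limiting, the semantics behind
// the app.rate-limit.* properties. Illustrative only; the project uses
// Bucket4j inside a custom servlet filter.
public class RateLimitDemo {

    static class FixedWindowLimiter {
        private final int capacity;
        private final long windowMillis;
        private final Map<String, long[]> state = new HashMap<>(); // key -> {windowStart, count}

        FixedWindowLimiter(int capacity, long windowSeconds) {
            this.capacity = capacity;
            this.windowMillis = windowSeconds * 1000;
        }

        synchronized boolean tryConsume(String key, long nowMillis) {
            long[] s = state.computeIfAbsent(key, k -> new long[] { nowMillis, 0 });
            if (nowMillis - s[0] >= windowMillis) { // window expired: reset
                s[0] = nowMillis;
                s[1] = 0;
            }
            if (s[1] < capacity) {
                s[1]++;
                return true;
            }
            return false; // would map to HTTP 429
        }
    }

    public static void main(String[] args) {
        FixedWindowLimiter login = new FixedWindowLimiter(5, 60); // login.capacity=5
        for (int i = 0; i < 5; i++) System.out.println(login.tryConsume("user", 0)); // true x5
        System.out.println(login.tryConsume("user", 0));      // false: limit hit
        System.out.println(login.tryConsume("user", 60_000)); // true: new window
    }
}
```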
Example:

```properties
app.rate-limit.login-path=/api/auth/login
app.rate-limit.clients-path-prefix=/api/clients
app.rate-limit.rate-limited-client-id=spring-app
app.rate-limit.login.capacity=5
app.rate-limit.login.window-seconds=60
app.rate-limit.clients.capacity=20
app.rate-limit.clients.window-seconds=60
```

The project includes comprehensive integration tests using Testcontainers and WireMock:
Uses real Keycloak container via Testcontainers to verify:
- ✅ Successful login (user and admin)
- ✅ Successful refresh token flow
- ✅ Successful logout
- ❌ Login with wrong password
- ❌ Login with unknown user
- ❌ Refresh with invalid token
- ❌ Logout with invalid token
- 🛡 Role-based access control (USER/ADMIN)
- 🔐 JWT validation and protected endpoints
Uses WireMock to simulate Keycloak failures:
- 🔥 Keycloak server errors (500)
- ⏱ Connection timeouts
- 📡 Network failures (connection reset)
- 🚫 Malformed JSON responses
- 📭 Empty responses (204 No Content)
- 🔐 Invalid credentials
- 🔒 Disabled accounts
- 🔑 Invalid client credentials
- 🎫 Invalid/expired refresh tokens
- 🚪 Logout errors
Tests DTO validation for authentication endpoints.
Run all tests:
```
mvn test
```

Run integration tests only:

```
mvn verify
```

Run specific test:

```
mvn test -Dtest=KeycloakNegativeIT
```

Note: The integration-test helpers in `src/test/java/lt/satsyuk/api/util/AbstractIntegrationTest.java` were recently improved to make writing and maintaining tests easier and more robust.
- `postAndGetData(String url, String token, Object body, Class<T> clazz)`: sends a POST to `url` with an optional Bearer `token`, asserts HTTP 200, and converts `AppResponse.data` into `clazz` using the autowired `ObjectMapper`.
- `getAndGetData(String url, String token, Class<T> clazz)`: same as above for GET requests.
- `assertErrorStatusAndBody(ResponseEntity<AppResponse<T>> resp, HttpStatus expectedStatus, int expectedCode, Object expectedMessage)`: helper for negative tests; checks the HTTP status, `AppResponse.code`, and message (supports String or Set for validation errors).
Why use them:

- They avoid unsafe unchecked casts (LinkedHashMap → POJO) by converting raw `data` into the requested DTO using Jackson.
- They make positive test code concise and resilient to deserialization differences.
Example (creating a client and then fetching it by id):
```java
class ClientIntegrationExample extends AbstractIntegrationTest {

    @Test
    void createsAndFetchesClient() {
        CreateClientRequest req = new CreateClientRequest("John", "Doe", "+37061234567");

        ClientResponse created = postAndGetData(clientUrl, token, req, ClientResponse.class);
        assertThat(created.id()).isNotNull();

        ClientResponse fetched = getAndGetData(clientUrl + "/" + created.id(), token, ClientResponse.class);
        assertThat(fetched.id()).isEqualTo(created.id());
        assertThat(fetched.phone()).isEqualTo(created.phone());
    }
}
```

Additional notes:
- If you need to work with `ResponseEntity<AppResponse<T>>` directly, use an explicit `ParameterizedTypeReference<AppResponse<T>>`.
- For negative tests, use `assertErrorStatusAndBody(...)` to validate the HTTP status, API error code, and error message(s).
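The reason `ParameterizedTypeReference` works where `Class<T>` cannot is the classic "type token" trick: an anonymous subclass captures the full generic type at runtime, so Jackson can deserialize into `AppResponse<T>` instead of a raw `LinkedHashMap`. A minimal stdlib-only illustration of the mechanism (hypothetical class, not Spring's implementation):

```java
import java.lang.reflect.ParameterizedType;
import java.lang.reflect.Type;
import java.util.List;

// Minimal illustration of the type-token trick behind Spring's
// ParameterizedTypeReference: an anonymous subclass preserves the full
// generic type at runtime, which a plain Class<T> literal cannot.
public class TypeTokenDemo {

    abstract static class TypeToken<T> {
        Type captured() {
            return ((ParameterizedType) getClass().getGenericSuperclass())
                    .getActualTypeArguments()[0];
        }
    }

    public static void main(String[] args) {
        Type t = new TypeToken<List<String>>() {}.captured();
        System.out.println(t.getTypeName()); // java.util.List<java.lang.String>
    }
}
```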
- Run all unit tests:

  ```
  mvn test
  ```

- Run all integration tests (including Testcontainers):

  ```
  mvn clean verify -DskipTests=false -DtrimStackTrace=false
  ```

- Run a single integration test class (example: `ClientIntegrationIT`):

  ```
  mvn -DskipTests=false "-Dit.test=ClientIntegrationIT" verify -DtrimStackTrace=false
  ```

Notes:

- Ensure Docker is running for Testcontainers-based ITs (Keycloak/Postgres). Tests use assumptions and will skip if required containers are not available.
- If you encounter a ClassCastException (e.g. LinkedHashMap cannot be cast to MyDto), prefer the helpers above or ensure the request uses an explicit `ParameterizedTypeReference<AppResponse<T>>`.
Create a feature branch and push it:

```
git checkout -b add_some_feature
git add -A
git commit -m "tests: refactor helpers and cleanup"
git push -u origin add_some_feature
```

Create a pull request with GitHub CLI (optional):

```
gh pr create --base master --head add_some_feature --title "tests: refactor helpers & cleanup" --body-file pr_description.md
```

If `gh` is not available, open the GitHub UI and create a PR from your pushed branch into `master`.
Below is a short project structure: key files and folders with a brief purpose.
- `pom.xml` — Maven build configuration and dependencies.
- `Dockerfile`, `docker-compose.yaml` — containerization and local environment (Keycloak, Prometheus, Grafana, Tempo).
- `src/main/java/lt/satsyuk/` — application source code: controllers, services, jobs, configurations, and security logic. Notable packages: `controller`, `dto`, `auth` (Keycloak integration), `config`, `job`, `exception`, `security`.
- `src/main/resources/` — configurations and resources (`application.properties`, `logback-spring.xml`, Flyway migrations).
- `src/test/` — unit and integration tests (Testcontainers, WireMock).
- `keycloak/` — exported Keycloak realm for local import (`realm-export.json`).
- `grafana/` — provisioning dashboards and datasources for Grafana.
- `postman/` — Postman collections for manual API testing.
- `prometheus.yml`, `tempo.yaml` — monitoring and tracing configurations.
- `target/` — build artifacts (ignored in VCS).
Check token contains:
```json
{
  "realm_access": { "roles": ["CLIENT_GET", "CLIENT_CREATE", "UPDATE_BALANCE"] }
}
```

If missing → check Keycloak mappers.
For DPoP-bound access tokens:
- use `Authorization: DPoP <access_token>`
- send a `DPoP` proof with every protected request
- ensure the proof matches the method and URL and is signed by the key matching `cnf.jkt`
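A frequent mismatch is the `ath` claim: per RFC 9449 it is the unpadded base64url encoding of the SHA-256 hash of the access token exactly as sent. To compute the expected value when debugging:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Base64;

// Computes the expected DPoP "ath" claim (RFC 9449): base64url without
// padding of the SHA-256 hash over the ASCII bytes of the access token.
public class AthDemo {
    static String ath(String accessToken) {
        try {
            byte[] hash = MessageDigest.getInstance("SHA-256")
                    .digest(accessToken.getBytes(StandardCharsets.US_ASCII));
            return Base64.getUrlEncoder().withoutPadding().encodeToString(hash);
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // SHA-256 is always available
        }
    }

    public static void main(String[] args) {
        // Always a 43-character base64url string (32 hash bytes, no padding)
        System.out.println(ath("example-access-token"));
    }
}
```

Compare this value against the `ath` inside the rejected proof; a mismatch usually means the proof was built for a different (e.g. refreshed) access token.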
If request fails and the response includes X-Trace-Id, use it to find correlated logs/traces.
The API may include an `X-Trace-Id` response header for request correlation when a trace id is available.
Use this header value, when present, to investigate 401, 403, and 429 responses in Grafana/Tempo/Loki.
Keycloak 26 always returns 200 for /logout.
Your API wraps this into a structured response.
Integration tests require Docker to run Testcontainers.
- Install Docker Desktop
- Windows users: Use Docker Desktop version 4.28.x for stable Testcontainers support
- Newer versions (4.29+) may have compatibility issues with Testcontainers
- Download older versions from Docker Desktop release notes
- Ensure Docker is running
- Tests will be skipped if Docker is unavailable
Metrics (Prometheus format):
http://localhost:8081/actuator/prometheus
Health check:
http://localhost:8081/actuator/health
OTLP-first Observability (recommended)
- traces: Spring Boot -> OTLP -> OTel Collector -> Tempo
- logs: Spring Boot -> OTLP -> OTel Collector -> Loki
- metrics: Prometheus pulls `/actuator/prometheus`
```mermaid
flowchart LR
    subgraph App[Spring Boot jwt-demo]
        A1[HTTP metrics\nActuator /prometheus]
        A2[Traces OTLP\nmanagement.opentelemetry.tracing.export.otlp.endpoint]
        A3[Logs OTLP\nmanagement.opentelemetry.logging.export.otlp.endpoint]
    end
    subgraph Infra[Observability Infra]
        C[OTel Collector]
        T[Tempo]
        L[Loki]
        P[Prometheus]
        G[Grafana]
    end
    A1 -->|pull /actuator/prometheus| P
    A2 -->|OTLP traces| C
    A3 -->|OTLP logs| C
    C -->|traces| T
    C -->|logs| L
    P --> G
    T --> G
    L --> G
```
Recommended properties:

```properties
management.opentelemetry.tracing.export.otlp.endpoint=${MANAGEMENT_OTLP_TRACING_ENDPOINT:http://localhost:4318/v1/traces}
management.tracing.export.otlp.enabled=true
management.opentelemetry.logging.export.otlp.endpoint=${MANAGEMENT_OTLP_LOGGING_ENDPOINT:http://localhost:4318/v1/logs}
management.logging.export.otlp.enabled=true
# Disable OTLP metrics export to avoid duplicate ingestion alongside the Prometheus scrape
management.otlp.metrics.export.enabled=false
```
MIT (or any license you prefer).