This guide explains how to configure S3-compatible storage (AWS S3, MinIO, DigitalOcean Spaces, etc.) for persistent volumes in your Kubernetes deployment of the Liberu Control Panel.
- Overview
- Why Use S3 Storage?
- Supported S3 Services
- Prerequisites
- Installation Options
- Configuration Examples
- Using S3 with Helm
- Troubleshooting
- Best Practices
The Liberu Control Panel supports S3-compatible object storage for persistent volumes in Kubernetes deployments. This allows for better scalability, durability, and cross-node data access compared to traditional local or network-attached storage.
Benefits:
- Scalability: Automatically scales with your data without capacity planning
- Durability: Object storage typically offers 99.999999999% (11 nines) durability
- Availability: Data accessible from any Kubernetes node in the cluster
- Cost-Effective: Pay only for what you use with most cloud providers
- Multi-Region: Supports geographic distribution for disaster recovery
- Backup Integration: Easy integration with backup and archival systems
- Performance: Optimized for distributed applications
Use Cases:
- User-uploaded files and media
- Application logs and backups
- Static assets and public files
- Mail storage (when using mail services)
- DNS zone files (when using DNS cluster)
- Database persistent volumes (MariaDB, PostgreSQL)
- Database backups
The control panel supports any S3-compatible storage service:
- AWS S3 - Amazon Web Services object storage
- DigitalOcean Spaces - S3-compatible object storage
- Linode Object Storage - S3-compatible storage
- Wasabi - Hot cloud storage
- Backblaze B2 - Cloud storage with S3 API
- Cloudflare R2 - Zero-egress object storage
- MinIO - High-performance, Kubernetes-native object storage
- Ceph - Distributed storage with S3 gateway
- SeaweedFS - Distributed object storage
Before configuring S3 storage, ensure you have:
- S3 Bucket: Create a bucket in your S3 service
- Access Credentials: Obtain access key ID and secret access key
- Endpoint URL: Get the endpoint URL for your S3 service
- Region: Know the region where your bucket is located
- Kubernetes Cluster: A running Kubernetes cluster (installed via install-k8s.sh)
- S3 CSI Driver (for database storage): To use S3 with databases such as MariaDB, you'll need:
  - A CSI driver that supports block storage over S3 (e.g., MinIO DirectPV, s3fs-fuse)
  - Or a storage solution that provides S3-compatible block volumes
  - Note: Standard S3 object storage works well for application files, but databases may require block storage emulation
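Before running the installer, a quick shell check can confirm these values are in place. This is a minimal sketch: `check_s3_prereqs` is a helper name introduced here, not part of the control panel scripts, and the `S3_*` variable names match those used in the manual-configuration section later in this guide.

```shell
# check_s3_prereqs is a helper introduced in this guide (not shipped with
# the control panel). It verifies the values the installer will ask for.
check_s3_prereqs() {
  for var in S3_ENDPOINT S3_ACCESS_KEY S3_SECRET_KEY S3_BUCKET S3_REGION; do
    eval "val=\${$var:-}"            # read the variable named by $var
    if [ -z "$val" ]; then
      echo "missing: $var" >&2
      return 1
    fi
  done
  case "$S3_ENDPOINT" in
    https://*) ;;                     # TLS endpoint: OK
    *) echo "endpoint should use HTTPS" >&2; return 1 ;;
  esac
  echo "prerequisites look OK"
}
```

Export the `S3_*` variables, then run `check_s3_prereqs` before invoking the installer.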
The easiest way to configure S3 storage is during the initial installation, using the install-control-panel.sh script:

```bash
# Run the installation script
./install-control-panel.sh

# When prompted, choose to configure S3 storage
# The script will ask for:
# - S3 endpoint URL
# - Access key
# - Secret key
# - Bucket name
# - Region
```

The script will automatically:
- Configure the Helm chart with S3 credentials
- Set up environment variables
- Create Kubernetes secrets
- Configure storage classes
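For reference, the Secret the script creates looks roughly like the following. This is an illustrative sketch: the actual resource name and key names are defined by the Helm chart, so confirm them with `kubectl get secrets -n control-panel`. The keys mirror the `AWS_*` variables used in the Laravel .env later in this guide.

```yaml
# Illustrative only -- the real name and keys come from the Helm chart
apiVersion: v1
kind: Secret
metadata:
  name: control-panel-secrets
  namespace: control-panel
type: Opaque
stringData:
  AWS_ACCESS_KEY_ID: your-access-key
  AWS_SECRET_ACCESS_KEY: your-secret-key
  AWS_DEFAULT_REGION: us-east-1
  AWS_BUCKET: control-panel-storage
  AWS_ENDPOINT: https://s3.amazonaws.com
```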
For manual configuration or when updating an existing installation:

```bash
export S3_ENDPOINT="https://s3.amazonaws.com"
export S3_ACCESS_KEY="your-access-key"
export S3_SECRET_KEY="your-secret-key"
export S3_BUCKET="control-panel-storage"
export S3_REGION="us-east-1"

helm upgrade --install control-panel ./helm/control-panel \
  --namespace control-panel \
  --set s3.enabled=true \
  --set s3.endpoint="$S3_ENDPOINT" \
  --set s3.accessKey="$S3_ACCESS_KEY" \
  --set s3.secretKey="$S3_SECRET_KEY" \
  --set s3.bucket="$S3_BUCKET" \
  --set s3.region="$S3_REGION" \
  --set persistence.storageClass="s3-storage"

# Check that the secrets were created
kubectl get secrets -n control-panel

# Verify pods are using the S3 configuration
kubectl describe pod -n control-panel -l app.kubernetes.io/name=control-panel
```

AWS S3:

```yaml
s3:
  enabled: true
  endpoint: "https://s3.amazonaws.com"
  accessKey: "AKIAIOSFODNN7EXAMPLE"
  secretKey: "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
  bucket: "my-control-panel-bucket"
  region: "us-east-1"
  usePathStyle: false
```

MinIO:

```yaml
s3:
  enabled: true
  endpoint: "https://minio.example.com"
  accessKey: "minioadmin"
  secretKey: "minioadmin"
  bucket: "control-panel"
  region: "us-east-1"
  usePathStyle: true # MinIO requires path-style URLs
```

DigitalOcean Spaces:

```yaml
s3:
  enabled: true
  endpoint: "https://nyc3.digitaloceanspaces.com"
  accessKey: "DO00ABCDEFGHIJKLMNO"
  secretKey: "ABC123xyz789example"
  bucket: "my-space-name"
  region: "nyc3"
  usePathStyle: false
```

Backblaze B2:

```yaml
s3:
  enabled: true
  endpoint: "https://s3.us-west-002.backblazeb2.com"
  accessKey: "000abcd1234567890000000001"
  secretKey: "K000xyz987654321abcdefghijklmnopqr"
  bucket: "my-bucket-name"
  region: "us-west-002"
  usePathStyle: false
```

Cloudflare R2:

```yaml
s3:
  enabled: true
  endpoint: "https://<account-id>.r2.cloudflarestorage.com"
  accessKey: "your-r2-access-key"
  secretKey: "your-r2-secret-key"
  bucket: "control-panel-storage"
  region: "auto"
  usePathStyle: false
```

Mail services:

```bash
helm install mail-services ./helm/mail-services \
  --namespace control-panel \
  --set s3.enabled=true \
  --set s3.endpoint="$S3_ENDPOINT" \
  --set s3.accessKey="$S3_ACCESS_KEY" \
  --set s3.secretKey="$S3_SECRET_KEY" \
  --set s3.bucket="mail-storage" \
  --set s3.region="$S3_REGION"
```

DNS cluster:

```bash
helm install dns-cluster ./helm/dns-cluster \
  --namespace control-panel \
  --set s3.enabled=true \
  --set s3.endpoint="$S3_ENDPOINT" \
  --set s3.accessKey="$S3_ACCESS_KEY" \
  --set s3.secretKey="$S3_SECRET_KEY" \
  --set s3.bucket="dns-storage" \
  --set s3.region="$S3_REGION"
```

When using the automated installation script (install-control-panel.sh), MariaDB is automatically configured to use S3 storage if enabled. For manual installation:
```bash
helm install mariadb bitnami/mariadb \
  --namespace control-panel \
  --set auth.rootPassword="secure-password" \
  --set auth.database=controlpanel \
  --set auth.username=controlpanel \
  --set auth.password="secure-password" \
  --set primary.persistence.enabled=true \
  --set primary.persistence.size=20Gi \
  --set primary.persistence.storageClass="s3-storage" \
  --set architecture=replication \
  --set secondary.replicaCount=2 \
  --set secondary.persistence.storageClass="s3-storage" \
  --set metrics.enabled=true
```

Note: For MariaDB with S3 storage, ensure your S3-compatible storage supports block storage mode, or use a CSI driver that provides block device emulation over S3 (such as MinIO DirectPV or S3FS-FUSE with appropriate configuration).
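The `s3-storage` storage class referenced above must exist in the cluster before PVCs can bind to it. As a sketch of what that might look like with the community yandex-cloud/k8s-csi-s3 driver — the provisioner name and parameters below are specific to that driver and are assumptions, not something shipped with the control panel:

```yaml
# Sketch only: a StorageClass for the community csi-s3 driver.
# Provisioner and parameters vary with whichever S3 CSI driver you install.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: s3-storage
provisioner: ru.yandex.s3.csi     # driver-specific; check your CSI driver's docs
parameters:
  mounter: geesefs                # FUSE-based mounter; driver-specific option
  bucket: control-panel-storage
reclaimPolicy: Retain
volumeBindingMode: Immediate
```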
```bash
# Update values.yaml or use --set flags
helm upgrade control-panel ./helm/control-panel \
  --namespace control-panel \
  --reuse-values \
  --set s3.enabled=true \
  --set s3.endpoint="https://s3.amazonaws.com" \
  --set s3.accessKey="NEW_ACCESS_KEY" \
  --set s3.secretKey="NEW_SECRET_KEY"
```

Symptom: Pods fail to start or show S3 connection errors

Solution:

```bash
# Verify the endpoint is reachable from the cluster
kubectl run -it --rm debug --image=curlimages/curl --restart=Never -- curl -v $S3_ENDPOINT

# Check network policies
kubectl get networkpolicies -n control-panel
```

Symptom: 403 Forbidden or Access Denied errors
Solution:
- Verify access key and secret key are correct
- Check bucket permissions/policies
- Ensure the IAM user/role has required permissions:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::your-bucket-name",
        "arn:aws:s3:::your-bucket-name/*"
      ]
    }
  ]
}
```
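If you keep the policy in a file, a small check can catch JSON mistakes before you apply it. `validate_policy` is a helper name introduced here (it requires python3; any JSON validator works):

```shell
# validate_policy is a helper introduced in this guide. It checks that a
# policy file is valid JSON and mentions the four actions the panel needs.
validate_policy() {
  python3 -m json.tool "$1" > /dev/null || { echo "invalid JSON" >&2; return 1; }
  for action in s3:GetObject s3:PutObject s3:DeleteObject s3:ListBucket; do
    grep -q "$action" "$1" || { echo "missing action: $action" >&2; return 1; }
  done
  echo "policy OK"
}
```

Run it as `validate_policy policy.json` before attaching the policy to your IAM user or role.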
Symptom: Bucket not found or DNS resolution errors

Solution:

```yaml
# For MinIO and some S3-compatible services, enable path-style URLs
s3:
  usePathStyle: true
```

Symptom: Certificate verification failed

Solution:
- Ensure the endpoint URL uses HTTPS
- For self-signed certificates, you may need to configure trust
- For MinIO:

```bash
--set s3.endpoint="https://minio.example.com"
```
Debugging commands:

```bash
# Check pod logs
kubectl logs -n control-panel deployment/control-panel -c php-fpm

# View environment variables in the pod
kubectl exec -n control-panel deployment/control-panel -c php-fpm -- env | grep AWS

# Test S3 access from the pod
kubectl exec -it -n control-panel deployment/control-panel -c php-fpm -- php artisan tinker
# In tinker:
# Storage::disk('s3')->put('test.txt', 'Hello World');
# Storage::disk('s3')->get('test.txt');

# Describe secrets
kubectl describe secret control-panel-secrets -n control-panel
```

- Use IAM Roles (AWS): Instead of long-lived access keys, use IAM Roles for Service Accounts (IRSA)
- Rotate Credentials: Regularly rotate access keys and update secrets
- Least Privilege: Grant only necessary S3 permissions
- Encrypt at Rest: Enable server-side encryption on S3 bucket
- Use TLS/SSL: Always use HTTPS endpoints
- Secrets Management: Never commit credentials to version control
- Choose Nearby Region: Select an S3 region close to your Kubernetes cluster
- Use CDN: Configure CloudFront or similar CDN for public assets
- Implement Caching: Use Redis or similar for frequently accessed data
- Multipart Upload: Configure for large files (handled by Laravel automatically)
- Lifecycle Policies: Automatically transition or expire old files
- Storage Classes: Use appropriate storage class (Standard, IA, Glacier)
- Lifecycle Rules: Move old data to cheaper storage tiers
- Monitor Usage: Set up billing alerts and monitor storage metrics
- Delete Unused Data: Implement cleanup policies for temporary files
- Compression: Enable compression for text-based files
- Enable Versioning: Protect against accidental deletions
- Cross-Region Replication: For disaster recovery
- Monitoring: Set up CloudWatch or monitoring for your S3 service
- Backup Strategy: Regular backups even with S3 durability
- Test Restores: Regularly test data restoration procedures
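The lifecycle and storage-class recommendations above can be expressed as a bucket lifecycle configuration. The example below uses the AWS `put-bucket-lifecycle-configuration` format; the prefixes, day counts, and tiers are placeholders to adapt, and other S3-compatible services may support only a subset of these features:

```json
{
  "Rules": [
    {
      "ID": "archive-old-backups",
      "Status": "Enabled",
      "Filter": { "Prefix": "backups/" },
      "Transitions": [
        { "Days": 30, "StorageClass": "STANDARD_IA" },
        { "Days": 90, "StorageClass": "GLACIER" }
      ]
    },
    {
      "ID": "expire-temp-files",
      "Status": "Enabled",
      "Filter": { "Prefix": "tmp/" },
      "Expiration": { "Days": 7 }
    }
  ]
}
```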
Update your Laravel configuration in .env:

```bash
# Set S3 as the default filesystem
FILESYSTEM_DISK=s3

# S3 credentials (automatically set by the Helm chart)
AWS_ACCESS_KEY_ID=your-access-key
AWS_SECRET_ACCESS_KEY=your-secret-key
AWS_DEFAULT_REGION=us-east-1
AWS_BUCKET=your-bucket-name
AWS_ENDPOINT=https://s3.amazonaws.com
AWS_USE_PATH_STYLE_ENDPOINT=false

# Optional: Custom URL for public files
AWS_URL=https://cdn.example.com
```

If you're migrating from local storage to S3:

```bash
# Create a backup of the current storage
kubectl cp control-panel/control-panel-xxx:/var/www/html/storage ./storage-backup

# Enable S3 on the existing release
helm upgrade control-panel ./helm/control-panel \
  --namespace control-panel \
  --reuse-values \
  --set s3.enabled=true \
  --set s3.endpoint="$S3_ENDPOINT" \
  --set s3.accessKey="$S3_ACCESS_KEY" \
  --set s3.secretKey="$S3_SECRET_KEY" \
  --set s3.bucket="$S3_BUCKET" \
  --set s3.region="$S3_REGION"

# Upload existing files to S3
kubectl exec -n control-panel deployment/control-panel -c php-fpm -- \
  php artisan storage:migrate-to-s3

# Verify files are accessible
# Test application functionality
# Remove local storage after verification
```

Monitoring:

```bash
# Enable S3 metrics in CloudWatch
aws s3api put-bucket-metrics-configuration \
  --bucket your-bucket-name \
  --id EntireBucket \
  --metrics-configuration Status=Enabled

# Monitor pod resource usage
kubectl top pod -n control-panel

# View detailed metrics
kubectl get --raw /apis/metrics.k8s.io/v1beta1/namespaces/control-panel/pods
```

For additional help:
- Documentation: https://liberu.co.uk
- GitHub Issues: https://github.com/liberu-control-panel/control-panel-laravel/issues
- Community: GitHub Discussions