feat: add install psa config#1168

Draft
ihcsim wants to merge 1 commit into harvester:master from ihcsim:install-psa

Conversation

Contributor

@ihcsim commented Oct 23, 2025

Problem:

Currently, Harvester doesn't provide any visibility into the security context of workloads admitted into non-system namespaces. This leaves a security gap where unauthorized privileged workloads may run in these namespaces.

Solution:

Provide an installation configuration to enable pod security admission and specify the default enforcement standard. With this change, the K8s API server is updated to read the Harvester cluster-wide PSA configuration at /etc/rancher/rke2/config.yaml.d/99-harvester-psa.yaml. The configurable security enforcement level and all exempted system namespaces are defined in this manifest.
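For context, a cluster-wide PSA configuration of this kind follows the standard Kubernetes AdmissionConfiguration format. The sketch below is illustrative only: the `privileged` enforce default and `baseline` audit/warn defaults match the behavior shown in the test plan, but the exact exemption list shipped by this PR may differ (the namespaces shown are placeholders):

```yaml
# Illustrative sketch of /etc/rancher/rke2/config.yaml.d/99-harvester-psa.yaml.
# Defaults inferred from the test plan output; exemptions are placeholders.
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
  - name: PodSecurity
    configuration:
      apiVersion: pod-security.admission.config.k8s.io/v1
      kind: PodSecurityConfiguration
      defaults:
        enforce: "privileged"        # matches enforce-policy seen in audit logs
        enforce-version: "latest"
        audit: "baseline"            # matches audit-violations annotation
        audit-version: "latest"
        warn: "baseline"             # matches the client-side Warning
        warn-version: "latest"
      exemptions:
        usernames: []
        runtimeClasses: []
        namespaces: ["kube-system", "harvester-system"]  # placeholder list
```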

The K8s audit policy is also updated to log and report admission violation incidents.
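PSA attaches its `pod-security.kubernetes.io/audit-violations` annotation to audit events, so the audit policy must record pod-related requests at `Metadata` level or higher for the violations to appear in the log. A minimal illustrative rule (the actual policy change in this PR may be broader):

```yaml
# Illustrative audit policy fragment; the PR's actual policy may differ.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  - level: Metadata
    resources:
      - group: ""
        resources: ["pods"]
```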

The following tasks will be done in separate PRs:

  • Introduce GUI fields to the installer to override the current hard-coded default configuration
  • Introduce these changes to the upgrade path

Related Issue(s):

harvester/harvester#8196

Test plan:

Test Case 1 - Violation Warnings Are Recorded In Audit Logs

Install Harvester using an installer containing this enhancement.

Once Harvester is ready,

  • Expect the K8s API server to be reading the PSA configuration from /etc/rancher/rke2/config.yaml.d/99-harvester-psa.yaml:
k -n kube-system get po kube-apiserver-isim-dev3 -ojsonpath='{.spec.containers[0].args[0]}'
--admission-control-config-file=/etc/rancher/rke2/config.yaml.d/99-harvester-psa.yaml
  • On the management node, expect the PSA configuration to be available at the above specified host path:
sudo less /etc/rancher/rke2/config.yaml.d/99-harvester-psa.yaml
  • Create a demo namespace:
k create ns demo
  • Create a deployment whose pod spec uses hostNetwork and a hostPath volume:
cat <<EOF | kubectl -n demo apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-host
  labels:
    app: nginx-host
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-host
  template:
    metadata:
      labels:
        app: nginx-host
    spec:
      hostNetwork: true
      containers:
        - name: nginx
          image: nginx:latest
          volumeMounts:
            - mountPath: /tmp
              name: host-vol
      volumes:
        - name: host-vol
          hostPath:
            path: /tmp/
            type: Directory
EOF
Warning: would violate PodSecurity "baseline:latest": host namespaces (hostNetwork=true), hostPath volumes (volume "host-vol")
deployment.apps/nginx-host created
  • On the management node, expect the violation to be recorded in the audit logs:
sudo cat /var/lib/rancher/rke2/server/logs/audit.log | grep -i violation | jq .annotations
{
  "authorization.k8s.io/decision": "allow",
  "authorization.k8s.io/reason": "RBAC: allowed by ClusterRoleBinding \"system:controller:replicaset-controller\" of ClusterRole \"system:controller:replicaset-controller\" to ServiceAccount \"replicaset-controller/kube-system\"",
  "mutation.webhook.admission.k8s.io/round_0_index_2": "{\"configuration\":\"harvester-mutator\",\"webhook\":\"mutator.harvesterhci.io\",\"mutated\":false}",
  "pod-security.kubernetes.io/audit-violations": "would violate PodSecurity \"baseline:latest\": host namespaces (hostNetwork=true), hostPath volumes (volume \"host-vol\")",
  "pod-security.kubernetes.io/enforce-policy": "privileged:latest"
}

Note that with the current privileged default configuration, the pod is still permitted to run:

$ k -n demo get po
NAME                              READY   STATUS    RESTARTS   AGE
nginx-host-new-597d5cc766-67ntk   1/1     Running   0          17s

Test Case 2 - More Restrictive PSA Configuration In User Namespace

Create a more restrictive namespace that enforces the baseline security level:

k create ns demo-baseline
k label ns demo-baseline pod-security.kubernetes.io/enforce=baseline
  • Try to create the same nginx-host deployment from test case 1.

  • Expect the pod to be prohibited from running even though its deployment and replicaset objects are created:

$ k -n demo get po
NAME                              READY   STATUS    RESTARTS   AGE
nginx-host-new-597d5cc766-67ntk   1/1     Running   0          17s
$ k -n demo-baseline get deploy,rs,po
NAME                         READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx-host   0/1     0            0           33s

NAME                                    DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx-host-6f8c55bfc4   1         0         0       33s
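As an aside, the imperative labeling in this test case can also be expressed declaratively, which is handy for reproducing the setup from a manifest (equivalent to the two kubectl commands above, nothing beyond them):

```yaml
# Declarative equivalent of `k create ns demo-baseline` plus the enforce label.
apiVersion: v1
kind: Namespace
metadata:
  name: demo-baseline
  labels:
    pod-security.kubernetes.io/enforce: baseline
```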

Additional documentation or context

Signed-off-by: Ivan Sim <ivan.sim@suse.com>