
[perf] [raycluster] Add cache selector to limit Pod caching to KubeRay-managed Pods only #4625

@kingcr7

Description:
Currently, the KubeRay operator's informer cache watches and caches all Pod resources in the cluster (when watching all namespaces), even though KubeRay only needs to manage Pods it creates itself (labeled app.kubernetes.io/created-by=kuberay-operator). This leads to unnecessary memory consumption, especially in large-scale clusters with thousands of Pods.

Proposed Solution:
Add a label selector for corev1.Pod in the cacheSelectors() function to limit the informer cache to only Pods managed by KubeRay:

// Packages referenced below:
//   batchv1   "k8s.io/api/batch/v1"
//   corev1    "k8s.io/api/core/v1"
//   "k8s.io/apimachinery/pkg/labels"
//   "k8s.io/apimachinery/pkg/selection"
//   "sigs.k8s.io/controller-runtime/pkg/cache"
//   "sigs.k8s.io/controller-runtime/pkg/client"
func cacheSelectors() (map[client.Object]cache.ByObject, error) {
	label, err := labels.NewRequirement(utils.KubernetesCreatedByLabelKey, selection.Equals, []string{utils.ComponentName})
	if err != nil {
		return nil, err
	}
	selector := labels.NewSelector().Add(*label)

	return map[client.Object]cache.ByObject{
		&batchv1.Job{}: {Label: selector},
		&corev1.Pod{}:  {Label: selector}, // ← Add this line
	}, nil
}

Expected Benefits:

  1. Reduced memory footprint: The operator's informer cache will only store Pods created by KubeRay, significantly reducing memory usage in large clusters
  2. Fewer watch events: the label selector is applied server-side, so the operator receives watch events only for matching Pods, reducing load on the Kubernetes API server
  3. Improved scalability: Better performance when running in clusters with many namespaces and Pods

Related Resources:

  1. Similar optimization already implemented for batchv1.Job resources
  2. KubeRay labels all Pods it creates with app.kubernetes.io/created-by=kuberay-operator (defined in controllers/ray/common/pod.go)
