Kubernetes has become the go-to platform for running not just long-lived services, but also batch workloads like data processing, ETL pipelines, machine learning training, CI/CD pipelines, and scientific simulations. These workloads typically rely on the Job API, which ensures that a specified number of Pods run to completion.
Until now, Kubernetes has offered limited control over what happens when a Job's Pod fails or is evicted. In particular, you could not control when the replacement Pod gets created: by default it is created as soon as the old Pod starts terminating, so for a short time two Pods may exist for the same task.
With Kubernetes v1.34, Pod Replacement Policy for Jobs (KEP-3939) graduates to general availability. It allows users to explicitly control when replacement Pods are created, improving the reliability, performance, and efficiency of batch workloads.
Why Pod Replacement Matters
When a Pod belonging to a Job fails (e.g., due to node drain, eviction, OOM, or hardware issue), Kubernetes creates a replacement Pod. However:
- The replacement may land anywhere in the cluster.
- If the Pod had local data (e.g., cached dataset, scratch disk, node-local SSD), the replacement Pod may not find it.
- If the Pod had NUMA or GPU locality, the replacement might end up with suboptimal hardware.
- In multi-zone clusters, scheduling a replacement Pod across zones could increase latency and cross-zone costs.
For workloads that depend on node affinity or cached state, this can be a real problem.
Current behavior:
By default, the Job controller creates replacement Pods as soon as the old Pods start terminating, which can lead to multiple Pods running for the same task at the same time, especially in Indexed Jobs. This causes problems for workloads that require exactly one Pod per task, such as certain machine learning frameworks.
Starting replacement Pods before the old Pods have fully terminated also consumes extra cluster resources, since the old and new Pods briefly run side by side.
Feature: Pod Replacement Policy
With this feature, Kubernetes Jobs have two Pod replacement policies to choose from:
- TerminatingOrFailed (default): creates a replacement Pod as soon as the old Pod starts terminating.
- Failed: waits until the old Pod fully terminates and reaches the Failed phase before creating a new Pod.
Using podReplacementPolicy: Failed ensures that at most one Pod runs for a given task at a time.
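You can confirm that your cluster exposes the field and inspect which policy a Job uses with kubectl (the Job name worker-job refers to the demo Job defined below):

```bash
# show the field's documentation straight from the API server
kubectl explain job.spec.podReplacementPolicy

# print the policy configured on an existing Job
kubectl get job worker-job -o jsonpath='{.spec.podReplacementPolicy}{"\n"}'
```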
:::info
Quick demo: we will demonstrate the Pod Replacement Policy for Jobs feature in both scenarios below.
:::
Scenario 1: Default behavior with TerminatingOrFailed (demo steps)
- Set up a local Kubernetes cluster (with minikube):

```bash
# install minikube (macOS, via Homebrew)
brew install minikube

# start a local cluster pinned to Kubernetes v1.34.0
minikube start --kubernetes-version=v1.34.0

# verify the cluster is running; the VERSION column should show v1.34.0
kubectl get nodes
```

- Define a Job manifest with podReplacementPolicy: TerminatingOrFailed, apply it, and monitor the Pods:
```yaml
# worker-job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: worker-job
spec:
  completions: 2
  parallelism: 1
  podReplacementPolicy: TerminatingOrFailed
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: busybox
        command: ["sh", "-c", "echo Running; sleep 30"]
```
```bash
kubectl apply -f worker-job.yaml

# monitor pods are running
kubectl get pods -l job-name=worker-job
```

- Delete a Job Pod manually and observe the behavior:

```bash
# delete the Pods associated with the worker-job Job
kubectl delete pod -l job-name=worker-job
```

Behavior: with TerminatingOrFailed, the Job controller creates the replacement Pod as soon as the old Pod enters the Terminating state, so for a short time the old and new Pods exist side by side.
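While the old Pod is still terminating, you can also read the Job's status counters; the terminating count is populated for Jobs when this feature is in effect:

```bash
# Pods currently terminating for this Job (non-zero during the overlap)
kubectl get job worker-job -o jsonpath='{.status.terminating}{"\n"}'

# Pods currently counted as active
kubectl get job worker-job -o jsonpath='{.status.active}{"\n"}'
```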

Scenario 2: Delayed replacement with the Failed policy (demo steps)
- Define a Job manifest with podReplacementPolicy: Failed, apply it, and monitor the Pods:
```yaml
# worker-job-failed.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: worker-job-failed
spec:
  completions: 2
  parallelism: 1
  podReplacementPolicy: Failed
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: busybox
        command: ["sh", "-c", "echo Running; sleep 1000"]
```
```bash
# monitor pods are running
kubectl get pods -l job-name=worker-job-failed
```

- Delete a Job Pod manually and observe the behavior:

```bash
# delete the Pods associated with the worker-job-failed Job
kubectl delete pod -l job-name=worker-job-failed
```
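Optionally, watch the Pods in a second terminal; with the Failed policy the replacement should not appear until the old Pod is completely gone:

```bash
# stream Pod updates to observe the delayed replacement
kubectl get pods -l job-name=worker-job-failed -w
```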

Behavior: the replacement Pod (worker-job-failed-q98qx in this run) is created only after the old Pod (worker-job-failed-sg42q) has fully terminated; there is no overlap between the old and new Pods.
Benefits
- Improved Reliability: Pod replacement behavior is now predictable. Workloads that require exactly one Pod per task no longer risk having an old and a new Pod running for the same work at the same time.
- Reduced Operational Burden: Previously, operators often had to monitor Jobs manually or write custom controllers/scripts to work around overlapping replacements. With this built-in capability, that operational overhead is significantly reduced.
- Efficient Resource Utilization: Replacement Pods that start while the old Pods are still terminating consume extra CPU and memory. The Failed policy avoids this temporary double usage.
- Better User Experience: For developers, running Jobs becomes less error-prone. Teams can focus on business logic instead of handling Pod replacement edge cases themselves.
Best Practices
- Tune restart policies: Use Never or OnFailure appropriately depending on workload characteristics.
- Monitor metrics: Use Prometheus/Grafana to track pod replacement events.
- Set resource requests/limits: Prevent unnecessary failures by properly sizing pods.
- Validate thresholds: Configure backoffLimit (and optionally activeDeadlineSeconds) alongside the replacement policy so failed Pods cannot turn into endless restart loops (see the commands after this list).
- Test in staging: Before deploying to production, simulate pod failures in a staging cluster to observe replacement behavior.
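As a lightweight complement to dashboards, the Job's own status and events can be inspected directly with kubectl; the commands below assume the worker-job Job from the demo above:

```bash
# failure count; if this keeps climbing, tighten backoffLimit on the Job spec
kubectl get job worker-job -o jsonpath='{.status.failed}{"\n"}'

# events list each Pod the Job controller created or removed, which helps
# verify that replacements only happen after full termination
kubectl describe job worker-job
```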
Use Cases
- Machine Learning Workloads: Training models can take hours or days, and Pod failures are inevitable. Automatic replacement ensures training Jobs continue without manual restarts, making ML pipelines more resilient (see the indexed Job sketch after this list).
- Data Pipelines: ETL jobs or distributed data processing tasks often involve multiple pods running in parallel. Replacing failed pods ensures the pipeline completes successfully without operator intervention.
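As an illustration, here is a minimal sketch of an Indexed Job that combines completionMode: Indexed with podReplacementPolicy: Failed so each index has at most one Pod at any time; the Job name indexed-train-job, the image, and the command are placeholders rather than a real training workload:

```bash
# hypothetical Indexed Job: each index must never have two Pods at once
kubectl apply -f - <<'EOF'
apiVersion: batch/v1
kind: Job
metadata:
  name: indexed-train-job
spec:
  completionMode: Indexed
  completions: 4
  parallelism: 4
  backoffLimit: 8
  podReplacementPolicy: Failed   # replacements wait for full termination
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: trainer
        image: busybox
        # each Pod picks its shard from the JOB_COMPLETION_INDEX env var
        command: ["sh", "-c", "echo training shard $JOB_COMPLETION_INDEX; sleep 60"]
EOF
```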
Takeaways
Pod Replacement Policy gives you control over when replacement Pods are created to avoid overlap between old and new Pods, optimizes cluster resources by preventing temporary extra Pods, and offers the flexibility to choose the right policy for your Job workloads based on their requirements and resource constraints.
Reference(s)
- https://kubernetes.io/blog/2025/08/27/kubernetes-v1-34-release/
