
All pods stuck in Pending. Three nodes exist but none will accept the workload.
The worker-queue deployment scaled to 3 replicas, and all three pods are stuck Pending. kubectl get nodes shows three nodes in Ready state. The job queue is growing fast and SLA timers are ticking. Each node has a different scheduling blocker: one carries a NoSchedule taint reserved for GPU workloads, one is blocked by a hard pod anti-affinity rule preventing co-location, and one has its allocatable CPU fully claimed by existing pods' requests. The scheduler has nowhere to place any of the three new pods.
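A first diagnostic pass might look like the following sketch (the label selector and pod name are assumptions; substitute your own):

```shell
# List the pending pods and pick one to inspect
# (the app=worker-queue label is an assumption; adjust to your deployment)
kubectl get pods -l app=worker-queue

# The Events section at the bottom explains why the scheduler rejected every node
kubectl describe pod <pending-pod-name>

# Check the three suspected blockers directly on the nodes
kubectl describe nodes | grep -A 2 "Taints"                # NoSchedule taints
kubectl describe nodes | grep -A 6 "Allocated resources"   # CPU requests vs allocatable
```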
Reading scheduler events in kubectl describe pod to identify the blocking constraint
Distinguishing between taint-based, affinity-based, and resource-based scheduling failures
How CPU requests vs CPU limits affect scheduling even when nodes have idle capacity
Strategies to fix each scheduling blocker without affecting other workloads
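As an illustration of that last point, each blocker has a targeted fix that leaves other workloads untouched. The taint key, node name, and CPU value below are assumptions for the sketch, not details from the scenario:

```shell
# 1. Taint: either add a matching toleration to the worker-queue pod spec,
#    or, if the GPU reservation is obsolete, remove the taint
#    (the trailing '-' removes it)
kubectl taint nodes node-1 gpu=true:NoSchedule-

# 2. Anti-affinity: relax the hard rule from
#    requiredDuringSchedulingIgnoredDuringExecution to
#    preferredDuringSchedulingIgnoredDuringExecution so co-location
#    becomes a preference instead of a blocker
kubectl edit deployment worker-queue

# 3. CPU exhaustion: lower the new pods' CPU requests (or free capacity
#    by rescheduling something else)
kubectl set resources deployment worker-queue --requests=cpu=250m
```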
A pod stuck Pending with no obvious error is one of the most common Kubernetes support questions. The scheduler rejects placements silently: you have to read the pod's events to find the real reason, and often there are several reasons at once.
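In a scenario like this one, the FailedScheduling event bundles every rejection into a single per-node tally, along these lines (exact wording varies by Kubernetes version, and the taint key here is illustrative):

```
Warning  FailedScheduling  0/3 nodes are available: 1 Insufficient cpu,
1 node(s) didn't match pod anti-affinity rules,
1 node(s) had untolerated taint {gpu: true}.
```

Learning to decompose that one line into its three separate causes is the core skill the scenario drills.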
Free account required - sign up with GitHub or Google in 10 seconds
Play The Pods That Won't Land