Kubernetes · Intermediate · Par time: 6:00

The Phantom Rollout

The deployment rollout is frozen. One pod won't terminate. Duplicate notifications are going out.

The Scenario

A deployment update was triggered 15 minutes ago and immediately stalled. kubectl rollout status reports that the rollout is waiting. One old pod has been stuck in Terminating for 14 minutes. The new pods are Running and Ready, but the rollout can't complete because maxUnavailable is set to 0. The root cause: a preStop hook that drains in-flight jobs by calling the job-queue service, and that service is currently returning 503. The hook never exits, the pod never terminates, and the rollout never advances.
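A minimal sketch of a deployment spec that produces this failure mode. The names, image, and drain endpoint are assumptions for illustration, not the exercise's actual manifest:

```yaml
# Hypothetical manifest fragment illustrating the scenario.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: job-worker
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # never remove a pod before its replacement is Ready
      maxSurge: 1
  template:
    spec:
      # Long grace period so in-flight jobs can finish. Combined with a
      # blocking hook, this lets a pod sit in Terminating for up to an hour.
      terminationGracePeriodSeconds: 3600
      containers:
        - name: worker
          image: example/job-worker:latest   # hypothetical image
          lifecycle:
            preStop:
              exec:
                # Loops until the drain call succeeds. If job-queue is
                # returning 503, this never exits, so the pod never leaves
                # Terminating and the rollout never advances.
                command: ["/bin/sh", "-c",
                  "until curl -sf http://job-queue/drain; do sleep 2; done"]
```

The kubelet only force-kills the container after terminationGracePeriodSeconds elapses, so a generous grace period plus an unbounded hook is exactly the combination that freezes a rollout.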

What You'll Learn

1. How preStop hooks interact with rolling deployment progress
2. Why maxUnavailable: 0 makes a stuck Terminating pod a complete deployment blocker
3. Using kubectl describe pod to read terminationGracePeriodSeconds and hook status
4. Designing resilient preStop hooks that degrade gracefully
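The inspection steps above might look like this in practice (deployment and pod names are hypothetical):

```shell
# Confirm the rollout is stalled
kubectl rollout status deployment/job-worker

# Find the pod stuck in Terminating
kubectl get pods -l app=job-worker

# Inspect the grace period and lifecycle hook; the Events section at the
# bottom will often show a FailedPreStopHook warning when the hook errors
# or runs past the grace period
kubectl describe pod job-worker-abc123

# Read the configured grace period directly
kubectl get pod job-worker-abc123 \
  -o jsonpath='{.spec.terminationGracePeriodSeconds}'
```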

Tools You'll Use

kubectl · Pod events · Rollout status · Deployment spec

Real-World Context

Stuck preStop hooks are a common cause of deployment freezes in microservice architectures. Any hook that calls a downstream dependency is one outage away from blocking your entire release pipeline.
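One common mitigation, sketched here as an assumption rather than the exercise's official solution, is to bound the hook so a dead dependency degrades the drain instead of blocking termination:

```yaml
# Hypothetical resilient variant of the hook. Assumes a `timeout`
# binary (e.g. GNU coreutils or BusyBox) is available in the image.
lifecycle:
  preStop:
    exec:
      # Attempt to drain for at most 60s; if job-queue is unreachable,
      # give up and let normal SIGTERM handling proceed. The hook now
      # always finishes in bounded time, so termination is never
      # blocked indefinitely by a downstream outage.
      command: ["/bin/sh", "-c",
        "timeout 60 sh -c 'until curl -sf http://job-queue/drain; do sleep 2; done' || true"]
```

The trade-off is explicit: during an outage of the drain service, in-flight jobs may be lost after 60 seconds, but releases keep moving.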

Ready to debug this?

Free account required - sign up with GitHub or Google in 10 seconds

Play The Phantom Rollout