I'm experiencing an issue in my Kubernetes cluster where pods on a worker node get stuck in the Terminating state when that node goes NotReady. Instead of Kubernetes automatically terminating these pods and rescheduling them onto healthy nodes, I have to manually force-delete them, or sometimes even recreate the node, to restore normal operation. This is particularly frustrating because I thought Kubernetes was designed to handle node failures automatically by evicting and rescheduling pods. Has anyone else dealt with this?
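For reference, the manual workaround I keep having to run is the standard force delete (pod name and namespace are placeholders):

    kubectl delete pod <pod-name> -n <namespace> --grace-period=0 --force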
5 Answers
Yeah, definitely look into the finalizers. Whenever I've faced similar problems, that was usually the culprit.
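To expand on this, here is one way to inspect the finalizers and, as a last resort, strip them; the pod name and namespace are placeholders, and clearing finalizers should only be done once you're sure no cleanup is actually pending:

    # Show any finalizers still attached to the stuck pod
    kubectl get pod <pod-name> -n <namespace> -o jsonpath='{.metadata.finalizers}'

    # Last resort: remove the finalizers so the object can be garbage-collected
    kubectl patch pod <pod-name> -n <namespace> --type=merge -p '{"metadata":{"finalizers":null}}'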
What’s causing the node to go NotReady? Is it a transient state where you can still reach the node, or is it completely unresponsive? By default, Kubernetes only reschedules workloads off an unresponsive node after about 5 minutes: pods carry a 300-second toleration for the node.kubernetes.io/not-ready and node.kubernetes.io/unreachable taints, and eviction only starts once that window expires.
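If you're not sure which case applies, checking the node's conditions and the pod's tolerations will tell you whether that 5-minute window is in play (resource names below are placeholders):

    # Why is the node NotReady? Look at the Conditions block in the output
    kubectl describe node <node-name>

    # Pods normally carry 300s tolerations for the not-ready/unreachable taints
    kubectl get pod <pod-name> -n <namespace> -o jsonpath='{.spec.tolerations}'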
A pod can get stuck in Terminating when its node goes offline, because the kubelet on that node can never confirm to the apiserver that the containers actually stopped. Also check the pod’s controller: if it’s managed by a Deployment, make sure the strategy isn’t set to “Recreate”, which blocks new pods until the old ones are fully gone. For StatefulSets this behavior is by design: Kubernetes waits for termination confirmation before launching a replacement, so that at most one pod with a given identity ever runs.
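A quick way to check which controller and strategy you're dealing with (resource names are placeholders):

    # Deployment rollout strategy: RollingUpdate vs Recreate
    kubectl get deployment <deployment-name> -n <namespace> -o jsonpath='{.spec.strategy.type}'

    # Who owns the pod: a ReplicaSet (i.e. a Deployment) or a StatefulSet?
    kubectl get pod <pod-name> -n <namespace> -o jsonpath='{.metadata.ownerReferences[0].kind}'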
You might want to check the finalizers on the stuck pods. A storage provisioner’s finalizer (protecting a still-attached volume, for example) can hold them in the Terminating state indefinitely.
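If storage is the suspect, checking for lingering volume attachments alongside the finalizers can confirm it (this assumes a CSI driver that tracks attachments via VolumeAttachment objects; names are placeholders):

    # Attachments still referencing the dead node can keep pods from finishing
    kubectl get volumeattachments

    # Cross-check which PVCs the stuck pod was using
    kubectl get pod <pod-name> -n <namespace> -o jsonpath='{.spec.volumes[*].persistentVolumeClaim.claimName}'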
I faced this issue too. In my case finalizers weren't the problem; it turned out one of my deployments was exhausting resources on the node, which is what drove it down. That's when the pods got stuck in Terminating and the node started reporting a NotReady status.
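If you suspect the same thing, checking the node for resource pressure is cheap (kubectl top requires metrics-server to be installed; the node name is a placeholder):

    # Memory/disk/PID pressure conditions reported by the kubelet
    kubectl describe node <node-name> | grep -i pressure

    # Live usage, if metrics-server is running in the cluster
    kubectl top node <node-name>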
