I've been having trouble with my Kubernetes cluster. Whenever I cordon and try to drain a node, it just doesn't evict the pods as expected. Instead, they seem to turn into zombie pods and stay on that node. I have three nodes in my setup, and they're all configured as control planes and worker nodes. Is there something I'm missing here, or is this behavior normal?
6 Answers
You could be dealing with an infrastructure component that's also getting evicted, which makes kubelet itself crash mid-drain. If you're running on lower-end hardware like a Raspberry Pi with an SD card, etcd's I/O load can cause API timeouts that leave pods stuck in a terminating state.
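If you suspect kubelet is crashing or the node is flapping during the drain, it's worth watching the node and kubelet directly. A quick way to check (assuming a systemd-based distro; the unit name may differ on your setup):

```shell
# On the node that refuses to drain: follow kubelet's recent logs
# for crashes or eviction errors (assumes kubelet runs under systemd).
journalctl -u kubelet --since "10 minutes ago" -f

# From another machine: watch node conditions to spot NotReady flapping
# while the drain is in progress.
kubectl get nodes -w
```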
Make sure to check for finalizers on your pods; they might be holding things up during the drain process.
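To see whether finalizers are what's blocking termination, you can list them for every pod still on the node. A sketch, with `<node-name>`, `<pod-name>`, and `<namespace>` as placeholders for your own values:

```shell
# List finalizers for all pods scheduled on the stuck node.
kubectl get pods --all-namespaces \
  --field-selector spec.nodeName=<node-name> \
  -o jsonpath='{range .items[*]}{.metadata.namespace}/{.metadata.name}: {.metadata.finalizers}{"\n"}{end}'

# Last resort: clear a stuck pod's finalizers so deletion can complete.
# Understand why the finalizer was set before doing this.
kubectl patch pod <pod-name> -n <namespace> --type merge -p '{"metadata":{"finalizers":null}}'
```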
It might help to check what's failing to drain. A common issue is not having a PodDisruptionBudget (PDB) for some deployments, which can lead to drains getting stuck.
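You can check the PDB situation cluster-wide before draining; a PDB whose allowed disruptions is 0 will cause the eviction API to refuse, and `kubectl drain` will retry until it times out:

```shell
# Show all PodDisruptionBudgets; the ALLOWED DISRUPTIONS column is the
# key one - a value of 0 means evictions of matching pods are refused.
kubectl get pdb --all-namespaces
```

A common trap is a single-replica deployment covered by a PDB with `minAvailable: 1`: it can never be evicted, so the drain hangs.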
Wouldn't a misconfigured PDB cause this issue too?
That doesn't sound typical at all. Usually, regular pods should terminate successfully, leaving just daemonsets and static pods behind.
If you want to force a drain despite PDBs, try using `--disable-eviction`. It uses the delete API instead of the eviction API, so it won't honor PDBs. Here’s the command:
```bash
kubectl drain <node-name> --delete-emptydir-data --disable-eviction --ignore-daemonsets
```
You can also add `--force` if needed, but be cautious with that.
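After the drain completes (forced or not), it helps to verify what's actually left on the node; with a successful drain you should only see daemonset-managed and static/mirror pods. A sketch, with `<node-name>` as a placeholder:

```shell
# List everything still scheduled on the drained node.
kubectl get pods --all-namespaces --field-selector spec.nodeName=<node-name>

# Once maintenance is done, make the node schedulable again.
kubectl uncordon <node-name>
```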
Are any of your pods configured with a very long terminationGracePeriodSeconds? Assuming everything else is set up correctly, a long grace period would delay evictions and make the drain look stuck when it's really just waiting for pods to shut down.
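You can list the grace period for every pod on the node to rule this out. A sketch, with `<node-name>` as a placeholder; the default is 30 seconds, so anything much larger stands out:

```shell
# Show terminationGracePeriodSeconds per pod on the stuck node.
kubectl get pods --all-namespaces \
  --field-selector spec.nodeName=<node-name> \
  -o custom-columns=NS:.metadata.namespace,NAME:.metadata.name,GRACE:.spec.terminationGracePeriodSeconds
```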