Hey folks! I'm a senior DevOps engineer with a backend development background, and I'm curious about how to manage evicted pods in my Kubernetes cluster. I'm considering setting up a cron job for cleanup but want to hear how the community usually handles this kind of situation. We're currently using AWS EKS with Kubernetes version 1.32. Thanks in advance for your insights!
5 Answers
Kubernetes usually manages evicted pods automatically, so I'm curious why you'd need to clean them up. Are you seeing specific issues?
Unless you've tuned the controller manager's garbage-collection settings, you might be waiting a while for Kubernetes to clean them up; by default the pod garbage collector doesn't act until thousands of terminated pods have accumulated. I often do a manual clean-up before garbage collection kicks in.
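For the manual pass, something like this has worked for me. It's a rough sketch that assumes jq is installed and relies on evicted pods sitting in the Failed phase:

```bash
# List terminated pods cluster-wide; evicted pods end up in the Failed phase
kubectl get pods -A --field-selector=status.phase=Failed

# Delete only the pods whose status.reason is "Evicted", leaving other Failed pods alone
kubectl get pods -A -o json \
  | jq -r '.items[] | select(.status.reason == "Evicted") | "\(.metadata.namespace) \(.metadata.name)"' \
  | while read -r ns name; do
      kubectl delete pod -n "$ns" "$name"
    done
```

If you don't need to distinguish evicted pods from other failed ones, the single command `kubectl delete pods -A --field-selector=status.phase=Failed` does the job.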
Seeing a lot of evicted pods usually points to a deeper issue. I'd suggest examining your workloads' resource requests and limits to prevent the evictions in the first place rather than just applying a quick fix. Also check whether garbage collection can be tuned to run sooner.
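For reference, the relevant knob lives on the kube-controller-manager, and the default threshold is high, which is why evicted pods linger. A sketch of the flag; note this only applies to self-managed control planes, since EKS doesn't let you set controller-manager flags:

```bash
# kube-controller-manager flag; the default of 12500 means the pod garbage
# collector does nothing until 12,500 terminated pods have piled up.
# Lowering it makes evicted pods disappear much sooner.
kube-controller-manager --terminated-pod-gc-threshold=500
```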
Fortunately, these aren't a big issue for us. Just annoying seeing them in the pod list.
I've been using the descheduler for this issue. You can find it at github.com/kubernetes-sigs/descheduler. It's been working great for me!
Yes, I use it too! Make sure to configure it properly for your cluster.
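For anyone finding this later, installing it from the project's Helm chart looked roughly like this when I set it up; double-check the chart URL and release name against the current README:

```bash
# Add the descheduler chart repo and install it into kube-system
helm repo add descheduler https://kubernetes-sigs.github.io/descheduler/
helm repo update
helm install descheduler descheduler/descheduler --namespace kube-system
```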
Those evicted pods are usually left around on purpose so you can troubleshoot why they were evicted. Once you've captured the event logs, it's safe to clean them up. Ideally, you shouldn't be seeing evictions at all in a healthy cluster.
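If you want to preserve the evidence before deleting anything, a quick sketch along these lines can help; the output file names are just examples:

```bash
# Capture eviction events and the full state of failed pods before cleanup.
# Note: events are only retained for about an hour by default, so grab them early.
kubectl get events -A --field-selector reason=Evicted > eviction-events.txt
kubectl get pods -A --field-selector=status.phase=Failed -o yaml > evicted-pods.yaml
```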
That's a good point. I'm planning to look into capturing those logs before any cleanup.
What's happening with your nodes? Are you draining or scaling them, or are the evictions happening on their own? If evicted pods are lingering, also check their finalizers; a finalizer that never completes will keep a pod from being cleaned up.
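To check for that, something like the following works; <pod-name> and <namespace> are placeholders, and clearing finalizers should be a last resort:

```bash
# Check whether a stuck pod has finalizers on it
kubectl get pod <pod-name> -n <namespace> -o jsonpath='{.metadata.finalizers}'

# Last resort: clear the finalizers so the pod can be removed.
# Only do this once you understand why the finalizer was set.
kubectl patch pod <pod-name> -n <namespace> -p '{"metadata":{"finalizers":null}}'
```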
We aren't trying to evict pods deliberately; we're just noticing them lingering in the cluster. I was curious about the best practices for managing them.
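Since your question mentioned a cron job, here's a minimal in-cluster sketch using a Kubernetes CronJob. The ServiceAccount and its RBAC (permission to list and delete pods) are assumed to exist and aren't shown; all names are placeholders:

```bash
kubectl apply -f - <<'EOF'
apiVersion: batch/v1
kind: CronJob
metadata:
  name: evicted-pod-cleanup        # placeholder name
  namespace: kube-system
spec:
  schedule: "0 * * * *"            # hourly
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: pod-cleanup   # assumed to have RBAC to list/delete pods
          restartPolicy: OnFailure
          containers:
          - name: cleanup
            image: bitnami/kubectl:latest
            command:
            - /bin/sh
            - -c
            # Deletes all Failed pods, which includes evicted ones
            - kubectl delete pods -A --field-selector=status.phase=Failed
EOF
```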

I remember earlier versions of k8s leaving evicted pods around; in our case it was Prometheus using too much memory. I used to delete them manually, but now I set memory requests and limits so it doesn't happen in the first place.
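For completeness, here's an illustrative sketch of that kind of limit, assuming a standalone Prometheus pod; the image and the values are examples, not recommendations:

```bash
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: prometheus-example         # illustrative name only
spec:
  containers:
  - name: prometheus
    image: prom/prometheus:latest
    resources:
      requests:
        memory: "2Gi"              # what the scheduler reserves on the node
      limits:
        memory: "4Gi"              # container is OOM-killed past this instead of
                                   # building node memory pressure and triggering evictions
EOF
```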