Hey everyone! I'm a senior DevOps engineer with a background in backend development. I've been wondering how the Kubernetes community handles evicted pods in their clusters. I was thinking about setting up a Kubernetes CronJob to handle the cleanup, but I'm curious about your experiences and any best practices you'd recommend. For context, we're running Kubernetes 1.32 on AWS EKS. Thanks in advance for your insights!
4 Answers
So, why worry about evicted pods at all? There's no container running in them anymore; the pod object just sticks around as a record of why it was evicted. Kubernetes does garbage-collect them eventually (the kube-controller-manager removes terminated pods once the cluster-wide count crosses its terminated-pod GC threshold, 12500 by default). But if they're cluttering your output before that kicks in, a quick CronJob that removes them with kubectl works fine.
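Here's a minimal sketch of what that CronJob could look like. The names (`evicted-pod-cleanup`, the `pod-cleanup` ServiceAccount) and the schedule are placeholders, and the ServiceAccount would need RBAC permission to list and delete pods cluster-wide. Note that `status.phase=Failed` matches all failed pods, not only evicted ones, so adjust if you need to be more selective:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: evicted-pod-cleanup      # hypothetical name
  namespace: kube-system
spec:
  schedule: "0 * * * *"          # hourly; tune to taste
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: pod-cleanup   # needs RBAC to list/delete pods
          restartPolicy: OnFailure
          containers:
            - name: cleanup
              image: bitnami/kubectl:latest
              command:
                - /bin/sh
                - -c
                # Deletes every pod in phase Failed across all namespaces.
                - kubectl delete pods -A --field-selector=status.phase=Failed
```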
I use a tool called descheduler (kubernetes-sigs/descheduler) to manage evicted pods. Its RemoveFailedPods strategy cleans them up automatically, and with the right configuration it can help maintain a healthier cluster overall. You might want to check it out!
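For reference, a sketch of a descheduler policy enabling the RemoveFailedPods plugin (descheduler v1alpha2 policy API; the profile name and the one-hour minimum lifetime are illustrative choices, not required values):

```yaml
apiVersion: "descheduler/v1alpha2"
kind: "DeschedulerPolicy"
profiles:
  - name: default
    pluginConfig:
      - name: "RemoveFailedPods"
        args:
          reasons:
            - "Evicted"              # only target pods that failed with reason Evicted
          minPodLifetimeSeconds: 3600  # leave recent failures around for debugging
    plugins:
      deschedule:
        enabled:
          - "RemoveFailedPods"
```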
Yeah, descheduler works great! Just make sure to set up the configuration properly.
The main reason evicted pods are kept around is debugging: if they pile up, that's usually a sign something is wrong in your setup, such as node resource pressure or missing resource requests. If you're capturing logs properly, you can clear them out once you're sure that data is backed up. And keep an eye on your workloads' resource usage to prevent these evictions in the first place!
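Before clearing anything, it's worth a quick survey of why the pods were evicted. A couple of hedged kubectl one-liners (the pod name and namespace in the second command are placeholders you'd fill in):

```shell
# List failed pods across all namespaces with the recorded status reason
kubectl get pods -A --field-selector=status.phase=Failed \
  -o custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name,REASON:.status.reason

# Inspect one pod's eviction message, e.g. which resource pressure triggered it
kubectl describe pod <pod-name> -n <namespace>
```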
That’s a smart point! I’ll definitely consider logging better before I start clearing things.
Kubernetes does garbage-collect terminated pods automatically, but evicted pods can hang around for quite a while before that threshold is reached. If you're looking for a way to handle this, implementing a CronJob sounds like a solid plan! It's what I used to do when I noticed evicted pods piling up after deployments. Just keep in mind: check your memory requests and limits to prevent future evictions!
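On the prevention side: under node memory pressure, the kubelet evicts BestEffort and Burstable pods before Guaranteed ones, so setting requests equal to limits (which yields the Guaranteed QoS class) makes a workload much less likely to be evicted. A sketch of a container `resources` stanza (the 256Mi/250m figures are arbitrary examples, not recommendations):

```yaml
# Matching requests and limits gives the pod the Guaranteed QoS class,
# which is evicted last under node memory pressure.
resources:
  requests:
    memory: "256Mi"
    cpu: "250m"
  limits:
    memory: "256Mi"
    cpu: "250m"
```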
Totally agree with that! I’ve set up similar cronjobs in the past and it kept my cluster cleaner.

Exactly! Just a little cleanup task when you see fit.