I'm curious about why upgrading AWS EKS necessitates a restart of all pods. Is it due to worker node changes or something else? It would be great to get some clarity on this process and any best practices to minimize downtime.
4 Answers
Make sure you clarify what kind of upgrade you're talking about. Upgrading the control plane shouldn't affect your running pods, but upgrading a node group will: the pods on those nodes are evicted and rescheduled as the old nodes are drained and replaced.
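For context, these are roughly the two separate operations being discussed. A sketch using eksctl, where `my-cluster`, `my-nodegroup`, and the version number are placeholders for your own values:

```bash
# Upgrade the control plane only; running pods are not restarted by this step
eksctl upgrade cluster --name my-cluster --version 1.30 --approve

# Upgrade a managed node group; the old nodes are drained and replaced,
# so every pod running on them gets evicted and rescheduled
eksctl upgrade nodegroup --cluster my-cluster --name my-nodegroup --kubernetes-version 1.30
```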
When you upgrade the EKS data plane (the node groups), you're essentially replacing the worker nodes, which means shutting those hosts down. The pods running on them have to be recreated elsewhere. As long as you have proper redundancy and replication in place, the transition should be smooth and mostly unnoticed by users. If it isn't, you should re-evaluate your replica counts and how your pods are distributed across nodes.
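To keep the node drains from taking down too many replicas at once, a PodDisruptionBudget is the usual tool. A minimal sketch, assuming your Deployment's pods carry the label `app: my-api` (a placeholder):

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-api-pdb
spec:
  minAvailable: 2          # never drain below 2 ready replicas
  selector:
    matchLabels:
      app: my-api          # must match your Deployment's pod labels
```

Node drains respect this budget, so evictions are throttled during the upgrade rather than all happening at once.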
It's all about getting a new host. Technically, on a self-managed or bare-metal node you could upgrade the kubelet in place without touching running containers, but that's not how EKS node groups work, and mutating nodes in place goes against the usual immutable-infrastructure best practice.
Honestly, it sounds like you might need to brush up on your Kubernetes fundamentals. The behavior you're seeing is by design: Kubernetes treats pods as disposable and reschedules them whenever a node goes away, whether from an upgrade or a failure. The more you internalize that model, the more this will make sense, especially for production environments!
Absolutely! It's surprising how many organizations don't fully grasp how Kubernetes operates. A pod getting evicted or a node being replaced shouldn't noticeably affect your application if everything is configured correctly, with enough replicas spread across nodes.
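A minimal sketch of what "configured correctly" tends to look like, with the names and image as placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-api
  template:
    metadata:
      labels:
        app: my-api
    spec:
      # Spread replicas across distinct nodes so replacing any single node
      # during an upgrade never takes out more than one copy at a time.
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: kubernetes.io/hostname
          whenUnsatisfiable: ScheduleAnyway
          labelSelector:
            matchLabels:
              app: my-api
      containers:
        - name: my-api
          image: nginx:1.27    # placeholder; use your application image
          ports:
            - containerPort: 80
```

Pair that with a PodDisruptionBudget like the one in the earlier answer and a rolling node group upgrade should generally go through without user-visible downtime.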