I'm managing a Kubernetes cluster that will have roughly 1,000 pods per node and an expected total of around 10,000 pods. I'm looking for advice on how to properly size the control plane, including the number of nodes, etcd resources, and API server replicas, to maintain good responsiveness and availability. Any tips or best practices?
3 Answers
Kubernetes ships with a default maximum of 110 pods per node (the kubelet's `maxPods` setting), though that's a configurable default rather than a hard limit. 1,000 pods per node is far beyond what upstream scalability testing covers. Which Kubernetes version are you planning to run at that density? And will this be on a cloud provider or self-managed?
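For reference, the per-node cap is set in the kubelet configuration file. A minimal sketch, where the `maxPods: 1000` value simply mirrors the question rather than being a recommendation, and `podPidsLimit` is an illustrative value worth tuning at high densities:

```yaml
# KubeletConfiguration fragment (e.g. /var/lib/kubelet/config.yaml)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
maxPods: 1000        # default is 110; this density is beyond upstream testing
podPidsLimit: 4096   # illustrative; bounding PIDs matters with many pods per node
```

Note that the pod CIDR allocated to each node must also be large enough to hold that many pod IPs, which typically means adjusting the controller manager's `--node-cidr-mask-size` flag.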
It all depends on how busy your cluster gets. Are you using a monitoring stack, such as Grafana Alloy, to track API server performance? If you're running a lot of operators and generating a high volume of events, factor that into your sizing. For reference, our biggest cluster runs 13,000 pods across 70 nodes, and we're doing fine with 3 control-plane nodes (8 CPUs and 30 GB RAM each). Just make sure to give etcd its own dedicated disks, since etcd is very sensitive to fsync latency.
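The two signals above (API server responsiveness and etcd disk latency) can be watched with standard control-plane metrics. A sketch of Prometheus alerting rules, where the thresholds are illustrative assumptions rather than agreed SLOs:

```yaml
# Prometheus rule sketch; metric names are standard, thresholds are assumptions
groups:
  - name: control-plane-latency
    rules:
      - alert: APIServerSlowRequests
        expr: |
          histogram_quantile(0.99,
            sum(rate(apiserver_request_duration_seconds_bucket{verb!~"WATCH|CONNECT"}[5m])) by (le, verb))
          > 1
        for: 10m
      - alert: EtcdSlowWALFsync
        expr: |
          histogram_quantile(0.99,
            sum(rate(etcd_disk_wal_fsync_duration_seconds_bucket[5m])) by (le))
          > 0.5
        for: 10m
```

Sustained slow WAL fsyncs are usually the first sign that etcd needs faster or more isolated disks.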
It really varies with your cluster's activity level. Monitor your control-plane nodes closely so you know when they need to be scaled further. Consider testing in a smaller dev environment first to see how the control plane holds up; running a half-size setup can give you useful insights before you commit to a full deployment.
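One way to exercise a half-size control plane is to apply batches of synthetic workloads and watch how the API server and etcd respond. A minimal sketch using only the Python standard library; the namespace `load-test`, the `burst-*` names, and the replica counts are all placeholders you would tune. It prints Deployment manifests for pause pods that you could pipe to `kubectl apply -f -` against a dev cluster:

```python
def make_deployment(name: str, replicas: int, namespace: str = "load-test") -> str:
    """Render a minimal Deployment manifest running lightweight pause pods."""
    return f"""\
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {name}
  namespace: {namespace}
spec:
  replicas: {replicas}
  selector:
    matchLabels:
      app: {name}
  template:
    metadata:
      labels:
        app: {name}
    spec:
      containers:
        - name: pause
          image: registry.k8s.io/pause:3.9
          resources:
            requests:
              cpu: 1m
              memory: 8Mi
"""


if __name__ == "__main__":
    # Half-size target: ~5,000 pods, spread over 50 Deployments of 100 replicas.
    docs = [make_deployment(f"burst-{i}", 100) for i in range(50)]
    print("---\n".join(docs))
```

Ramping the replica counts in stages, rather than all at once, makes it easier to see at what pod count the control-plane latency metrics start to degrade.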