How to Size the Control Plane for a Large Kubernetes Cluster

Asked By TechyTiger23 On

I'm managing a Kubernetes cluster that will have roughly 1,000 pods per node and an expected total of around 10,000 pods. I'm looking for advice on how to properly size the control plane, including the number of nodes, etcd resources, and API server replicas, to maintain good responsiveness and availability. Any tips or best practices?

3 Answers

Answered By PodGuru77 On

Kubernetes generally recommends a maximum of 110 pods per node (the kubelet's default), even though that's not a hard limit. Are you planning to use a specific Kubernetes version or distribution that supports higher pod densities? Or will this run on a cloud provider?
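For context, the 110-pod default can be raised through the kubelet configuration. A minimal sketch (the `maxPods` value here is just an illustration; pushing far past the default is only safe if node CPU, memory, and CNI IP allocation can keep up):

```yaml
# KubeletConfiguration fragment -- raises the default per-node pod cap.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
maxPods: 250   # default is 110; 1,000 per node is well beyond commonly tested limits
```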

Answered By K8SWhisperer99 On

It all depends on how busy your cluster gets. Are you using any monitoring tools, like Alloy, to track API server performance? If you're running a lot of operators and generating a high volume of events, factor that in. For reference, our biggest cluster runs 13,000 pods across 70 nodes, and we're doing fine with three control-plane nodes (8 CPUs and 30 GB RAM each). Just make sure to isolate etcd on separate disks.
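To sketch the etcd-isolation point: with kubeadm you can point etcd's data directory at a dedicated mount. The path below is a placeholder; the assumption is that it sits on its own low-latency disk (ideally local SSD/NVMe) so etcd's fsync traffic doesn't compete with container or OS I/O:

```yaml
# kubeadm ClusterConfiguration fragment (dataDir path is hypothetical).
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
etcd:
  local:
    dataDir: /var/lib/etcd-dedicated   # mount point backed by a separate disk
```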

Answered By DevOpsNinja42 On

It really varies based on your cluster's activity level. Monitor your control-plane nodes closely to see whether they need further scaling. Try testing in a smaller dev environment first to see how the control plane holds up; even a half-size setup can give you useful insights.
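One way to run that half-size test is to generate a batch of throwaway Deployments of pause containers and watch control-plane metrics while they schedule. A rough sketch; the names, namespace, and counts are placeholders to tune toward your expected pod density:

```shell
#!/bin/sh
# Generate N throwaway Deployment manifests for a half-size load test.
# 50 Deployments x 100 replicas = 5,000 pods (half the 10,000-pod target).
N=50
REPLICAS=100
OUT=loadtest-manifests.yaml
: > "$OUT"
i=1
while [ "$i" -le "$N" ]; do
  cat >> "$OUT" <<EOF
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: loadtest-$i
  namespace: loadtest
spec:
  replicas: $REPLICAS
  selector:
    matchLabels: {app: loadtest-$i}
  template:
    metadata:
      labels: {app: loadtest-$i}
    spec:
      containers:
      - name: pause
        image: registry.k8s.io/pause:3.9   # near-zero resource usage
EOF
  i=$((i + 1))
done
echo "wrote $N Deployments ($((N * REPLICAS)) pods) to $OUT"
```

Then `kubectl create namespace loadtest && kubectl apply -f loadtest-manifests.yaml` in the dev cluster, and watch API server latency and etcd disk metrics while the pods churn.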
