I'm curious about why AWS has set the pod limit for EC2 instances to 110. Is there a specific reason behind choosing this number, or is it just a random figure?
5 Answers
Don’t worry, if you're using larger instance types, you can always override that default limit (for example, via the kubelet's --max-pods setting). Big nodes often come with more capacity, even if the default limits feel frustratingly low!
The 110-pod limit actually comes from the upstream Kubernetes scalability recommendation of no more than 110 pods per node. When you're using EKS, AWS adjusts the default automatically based on the EC2 instance type's networking capacity. So it's not a random choice; there's some logic to it!
From what I've seen, that limit doesn't always make sense in practice either. For example, an instance type like c7a.large only supports up to 29 pod IPs. If you set max pods to 110, the 30th pod will fail with IP-assignment errors. It complicates autoscaling too!
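For context, the default EKS pod limit per instance type is derived from its ENI and per-ENI IP limits. A minimal sketch of that formula (without prefix delegation), using the published figures for c7a.large:

```python
# Sketch of the max-pods formula used for the AWS VPC CNI (without prefix
# delegation): each ENI's primary IP is reserved for the ENI itself, and 2
# is added for pods that use host networking (e.g. kube-proxy, aws-node).
def max_pods(enis: int, ips_per_eni: int) -> int:
    return enis * (ips_per_eni - 1) + 2

# c7a.large supports 3 ENIs with 10 IPv4 addresses each
print(max_pods(3, 10))  # 29 -- far below a maxPods setting of 110
```

That's where the 29 comes from: the instance physically can't hand out more pod IPs than that, no matter what maxPods says.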
Honestly, the reasoning behind it might not be as deep as you'd think. The number supposedly traces back to default settings in older Docker versions, rounded up to the nearest power of 2 and then doubled, which eventually landed on that 110 limit. It isn't necessarily ideal.
Exactly! Nodes typically get a /24 ("Class C") pod range by default, and during a rolling update you can have old and new pods running at the same time. That could technically mean 220 pods consuming IPs at once, which still fits in a /24, but good luck managing that!
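The arithmetic behind that works out neatly (the pod CIDR below is just a made-up example):

```python
import ipaddress

# A /24 gives 256 addresses; excluding network and broadcast leaves 254.
# Doubling the 110-pod limit (old + new pods during a rolling update)
# still fits underneath that.
pod_range = ipaddress.ip_network("10.0.0.0/24")  # hypothetical node pod CIDR
usable = pod_range.num_addresses - 2
print(usable)     # 254
print(2 * 110)    # 220, which fits within 254
```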

Keep in mind that while 110 is the recommendation, the number of IP addresses actually available on the node is the factor that really matters. If you set maxPods higher than the IPs available, you'll hit those annoying out-of-IP issues.
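A hypothetical sanity check along those lines, comparing a node's maxPods setting against the IPs its ENIs can actually assign (the ENI figures are illustrative, matching a c7a.large-sized node):

```python
# Warn if maxPods exceeds the IPs the VPC CNI can assign on this node.
def check_max_pods(max_pods_setting: int, enis: int, ips_per_eni: int) -> bool:
    # Same formula as the AWS max-pods calculation (no prefix delegation)
    assignable = enis * (ips_per_eni - 1) + 2
    if max_pods_setting > assignable:
        print(f"maxPods={max_pods_setting} exceeds {assignable} assignable IPs")
        return False
    return True

print(check_max_pods(110, 3, 10))  # False: 110 > 29
print(check_max_pods(29, 3, 10))   # True: setting matches capacity
```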