I set up a new node pool that contains only high-performance "fast" nodes, and this pool is currently used by a single deployment. Right now Karpenter provisions just one node to hold all of the deployment's replicas. That's cost-effective, but I'd prefer to spread the pods across multiple nodes for better resilience.
I've tried using pod anti-affinity to keep any two pods from the same ReplicaSet off the same node, but I'm not sure that's enough. I've also considered topology spread constraints, but my understanding is that if Karpenter decides to create only one node, all the pods will end up on it anyway.
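For illustration, the anti-affinity I tried looks roughly like this (the `app: my-app` label is a placeholder for my deployment's actual pod labels):

```yaml
# Fragment of the deployment's pod template spec.
# "app: my-app" is a placeholder label, not the real one.
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchLabels:
              app: my-app
          topologyKey: kubernetes.io/hostname
```

Because this is only a *preferred* rule, the scheduler is still free to co-locate all replicas on one node when that's the only node available.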
One workaround I've considered is limiting the size of the nodes in the pool and combining that with topology spread constraints: by sizing the nodes so each fits only the desired number of pods, I could force Karpenter to start multiple nodes. However, this feels hacky, and it could block scaling up when my Horizontal Pod Autoscaler (HPA) needs to kick in. Am I overlooking a better solution?
2 Answers
Have you thought about switching your anti-affinity from preferredDuringSchedulingIgnoredDuringExecution to requiredDuringSchedulingIgnoredDuringExecution? A required rule is a hard constraint: the scheduler will never place two matching pods on the same node, so Karpenter has to provision enough nodes to satisfy it, which should spread your pods out effectively.
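A minimal sketch of the required variant, assuming your pods carry a label like `app: my-app` (substitute your deployment's real labels):

```yaml
# Fragment of the deployment's pod template spec.
# A required rule: no two pods matching the selector may
# share a node (topologyKey: kubernetes.io/hostname).
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: my-app  # placeholder label
        topologyKey: kubernetes.io/hostname
```

Note the trade-off: with a hostname-scoped required rule, every replica needs its own node, so n replicas means at least n nodes. Pods stay Pending if no suitable node exists, which is exactly the signal that makes Karpenter provision more capacity.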
Definitely look into topology spread constraints. They can help, but as you mentioned, if everything fits on one node Karpenter may not spin up more: with only a single topology domain in existence, a skew limit is trivially satisfied. You may need to experiment with different configurations to balance bin-packing efficiency against pod distribution without compromising scaling.
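A sketch of a hostname-scoped spread constraint (again assuming a placeholder `app: my-app` label). Note that `minDomains`, available in newer Kubernetes versions and only honored with `whenUnsatisfiable: DoNotSchedule`, tells the scheduler to treat missing domains as having zero pods, which is one documented way to push for more than one node:

```yaml
# Fragment of the deployment's pod template spec.
topologySpreadConstraints:
  - maxSkew: 1                        # allow at most 1 pod difference between nodes
    topologyKey: kubernetes.io/hostname
    whenUnsatisfiable: DoNotSchedule  # hard constraint, not best-effort
    minDomains: 2                     # treat fewer than 2 nodes as a violation
    labelSelector:
      matchLabels:
        app: my-app                   # placeholder label
```

Without `minDomains`, a single node is a single domain and the skew is 0, so the constraint is satisfied even with all pods co-located.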
I tried that, and you're right. Unless you can trick it into using multiple nodes by only fitting a specific number of pods per node, you hit that same limitation.

That's a good point, but keep in mind that required anti-affinity only enforces one matching pod per node; it doesn't directly control the overall node count. You'll still want to manage node sizing and limits carefully!