How to Achieve Even Load Distribution Across K8s Nodes?

Asked By CuriousJellyfish77 On

Hey everyone! We recently resized our Kubernetes infrastructure on Azure and I've noticed that load across our nodes isn't evenly distributed. Some nodes are struggling with high memory usage while others sit underutilized. I've been looking into affinity and anti-affinity rules, but I'm not sure what to focus on to get a better balance. I'd love to hear your thoughts on how to spread memory usage evenly across all worker nodes. We're currently seeing pods fail under high memory pressure, which has me thinking scheduling plays a role here. Any advice or suggestions would be greatly appreciated!

4 Answers

Answered By WittyPanda79 On

You really need to size your pods correctly! Ensure that every pod has both requests and limits set properly; it might be worth enforcing this via policy. The scheduler only considers requests when deciding whether a pod fits on a node, so if limits are much higher than requests, nodes can end up overcommitted and run out of memory once pods actually use what their limits allow. For predictable behavior, consider setting requests equal to limits (Guaranteed QoS) and letting the Vertical Pod Autoscaler (VPA) manage sizing instead of relying on memory bursting.
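As a rough sketch of that last suggestion: the Deployment name web-api and the memory bounds below are placeholders, and this assumes the VPA components are installed and enabled in your cluster, but a VPA that manages memory requests could look something like this:

```yaml
# Hypothetical VPA for a Deployment named "web-api" (placeholder name).
# VPA observes real usage and rewrites the pods' memory requests,
# so the scheduler sees values that match reality.
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: web-api-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-api              # assumed workload name
  updatePolicy:
    updateMode: "Auto"         # VPA evicts and recreates pods with updated requests
  resourcePolicy:
    containerPolicies:
      - containerName: "*"
        controlledResources: ["memory"]
        minAllowed:
          memory: 256Mi        # placeholder lower bound
        maxAllowed:
          memory: 2Gi          # placeholder upper bound
```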

Answered By SmartSeaOtter22 On

First off, make sure the memory requests for your pods accurately reflect their actual memory usage. The scheduler uses requests to decide where to place pods, so unrealistic values will absolutely lead to uneven load. If requests are set well below what your pods actually use (or well below their limits), the scheduler will pack more pods onto a node than it can sustain, which is exactly the situation you're describing. It's worth double-checking those configurations!
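For reference, the scheduler only looks at requests when picking a node, while limits are only enforced at runtime. A container spec along these lines illustrates the split; the pod name, image, and numbers are placeholders you'd replace based on your own observed usage:

```yaml
# Hypothetical pod; name, image, and values are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: example-worker
spec:
  containers:
    - name: app
      image: myregistry.azurecr.io/app:latest   # placeholder image
      resources:
        requests:
          memory: "1200Mi"   # set close to observed working-set usage
          cpu: "250m"
        limits:
          memory: "1500Mi"   # enforced at runtime, not considered for scheduling
          cpu: "500m"
```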

CuriousJellyfish77 -

Thanks for the tip! We usually set our limits to 1500Mi and requests to 800Mi, thinking that was appropriate. Is it correct to assume that the scheduler will act based on the 800Mi request?

Answered By ResourcefulHedgehog91 On

It's also worth noting that what the scheduler balances as "load" is the sum of memory requests, not actual usage. The AKS load balancer round-robins traffic across your pods, which helps on the request side, but getting the scheduling right is the key part here. You could also look into tools like cast.ai that specialize in optimizing Kubernetes workloads.

Answered By EagerBeaver42 On

If it helps, before adjusting anything on the pod sizes, make sure you're using pod topology spread constraints effectively. Spreading your pods more evenly across nodes with a low maxSkew should go a long way toward balancing the load without any major changes; see the sketch below.
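Something along these lines; the app: web-api labels and the image are placeholders to adapt to your workload:

```yaml
# Hypothetical Deployment snippet; labels and image are placeholders.
# maxSkew: 1 means the number of matching pods may differ by at most 1
# between any two nodes, which spreads the replicas (and their memory
# requests) evenly across the cluster.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-api
spec:
  replicas: 6
  selector:
    matchLabels:
      app: web-api
  template:
    metadata:
      labels:
        app: web-api
    spec:
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: kubernetes.io/hostname   # spread across individual nodes
          whenUnsatisfiable: ScheduleAnyway     # prefer spreading, but don't block scheduling
          labelSelector:
            matchLabels:
              app: web-api
      containers:
        - name: app
          image: myregistry.azurecr.io/app:latest   # placeholder image
```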

