I've been working with Kubernetes and Jenkins for our CI pipeline, and I've noticed something concerning. We have a service that runs on four pods, and all four consistently sit at their maximum memory capacity, often shown in red on the Kubernetes resource dashboard. I'm trying to figure out why this is happening and how to address it. Can anyone offer guidance on how to identify the root cause of this memory usage?
3 Answers
First off, find out exactly what is running in those pods and what is eating all that RAM. You can exec into one of the pods and check which processes are holding the memory. Also, if you have the option, try running that service locally to see how it behaves outside the cluster.
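For example, a rough sequence of commands might look like this (the pod name and namespace below are placeholders, and the in-pod checks assume ps or top is available in the container image):

    # Per-pod memory usage as reported by metrics-server
    kubectl top pod -n my-namespace

    # Open a shell in one of the pods
    kubectl exec -it my-service-pod-abc123 -n my-namespace -- /bin/sh

    # Inside the container: list processes sorted by resident memory
    ps aux --sort=-rss | head
    # or, if installed:
    top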
If your service is Java-based, it's worth looking into the JVM settings. Are any custom JAVA_OPTS configured? Flags such as -Xms and -Xmx control how much heap the JVM allocates, and if -Xmx is set close to the container's memory limit, the pod will sit near that limit even under light load. Knowing your JVM configuration would help clarify why the pods are maxing out their memory limits.
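As a sketch of how to check this, assuming the container image includes a JDK (jcmd ships with JDK images, not JRE-only ones) and that the Java process is PID 1, you could run:

    # See whether JAVA_OPTS or similar variables are set in the container
    kubectl exec -it my-service-pod-abc123 -- env | grep -i java

    # Print the flags the running JVM actually resolved (heap sizes included)
    kubectl exec -it my-service-pod-abc123 -- jcmd 1 VM.flags

    # Or ask the JVM what defaults it would pick inside this container
    kubectl exec -it my-service-pod-abc123 -- java -XX:+PrintFlagsFinal -version | grep -E 'MaxHeapSize|MaxRAMPercentage'

On recent, container-aware JDKs the default max heap is roughly 25% of the container memory limit unless overridden with -Xmx or -XX:MaxRAMPercentage, so the effective heap ceiling may not be what you expect.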
Just to give some extra context, I see that each of your pods has a memory request of 1GiB and a limit of 1.5GiB. Even if traffic is low, a JVM-based service will typically grow toward whatever heap it is allowed, so usage can sit near that limit. And since CPU utilization is low, the problem looks specific to how the service's memory is configured rather than to load.
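If you want to confirm those requests/limits and check whether the pods are being OOM-killed, something like the following would show both (the pod name is a placeholder):

    # Show the resource requests/limits set on each container in the pod
    kubectl get pod my-service-pod-abc123 -o jsonpath='{range .spec.containers[*]}{.name}{": "}{.resources}{"\n"}{end}'

    # Look for OOMKilled terminations or restarts in the pod's history
    kubectl describe pod my-service-pod-abc123 | grep -i -A 3 'last state\|oom'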
