Is a 20-30 ms Slowdown Normal After Migrating to Kubernetes?

Asked By SunnyDay42

Hey everyone! I recently migrated my projects to Kubernetes, consolidating around 20 APIs built with API Platform (PHP). While everything seems functional, I've noticed that each API's response time has slowed down by about 20-30 ms. Before this transition, I was using a load balancer in front of two VPS servers running Docker containers. Now, on Kubernetes, the node sizes are identical to my previous VPS setup, and I've kept the container and API configurations the same. I've attempted a few optimizations, like not setting CPU limits and enabling keep-alive for both the load balancer and my NGINX Ingress Controller. I even tested using `hostNetwork: true`, but to no avail. I'm curious if this slowdown is expected due to Kubernetes overhead or if there's something wrong with my configuration. Any advice or suggestions? Thanks!
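For reference, the keep-alive tuning I applied to the NGINX Ingress Controller looks roughly like this (the ConfigMap name and namespace depend on how it was installed; the values here are just what I'm currently testing, not recommendations):

```yaml
# ConfigMap consumed by the NGINX Ingress Controller.
# Name/namespace below assume a standard ingress-nginx install; adjust to yours.
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  keep-alive: "75"                      # seconds to keep client connections open
  keep-alive-requests: "1000"           # max requests served per client connection
  upstream-keepalive-connections: "320" # idle keepalive connections cached per upstream
  upstream-keepalive-timeout: "60"      # seconds an idle upstream connection is kept
```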

4 Answers

Answered By DataGeek11

Kubernetes networking behaves differently, and that 20-30 ms delay could be normal considering all the routing hops. You mentioned not having observability tools set up; that could really help you pinpoint the issue. If you’re seeing issues in a high-throughput scenario, it might be worth investigating further, but for most cases, this isn't too alarming.

FellowTechie -

Exactly! Observability would give you more insight into where the bottleneck might be. It helps in fine-tuning your setup.

Answered By TechSavant88

It could be a number of factors. How's your Service configured in Kubernetes? By default, the load balancer uses a round-robin strategy, which means it might route requests to pods on different nodes. This could introduce some latency. Also, be mindful of your Container Network Interface (CNI) settings and consider setting requests for CPU even if limits aren't necessary, just to ensure resource allocation is smooth.
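Setting CPU requests while leaving CPU limits off might look like the Deployment fragment below (the container name, image, and numbers are illustrative placeholders; size the requests from your actual usage):

```yaml
# Pod template fragment: requests tell the scheduler to reserve capacity,
# while omitting the CPU limit avoids CFS throttling under bursts.
spec:
  containers:
    - name: api
      image: registry.example.com/api-platform:latest  # placeholder image
      resources:
        requests:
          cpu: "500m"       # guaranteed scheduling share
          memory: "512Mi"
        limits:
          memory: "512Mi"   # a memory limit is still sensible; no cpu limit set
```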

CuriousDev -

Good point! I’d also recommend checking the externalTrafficPolicy settings for your ingress controller. That could help manage how traffic is routed too.

Answered By KubeMaster42

Don't stress too much about the slowdown; 20-30 ms isn't severe unless you're under a very tight latency budget. Just make sure your configuration is solid, and keep an eye on your downstream service latency as well. Sometimes that can introduce significant delay too!

NewbieDev -

That was my thought too! Though I guess it’s good to be aware of potential issues, it’s not always a huge concern.

Answered By CloudWizard92

Going from direct networking to Kubernetes networking can definitely add overhead. The routing between the ingress, node ports, and the pods could be causing the delay. You might want to look into setting the externalTrafficPolicy to Local to see if that helps with latency.
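A minimal sketch of that change on the ingress controller's Service (the Service name and namespace assume a standard ingress-nginx install; adjust selector and ports to match your cluster):

```yaml
# Service fragment: externalTrafficPolicy: Local keeps traffic on the node
# that received it, skipping the extra hop and SNAT that Cluster mode adds.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller  # your controller's Service name may differ
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local  # default is Cluster
  selector:
    app.kubernetes.io/name: ingress-nginx  # illustrative; match your pod labels
  ports:
    - name: http
      port: 80
      targetPort: 80
```

Note that with Local, nodes without a controller pod fail the load balancer's health checks and stop receiving traffic, which is usually exactly what you want here.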

AvidCoder -

I had a similar issue, and changing traffic policies helped a lot. With Local, the load balancer's health checks fail on nodes that aren't running your service, so traffic stops being routed to them.
