Hey everyone! I've been experimenting with some node-level TCP optimizations in my Kubernetes clusters and believe I've found a few sysctl settings that could help improve throughput and reduce latency across various environments. Here are the four settings I'm considering:
1. net.ipv4.tcp_notsent_lowat=131072
2. net.ipv4.tcp_slow_start_after_idle=0
3. net.ipv4.tcp_rmem="4096 262144 33554432"
4. net.ipv4.tcp_wmem="4096 16384 33554432"
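For anyone who wants to try these, one way to apply them (assuming you manage node configuration directly; the filename below is just illustrative) is a drop-in under `/etc/sysctl.d/`, loaded at boot or with `sysctl --system`:

```
# /etc/sysctl.d/90-k8s-tcp-tuning.conf  (filename illustrative)
net.ipv4.tcp_notsent_lowat = 131072
net.ipv4.tcp_slow_start_after_idle = 0
net.ipv4.tcp_rmem = 4096 262144 33554432
net.ipv4.tcp_wmem = 4096 16384 33554432
```

On managed Kubernetes you'd typically ship something like this via a privileged DaemonSet or your node bootstrap mechanism, since these are node-level settings rather than pod securityContext sysctls.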
These suggestions are inspired by Cloudflare's detailed analysis on optimizing TCP for better performance:
https://blog.cloudflare.com/optimizing-tcp-for-high-throughput-and-low-latency/
I've seen positive results, but I'm really interested in your experiences too! If you try these settings in your setups (whether it's a homelab, development, or production), please share your results:
- Where are you deploying? (EKS/GKE/On-prem/OpenShift/etc.)
- What benefits did you notice? (latency, throughput, general stability?)
- Did you encounter any issues or drawbacks?
If a lot of people find these settings useful, perhaps we can work towards making them defaults in some Kubernetes setups. Thanks!
2 Answers
Did you run any benchmarks before suggesting these? It seems odd to recommend something without firsthand results or any specific data to back it up.
Benchmarks can vary a lot based on your specific environment. The values might need tweaking depending on your connection type and load. But yeah, it'd be great to see some concrete data!
If my clusters have high latency and low bandwidth, should I adjust these values? Would simply inverting them (smaller buffers, etc.) help, or do they need different tuning entirely?
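One rough rule of thumb (my own addition, not from the original post) for sizing the tcp_rmem/tcp_wmem maximums is the bandwidth-delay product (BDP): the buffer ceiling should be at least bandwidth x RTT, since that's the most data that can be in flight. A quick sketch with illustrative numbers:

```shell
# Bandwidth-delay product = (bandwidth in bits/s / 8) * RTT in seconds.
# Illustrative example: a 10 Gbit/s path with 40 ms RTT.
bdp_bytes=$(( 10000000000 / 8 * 40 / 1000 ))
echo "$bdp_bytes"   # 50000000 bytes, i.e. roughly 48 MiB
```

For a high-latency, low-bandwidth path the BDP is usually small (e.g. 10 Mbit/s at 200 ms is only about 250 KB), so as far as I know the large 32 MiB ceiling mostly just caps worst-case memory use; Linux autotuning only grows the buffers as needed.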

It's concerning how harsh some comments can be. This forum could really use a bit more positivity. Everyone's just trying to share ideas, right?