I'm new to Kubernetes and currently working on a project involving multiple pods within a namespace, including an ingress, Grafana, InfluxDB, Telegraf, and a UDP collector. I've set up a UDP Service for the collector and access everything through an ingress exposed as a LoadBalancer.

This works fine with low traffic, but I'm trying to get the cluster to handle larger UDP volumes, around 15,000 messages per minute. At that rate the ingress controller restarts because it exceeds its default 'worker_connections' limit. I've tried scaling the collector pods, but with too many active pods (I went up to 10) message delivery drops significantly, whereas a single pod receives most of the messages.

I'm looking for advice on how to scale effectively and what changes I should make for a stable solution.
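For context, the collector's Service currently looks roughly like this (names, namespace, and port are placeholders rather than my exact manifest):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: udp-collector        # placeholder name
  namespace: metrics         # placeholder namespace
spec:
  type: ClusterIP            # traffic currently reaches it via the ingress controller
  selector:
    app: udp-collector       # matches the collector pods' labels
  ports:
    - name: udp-in
      protocol: UDP
      port: 9000             # placeholder port the collector listens on
      targetPort: 9000
```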
1 Answer
Ingress is designed primarily for layer 7 (HTTP) traffic, so it's not a good fit for UDP, which operates at layer 4. Instead of pushing UDP traffic through the ingress, I recommend exposing the UDP collector directly with a Service of type LoadBalancer. That takes the ingress controller, and its worker_connections limit, out of the UDP path entirely and gives you a much better chance of receiving all messages during spikes.
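As a rough sketch of what that could look like, assuming the collector pods carry the label app: udp-collector and listen on UDP port 9000 (adjust names, namespace, and ports to your setup):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: udp-collector-lb          # hypothetical name for the new Service
  namespace: metrics              # use your namespace
spec:
  type: LoadBalancer              # provisions an external L4 load balancer
  externalTrafficPolicy: Local    # optional: preserves client source IP and avoids an extra node hop
  selector:
    app: udp-collector            # must match your collector pods' labels
  ports:
    - name: udp-in
      protocol: UDP
      port: 9000                  # external port
      targetPort: 9000            # container port the collector listens on
```

One caveat: whether the provisioned load balancer actually supports UDP depends on your cloud provider, so it's worth confirming that before relying on it.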
I see your point, but I also access Grafana through the ingress (which is HTTP). What would be the best way to set up this configuration without losing that access?