I'm part of the networking team at Azure, and we're currently analyzing customer needs around UDP traffic. While we have some overall statistics, I'm curious whether customers are actively stress-testing UDP workloads or simply seeing high UDP traffic on their VMs. If you're using UDP, which performance metrics matter most to you: throughput, latency, or packet loss percentage? And do you adjust any UDP-related settings, such as the MTU on VM interfaces or UDP buffer sizes, to improve performance, or do you mostly stick with the defaults?
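To make the buffer-size part concrete, the kind of per-socket adjustment I have in mind looks roughly like the sketch below (a minimal Linux-oriented example; the 8 MiB figure is purely illustrative, and the kernel caps the effective size at net.core.rmem_max / net.core.wmem_max):

```python
import socket

# Illustrative only: request larger UDP socket buffers and check what the
# kernel actually grants. On Linux, getsockopt reports roughly double the
# requested value because the kernel reserves extra space for bookkeeping.
DESIRED_BUF = 8 * 1024 * 1024  # hypothetical 8 MiB target, not a recommendation

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, DESIRED_BUF)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, DESIRED_BUF)

# Read back the effective sizes the kernel actually applied.
print("effective SO_RCVBUF:", sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))
print("effective SO_SNDBUF:", sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))
```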
3 Answers
If you have a straightforward server setup, TCP usually does the job just fine. But if you're aiming for a solid experience, especially with Azure Virtual Desktop (AVD), then RDP Shortpath over a managed network is critical, and Shortpath runs over UDP.
I think most users don’t run heavy UDP workloads outside of things like DNS, but this might change with wider adoption of HTTP/3, which runs over QUIC and is therefore carried on UDP. As that protocol gains traction, we're likely to see UDP make up a bigger share of traffic.
We definitely track metrics like throughput, packets per second (PPS), and packet loss, especially since we handle lots of VPN tunnels and overlays. It's also important to know how close we are to hitting any PPS or throughput limits for our instance sizes. As a side note, I really wish we had GRE support and jumbo frames so we could encapsulate traffic efficiently without having to shrink the inner MTU below 1,500 bytes!
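Conceptually, our loss/PPS checks boil down to something like the sketch below; the host, port, datagram count, and payload size are all made-up illustration values, and a real run would put the sender and receiver on separate VMs at a controlled send rate rather than blasting over loopback:

```python
import socket
import struct
import threading
import time

# Rough sketch of a PPS / packet-loss check: fire sequence-numbered datagrams
# at a receiver and count what arrives. All constants are illustrative.
HOST, PORT = "127.0.0.1", 9999   # a real test would use two separate VMs
COUNT = 100_000
PAYLOAD = b"x" * 1200            # keep datagrams well under a 1500-byte MTU

def receiver(results):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((HOST, PORT))
    sock.settimeout(2.0)         # stop once the stream goes quiet
    received, start, last = 0, None, None
    try:
        while True:
            sock.recvfrom(2048)
            now = time.monotonic()
            if start is None:
                start = now
            last = now
            received += 1
    except socket.timeout:
        pass
    results["received"] = received
    results["elapsed"] = (last - start) if received > 1 else 0.0

results = {}
rx = threading.Thread(target=receiver, args=(results,))
rx.start()
time.sleep(0.2)                  # give the receiver time to bind

tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for seq in range(COUNT):
    tx.sendto(struct.pack("!I", seq) + PAYLOAD, (HOST, PORT))
rx.join()

received, elapsed = results["received"], results["elapsed"]
loss_pct = 100.0 * (COUNT - received) / COUNT
pps = received / elapsed if elapsed else 0.0
print(f"sent={COUNT} received={received} loss={loss_pct:.2f}% pps={pps:,.0f}")
```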
Good to know that jumbo frames are currently supported in intra-VNet topologies! That's a helpful feature for the kind of work we do.
Exactly! We have some clients running custom secure protocols over UDP for their VPNs, and even streaming services on top of it, so UDP usage is definitely growing.