I'm working with an Azure infrastructure where I have a subnet (192.168.1.0/24) and two VMs (A and B) with IP addresses 192.168.1.10 and 192.168.1.11, respectively. I've set up an internal load balancer with these VMs in the backend pool and a frontend IP of 192.168.1.100. I recently added another VM (C) with the IP 192.168.1.20 in the same subnet. I want VM C to communicate with A and B through the load balancer, so I configured the routing table of C to direct traffic to 192.168.1.100. The issue is that traffic initiated from C doesn't seem to hit A or B, as I'm not seeing any packets during traffic capture. Is this a limitation of the Azure load balancer, or could I have missed a configuration step?
2 Answers
It sounds like you're close, but the likely problem is the routing step itself. Azure Load Balancer is a pass-through service, not a hop you route traffic through: you shouldn't add a route on VM C with 192.168.1.100 as the next hop. Instead, VM C should simply open its connections to 192.168.1.100 as the destination address, and the load balancer will forward matching traffic to A or B. For that to work, you also need a load-balancing rule covering the protocol and port you're testing, a health probe that A and B are actually passing, and NSG rules that allow the traffic. Also note that Azure Load Balancer preserves the client's source IP, so on A or B you should capture for packets from 192.168.1.20, not from 192.168.1.100.
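A quick way to check this end to end (a sketch with hypothetical ports and interface names; substitute whatever your load-balancing rule actually uses):

```shell
# On VM C: connect to the frontend IP directly -- no custom route needed.
# Port 80 here is hypothetical; use the port from your LB rule.
curl -v http://192.168.1.100:80/

# On VM A or B: the load balancer preserves the client's source IP,
# so filter on VM C's address rather than on the frontend IP.
# With Floating IP disabled (the default), the destination IP you see
# here is the backend VM's own IP, not 192.168.1.100.
sudo tcpdump -ni eth0 host 192.168.1.20
```

If the `curl` hangs and `tcpdump` shows nothing, check the health probe status and NSGs before suspecting routing.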
Just to clarify, an internal load balancer can serve clients in the same subnet, and for that no User Defined Route (UDR) is required: the client just addresses the frontend IP, and the platform handles delivery. A UDR pointing at the ILB frontend is only needed in the specific case where the ILB fronts network virtual appliances and you want to force traffic through it transparently; that scenario uses next-hop type "Virtual appliance" with the frontend IP, and on a Standard load balancer it typically requires an HA Ports rule. If you've added a route on VM C that sends subnet traffic to 192.168.1.100 as a next hop without that setup, the traffic will be dropped, which would explain the empty capture.
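For completeness, this is roughly what the NVA-style UDR setup looks like with the Azure CLI (a sketch with hypothetical resource group, VNet, subnet, and prefix names; again, this is only appropriate when the ILB fronts appliances with an HA Ports rule, not for ordinary clients):

```shell
# Create a route table and a route whose next hop is the ILB frontend.
az network route-table create -g myRG -n rt-via-ilb
az network route-table route create -g myRG --route-table-name rt-via-ilb \
  -n to-backends --address-prefix 10.1.0.0/24 \
  --next-hop-type VirtualAppliance --next-hop-ip-address 192.168.1.100

# Associate the route table with the subnet whose traffic should be steered.
az network vnet subnet update -g myRG --vnet-name myVnet -n mySubnet \
  --route-table rt-via-ilb
```

In your case, the simpler fix is to remove the custom route on VM C and have it connect to 192.168.1.100 directly.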