I'm having trouble with a hub-and-spoke setup in Azure. A VM in VNet 1 can't ping a server that a VM in VNet 2 can reach, and I've checked the peering settings on both spokes to make sure everything is correct. Here's where it gets tricky: our hub VNet is connected via ExpressRoute to my parent company's circuit in a different tenant (which I have no visibility into). From there, traffic passes through a firewall to DataCenter B, which has a site-to-site VPN to another firewall at DataCenter A, where the server resides.
We had our network specialist adjust the BGP prefix advertisements on the firewalls connecting the two data centers, but I still can't reach the server from the VM in VNet 1. Comparing tracert output from both VMs, the trace from the non-working VM never reaches our switch at DataCenter B. It's not just ICMP, either: all traffic is affected, as confirmed with psping from Sysinternals. Can anyone offer insight into what I might be missing? I've opened three cases with Microsoft support and still don't have a solution.
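Since VNet 2 works and VNet 1 doesn't, one thing worth comparing is the effective route table of each spoke VM's NIC. These Azure CLI commands sketch that check; `MyRG`, `spoke1-vm-nic`, `spoke1-vm`, and the IP addresses are placeholders, not the real environment.

```shell
# Dump the effective routes on the non-working VM's NIC. This shows whether
# the on-premises prefixes learned over ExpressRoute actually propagate into
# this spoke (look for the server's prefix with next hop VirtualNetworkGateway).
az network nic show-effective-route-table \
  --resource-group MyRG --name spoke1-vm-nic --output table

# Ask Network Watcher what next hop Azure would use from this VM to the
# server's IP (placeholder addresses shown).
az network watcher show-next-hop \
  --resource-group MyRG --vm spoke1-vm \
  --source-ip 10.1.0.4 --dest-ip 192.168.10.5
```

If the server's prefix shows up in VNet 2's effective routes but not VNet 1's, the problem is route propagation into that spoke (e.g. the "use remote gateways" setting on the peering) rather than anything at the data centers.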
1 Answer
Have you confirmed that ICMP is allowed through your firewalls for both the source and destination subnets/IPs? If the routers can ping each other and the switches respond on the tunnel addresses, the problem may lie with the server itself or its DNS settings. If ICMP isn't reliably permitted end to end, stand up a real service to generate traffic, then use NetFlow and packet captures along the path to pinpoint exactly where the flow breaks down.
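Since all protocols are affected, a quick way to test reachability without depending on ICMP is a plain TCP handshake probe, similar in spirit to psping's TCP mode. A minimal Python sketch (the host and port are placeholders for the server's IP and a port it actually listens on):

```python
import socket

def tcp_probe(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP handshake to host:port completes within timeout."""
    try:
        # create_connection performs the full three-way handshake.
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers refused connections, timeouts, and unreachable networks.
        return False

# Placeholder target; replace with the server's IP and a listening port.
print(tcp_probe("192.168.10.5", 443))
```

Running this from both the working and non-working VMs against the same target gives you a clean apples-to-apples comparison, independent of any ICMP filtering.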

Wow, you really dive deep into the packet details! I heard that fixing the BGP advertising of the new subnets was essential too. Could the issue be on the parent tenant side? Are you well-versed in Azure networking?