I'm setting up a private Azure Kubernetes Service (AKS) cluster, and while everything seems to be configured correctly, I'm stuck with a problem. The AKS cluster is registering properly in the private DNS zone, and the default node pool is created in the right subnet. I can even ping the API hostname and see the Private Endpoint's IP.
However, I'm facing a major issue: the worker nodes can't reach the cluster through the Private Endpoint. They can communicate with each other, but the Private Endpoint does not respond to pings or HTTPS requests, even though it sits in the same subnet as the worker nodes.
I've tried creating the AKS cluster both with Terraform and with the Azure CLI using the script provided. We've consulted Microsoft support, but no solution has emerged so far. What should I check next, or what might I have overlooked?
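For reference, the connectivity checks I've been running from a VM in the node subnet look roughly like this (the hostname below is a placeholder for my cluster's private API server FQDN):

```bash
# DNS resolution returns the Private Endpoint IP (hostname is a placeholder)
nslookup myaks-dns-abc123.privatelink.westeurope.azmk8s.io

# The endpoint itself never answers
ping myaks-dns-abc123.privatelink.westeurope.azmk8s.io
curl -vk https://myaks-dns-abc123.privatelink.westeurope.azmk8s.io/healthz   # times out
```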
2 Answers
Have you checked the Network Security Group (NSG) settings? Make sure the outbound rules the nodes require are allowed both in the NSG attached to the node subnet and on any firewall the traffic passes through. Traffic restrictions there can block the nodes from reaching the API endpoint.
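If it helps, this is a rough sketch of how I'd inspect those rules with the Azure CLI; the resource group, NSG, and NIC names below are placeholders for your own:

```bash
# Rules defined on the NSG attached to the node subnet (names are placeholders)
az network nsg rule list \
  --resource-group my-aks-rg \
  --nsg-name aks-node-subnet-nsg \
  --include-default \
  -o table

# Effective rules actually applied to one of the node NICs
# (the NICs live in the MC_* node resource group created by AKS)
az network nic list-effective-nsg \
  --resource-group MC_my-aks-rg_my-aks_westeurope \
  --name aks-nodepool1-12345678-nic-0
```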
Is your Private DNS zone properly linked to the VNet where the nodes are located? Even if DNS resolution works in a centralized setup, the AKS private DNS zone needs a virtual network link to the VNet that contains the AKS nodes; otherwise the nodes can't resolve and reach the Kubernetes API.
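A quick way to verify (and, if needed, create) that link with the Azure CLI; the resource group, zone name, and VNet ID below are placeholders, and by default AKS keeps the zone in the MC_* node resource group with a name like <guid>.privatelink.<region>.azmk8s.io:

```bash
# List the virtual network links on the cluster's private DNS zone (names are placeholders)
az network private-dns link vnet list \
  --resource-group MC_my-aks-rg_my-aks_westeurope \
  --zone-name 12345678-aaaa-bbbb-cccc-1234567890ab.privatelink.westeurope.azmk8s.io \
  -o table

# If the node VNet is not linked, add a link (auto-registration is not needed for this zone)
az network private-dns link vnet create \
  --resource-group MC_my-aks-rg_my-aks_westeurope \
  --zone-name 12345678-aaaa-bbbb-cccc-1234567890ab.privatelink.westeurope.azmk8s.io \
  --name aks-node-vnet-link \
  --virtual-network /subscriptions/<subscription-id>/resourceGroups/my-network-rg/providers/Microsoft.Network/virtualNetworks/my-aks-vnet \
  --registration-enabled false
```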