I'm diving into Kubernetes and have set up a master node on my VPS and a worker node on an AWS EC2 instance. Calico is reporting the worker node's private IP instead of its public IP, and this is preventing my master node from SSHing into the worker node. Has anyone else run into this? What can I adjust in Calico or the network setup to fix it?
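For reference, this is roughly how I'm checking which addresses get reported (the node name is just a placeholder for my worker's actual name):

# show the addresses Kubernetes has recorded for each node
kubectl get nodes -o wide

# show the BGP address Calico has detected for the worker
calicoctl get node <worker-node-name> -o yaml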
2 Answers
This is actually expected behavior. Worker nodes aren't designed to be managed over SSH from the master; if something goes wrong, you just spin up a new instance instead. If you want to access services running in the cluster, use port forwarding with kubectl. What exactly are you trying to achieve? calicoctl talks to the Kubernetes API (or the Calico datastore) rather than to nodes directly, so you might need to rethink your approach.
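For example, to reach a Service or pod in the cluster without any SSH access (the names here are placeholders for whatever you actually deployed):

# forward local port 8080 to port 80 of a Service in the cluster
kubectl port-forward svc/my-service 8080:80

# or forward directly to a single pod
kubectl port-forward pod/my-pod 8080:80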
I've faced a similar issue! It usually happens when Calico's IP autodetection picks the wrong interface (for example the Docker bridge or loopback) instead of the interface carrying the public IP. You can fix this by patching the node's BGP address:

calicoctl patch node <worker-node-name> --patch='{"spec":{"bgp":{"ipv4Address":"<public-ip>/24"}}}'

Just replace <worker-node-name> with your worker's node name and <public-ip>/24 with the correct public IP and subnet for your setup.
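If patching nodes one by one gets tedious, another commonly used approach (this assumes a manifest-based install where calico-node runs as a DaemonSet in kube-system, and eth0 is just an example interface) is to tell Calico how to pick the node IP via its IP_AUTODETECTION_METHOD environment variable:

# pin detection to a specific interface
kubectl -n kube-system set env daemonset/calico-node IP_AUTODETECTION_METHOD=interface=eth0

# or pick whichever interface can reach a given address
kubectl -n kube-system set env daemonset/calico-node IP_AUTODETECTION_METHOD=can-reach=8.8.8.8

The calico-node pods should roll out again and re-detect their addresses after the env change.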