I'm having a tough time removing a label from my Kubernetes node. I accidentally marked one of my nodes as a worker when I actually intended it to be dedicated to etcd and the control plane. I ran `kubectl label node node1 node-role.kubernetes.io/worker-`, which reported that the label was removed. However, when I check the nodes, the worker label is still there. This is frustrating: even a small task like this turns into a battle. Why does the label keep reappearing, and how can I remove it for good?
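For reference, this is roughly what I'm running to remove the label and then check whether it is gone (`node1` is just my node name):

```
# Remove the worker role label; the trailing "-" tells kubectl to delete it:
kubectl label node node1 node-role.kubernetes.io/worker-

# Check whether the label is actually gone (filters output to the node-role labels):
kubectl get node node1 --show-labels | tr ',' '\n' | grep node-role
```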
3 Answers
Looks like Rancher might be managing that label automatically. Check whether the node is part of a node pool, since Rancher expects certain role labels on nodes in pools. Even if the node shows as 'Not in a Pool', you might still see inconsistencies between the different views in your cluster management UI.
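If you want to check from kubectl whether Rancher is involved, here is a rough sketch; the exact label and annotation keys vary between Rancher/RKE versions, so this just greps for anything cattle/rke related:

```
# Rancher/RKE-managed nodes usually carry cattle.io or rke.cattle.io metadata;
# if any shows up here, the role labels are probably owned by the node pool or
# cluster configuration rather than set by hand:
kubectl get node node1 -o json | grep -iE 'cattle|rke'
```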
It sounds like something is putting that label back automatically, most likely Rancher in this case. It's worth checking whether any of your cluster or node pool configuration keeps setting it; you can also watch the node to confirm, as in the sketch below.
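A quick way to confirm that is to remove the label and then immediately watch the node; a minimal sketch using the node name from the question:

```
# Show the worker role label as its own column and stream updates; if another
# controller re-adds the label, a new line appears with the value set again:
kubectl get node node1 -L node-role.kubernetes.io/worker --watch
```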
Have you checked the API server audit logs? They can tell you which user or component is reapplying the label after you remove it. There's a guide to auditing your cluster in the Kubernetes documentation.
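If audit logging is enabled, something along these lines can surface who keeps patching the node. This is only a sketch: the log path depends on your `--audit-log-path` flag, and it assumes the JSON log format with `jq` available:

```
# Filter audit events for update/patch requests against node1 and print who
# made them and when (assumes one JSON audit event per line):
grep '"resource":"nodes"' /var/log/kubernetes/audit.log \
  | grep '"name":"node1"' \
  | jq -r 'select(.verb == "patch" or .verb == "update")
           | "\(.requestReceivedTimestamp) \(.verb) by \(.user.username)"'
```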
Yeah, in some views the roles show up correctly, but in others the node is listed as 'ALL', as if it were a worker too.
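One way to see which role labels the API server actually stores on the node (the ROLES column in `kubectl get nodes` is derived from the `node-role.kubernetes.io/*` labels, while Rancher's UI computes its role view separately) is something like:

```
# Print only the node-role labels as stored in the API (requires jq):
kubectl get node node1 -o json \
  | jq '.metadata.labels | with_entries(select(.key | startswith("node-role.kubernetes.io")))'
```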