I have a large node that I'd like to use across several Kubernetes clusters. So far my research hasn't turned up a solid way to do this, and I've even come across recommendations against it. Why is that? This seems like a fairly common scenario, so what are my alternatives?
5 Answers
Sharing a node across multiple Kubernetes clusters sounds appealing, but in practice it leads to resource conflicts and security problems: each cluster's kubelet expects exclusive ownership of the node. Instead, consider using namespaces to separate workloads within a single cluster, or carving the machine into isolated clusters with solutions like KubeVirt or Harvester.
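As a rough sketch of the namespace approach, you can give each tenant a quota-limited slice of the one shared cluster. The names and limits below are placeholders, not anything specific to your setup, and the GPU line assumes the NVIDIA device plugin is installed:

```yaml
# Hypothetical tenant namespace plus a resource quota; all names and
# limits here are illustrative placeholders.
apiVersion: v1
kind: Namespace
metadata:
  name: team-a
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "32"            # cap total CPU requested by team-a pods
    requests.memory: 256Gi        # cap total memory requested by team-a pods
    requests.nvidia.com/gpu: "1"  # cap GPUs (assumes the NVIDIA device plugin)
```

Each workload then runs in its own namespace instead of its own cluster, which sidesteps the node-ownership problem entirely.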
This isn't a typical use case. Kubernetes is designed around the assumption that a whole machine is managed by a single kubelet. If you need to split the hardware up, creating VMs is one way to go about it: each VM runs its own OS and can join a different cluster as a separate node.
This is an interesting scenario! Technically you could run multiple kubelet instances on the same node, each configured for a different cluster, but it gets complicated quickly. What's your specific use case?
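To give a sense of what "complicated" means here: a second kubelet would at minimum need its own ports, root directory, and kubeconfig so it doesn't collide with the first. A minimal sketch of the second instance's config might look like the following, where the port numbers and path are arbitrary examples and you'd still pass a separate `--kubeconfig` and `--root-dir` on its command line:

```yaml
# Sketch of a config for a second kubelet on the same host; port
# numbers and the static pod path are arbitrary example values.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
port: 10260                  # the default kubelet already owns 10250
healthzPort: 10268           # default is 10248
staticPodPath: /etc/kubernetes/manifests-b
```

Even with all of that separated, the two kubelets still contend for the same container runtime and cgroup hierarchy, which is why this is rarely done in practice.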
If the node is already part of a Kubernetes cluster, you can use KubeVirt to launch VMs on it and join those VMs to your other clusters as worker nodes. Another option is running kubelets inside pods, which is the approach used by the Cluster API nested providers.
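A sketch of the KubeVirt route is below. The disk image and the GPU `deviceName` are example values, since the device name depends on which device plugin advertises your card; sizing is arbitrary too:

```yaml
# Sketch: a KubeVirt VM that could be joined to another cluster as a
# worker node. Disk image, sizing, and GPU deviceName are examples.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: cluster-b-worker
spec:
  running: false               # start it only when the other cluster needs the node
  template:
    spec:
      domain:
        cpu:
          cores: 16
        memory:
          guest: 64Gi
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
          gpus:
            - name: gpu0
              deviceName: nvidia.com/TU104GL_Tesla_T4  # example; match your device plugin
      volumes:
        - name: rootdisk
          containerDisk:
            image: quay.io/containerdisks/ubuntu:22.04  # example guest image
```

You'd then bootstrap this VM as a node in the second cluster with whatever join mechanism that cluster uses (kubeadm, Cluster API, etc.).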
Not really a common setup! Typically, if you're running Kubernetes, you won't have several separate clusters needing the same node at the same time. You could consider multiple VMs with PCI passthrough for the hardware, and only start the VM whose cluster needs access at any given moment.
I'm trying to share the GPU on this large node between separate clusters running Kubeflow and NVFlare, but only one of them needs it at a time.
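For context, inside whichever cluster currently owns the device, the training pods would claim it as a standard extended resource, something like the sketch below (the image is just an example):

```yaml
# Standard GPU request via the NVIDIA device plugin; the container
# image is only an example.
apiVersion: v1
kind: Pod
metadata:
  name: training-job
spec:
  containers:
    - name: trainer
      image: nvcr.io/nvidia/pytorch:24.01-py3  # example image
      resources:
        limits:
          nvidia.com/gpu: 1  # whole-GPU granularity unless MIG/time-slicing is configured
```

So the open question is really how to move that one `nvidia.com/gpu` resource between clusters on demand.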