I've heard that Kubernetes can run in a variety of places, from bare metal to the cloud, but I'm curious about workloads. Can you actually set up a single cluster with the master nodes on-premises while having worker nodes in AWS, GCP, or other clouds? Is that even feasible, or would it just lead to latency issues? Would it be better to manage separate clusters instead of mixing the environments? I'm really trying to look beyond the buzzwords and understand the real limitations here.
4 Answers
Yes, it's doable! As long as the data centers are relatively close, latency shouldn't be a major concern, but keep in mind that egress (data-transfer-out) fees can get pretty high once pod traffic starts crossing the cloud boundary.
Sure, it’s possible! But why do you want to do it? The kubelet is quite chatty with the control plane, and introducing WAN latency between workers and the API server is asking for trouble in production. It's an interesting experiment, but I wouldn’t stake my operations on it long-term.
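For what it's worth, mechanically joining a cloud VM to an on-prem control plane is just a standard `kubeadm join` over whatever connectivity you've set up (VPN or public endpoint). This is a sketch with placeholder values, not a recipe; the endpoint, token, and hash below are made up:

```shell
# On the on-prem control plane node: print a fresh join command
# (generates a bootstrap token and the CA cert hash).
kubeadm token create --print-join-command

# On the cloud worker VM: join across the WAN.
# cp.example.internal:6443 is a placeholder -- it must be reachable
# from the cloud network (VPN tunnel or exposed endpoint).
kubeadm join cp.example.internal:6443 \
  --token abcdef.0123456789abcdef \
  --discovery-token-ca-cert-hash sha256:<hash-from-the-command-above>
```

If latency makes remote nodes flap between Ready/NotReady, the usual knobs are the kubelet's `nodeStatusUpdateFrequency` and the controller manager's `--node-monitor-grace-period`, but loosening them just papers over the chattiness mentioned above.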
You can give this a try depending on your workload. One approach is to set up a control plane in each environment and configure a networking plugin to connect everything across them. A few resources that might help:
- tryitands.ee,
- kubermatic.com,
- cloud.google.com/kubernetes-engine/multi-cloud/docs,
- developer.hashicorp.com/terraform/tutorials/networking/multicloud-kubernetes.
Yes, you can definitely set that up! But honestly, I wouldn't recommend it based on my experience. I tried linking multiple locations into one cluster using WireGuard, and it didn’t go well. Just some food for thought!
There are scenarios where it makes sense, like when workloads aren't time-sensitive, such as batch jobs. For seasonal spikes, using spot instances could be a great solution without needing dedicated on-prem resources.
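To make the batch-on-spot pattern concrete, here is a minimal sketch of a Job pinned to spot capacity. The node label and taint key are assumptions that vary by provider (the label shown is EKS-style; Karpenter, GKE, and others use different keys), so adjust for your setup:

```shell
# Submit a retryable batch Job that only schedules onto spot nodes.
kubectl apply -f - <<'EOF'
apiVersion: batch/v1
kind: Job
metadata:
  name: nightly-batch
spec:
  backoffLimit: 4                 # retries let the Job survive spot reclamation
  template:
    spec:
      restartPolicy: OnFailure
      nodeSelector:
        eks.amazonaws.com/capacityType: SPOT   # provider-specific label; example only
      tolerations:
      - key: "spot"               # example taint; only needed if spot nodes are tainted
        operator: "Exists"
        effect: "NoSchedule"
      containers:
      - name: batch
        image: busybox:1.36
        command: ["sh", "-c", "echo processing && sleep 30"]
EOF
```

Because the work is interruption-tolerant, losing a spot node mid-run just costs a retry rather than an outage, which is exactly why batch workloads are the safest fit for this setup.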