Hey everyone! I'm diving into a K3s home setup mostly for educational purposes, but I also plan to host some client websites (like WordPress), personal projects (using Laravel), and a few useful tools (like Plex). I'd love to get some feedback to see if I'm overthinking it or if everything looks solid!
Here's what I've got so far: the whole setup is provisioned with Ansible, and all the servers talk to each other over a WireGuard mesh network. I'm using a virtual IP from Hetzner that routes to one of two servers running HAProxy as the load balancer; Keepalived fails the VIP over to the other server if the active one goes down. I'm planning to replace HAProxy with Caddy soon, since my company is making that switch as well.
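In case it helps, this is roughly the shape of the playbook; the role and group names below are just illustrative placeholders, not my actual ones:

```yaml
# Illustrative playbook layout (role/group names are placeholders, not my real ones)
- name: Provision load balancer pair
  hosts: loadbalancers        # the two Hetzner servers sharing the virtual IP
  become: true
  roles:
    - wireguard               # join the WireGuard mesh
    - haproxy                 # forward traffic to the K3s ingress workers
    - keepalived              # VRRP failover of the virtual IP between the two LBs

- name: Provision K3s workers
  hosts: ingress_workers
  become: true
  roles:
    - wireguard
    - k3s_agent               # join the cluster over the mesh
```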
The load balancers forward traffic to three K3s workers that act as ingress nodes. These are hosted with different providers (Hetzner, OVH, DigitalOcean, and Oracle) and sit in different locations/data centers, so a single provider or data-center outage can't take everything down at once.
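For context, the Ansible inventory is grouped roughly like this (hostnames and mesh IPs below are made-up examples):

```yaml
# Illustrative YAML inventory (hostnames and IPs are placeholders)
all:
  children:
    loadbalancers:
      hosts:
        lb1-hetzner: { ansible_host: 10.0.0.1 }   # WireGuard mesh addresses
        lb2-hetzner: { ansible_host: 10.0.0.2 }
    ingress_workers:
      hosts:
        worker1-hetzner: { ansible_host: 10.0.0.11, provider: hetzner }
        worker2-ovh:     { ansible_host: 10.0.0.12, provider: ovh }
        worker3-do:      { ansible_host: 10.0.0.13, provider: digitalocean }
```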
Next up, I'm looking at MetalLB to expose Traefik on those ingress nodes; Traefik will then handle all the routing inside the cluster.
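Roughly what I have in mind for MetalLB, using the newer CRD-based config (the address range is just an example inside the mesh, not a real allocation):

```yaml
# Rough MetalLB config I have in mind (addresses are placeholders)
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: ingress-pool
  namespace: metallb-system
spec:
  addresses:
    - 10.0.0.240-10.0.0.250   # range for Traefik's LoadBalancer service
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: ingress-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - ingress-pool
```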
My main question is whether I'm on the right track: am I using these components sensibly, or is this more complicated than it needs to be for what I'm trying to achieve? Ultimately I want a high-availability (HA) setup that I can scale down to save costs and spin back up with Ansible whenever necessary.
Thanks in advance for any insights you can share!
3 Answers
This sounds like a fun home lab project! Just be aware that hosting WordPress on K3s can be overkill for a personal project; it's often simpler to run WordPress the traditional way on a straightforward server in a data center, with proper PHP sandboxing. But hey, if this is mostly for learning, go for it!
Splitting your nodes across different cloud providers isn't a bad idea, but watch out for egress charges on cross-node traffic; every provider bills this differently. To keep latency down, label your nodes and use affinity/scheduling rules so related workloads land together. You really don't want your web server talking to a database on another provider's node if you can avoid it.
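Something like this keeps a web deployment on one provider and prefers scheduling it next to its database (the "provider" label and the mariadb selector are hypothetical, just to show the idea):

```yaml
# Hypothetical example: pin a WordPress deployment to nodes carrying a
# self-assigned "provider" label and prefer co-locating it with its DB pod.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
spec:
  replicas: 2
  selector:
    matchLabels: { app: wordpress }
  template:
    metadata:
      labels: { app: wordpress }
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: provider          # label you set yourself, e.g. via Ansible
                    operator: In
                    values: [hetzner]
        podAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchLabels: { app: mariadb }
                topologyKey: kubernetes.io/hostname
      containers:
        - name: wordpress
          image: wordpress:6
          ports:
            - containerPort: 80
```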
That’s a great point! I need to come up with a solid database solution. I’m considering whether to run a separate DB server or just keep it in containers within the cluster. Right now, I’m using a managed DB from DigitalOcean, but it's a bit pricey!
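If I go the in-cluster route, something as simple as a single-replica StatefulSet with a PVC is what I had in mind (purely a sketch; the secret and storage sizes are placeholders):

```yaml
# Sketch only: single-replica MariaDB StatefulSet with persistent storage
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mariadb
spec:
  serviceName: mariadb
  replicas: 1
  selector:
    matchLabels: { app: mariadb }
  template:
    metadata:
      labels: { app: mariadb }
    spec:
      containers:
        - name: mariadb
          image: mariadb:11
          env:
            - name: MARIADB_ROOT_PASSWORD
              valueFrom:
                secretKeyRef: { name: mariadb-secret, key: root-password }
          volumeMounts:
            - name: data
              mountPath: /var/lib/mysql
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```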
Home setups plus Kubernetes usually lead to complexity, but honestly, your setup seems good overall. There’s no "right way" in the K8s world, so as long as it works for you and meets your needs, you're on the right path!
I totally get what you're saying. The sites I'm hosting are for friends and family, so I’m not charging them much. It’s more about the learning experience for me!