Should I Use Self-Hosted or GitHub-Hosted Runners for My Kubernetes CI/CD?

Asked By CloudyChameleon42

Hey everyone! I'm looking for some advice on integrating GitHub Actions with my Kubernetes setup, especially when it comes to security and CI/CD architecture. Here's a quick overview of my current infrastructure: I have a Proxmox cluster consisting of two servers and a quorum device, all set up for high availability. Within this cluster, I have four VMs dedicated to Kubernetes, including one control plane and three worker nodes. My networking is managed through a Mikrotik firewall.

My goals are to effectively run CI/CD pipelines for container builds and deployments. I'm considering the pros and cons of using self-hosted GitHub Actions runners inside the Kubernetes cluster versus sticking with GitHub's hosted runners. I'm particularly interested in ensuring secure connectivity from the runners to the Kubernetes API on port 6443 without exposing it unnecessarily to the internet.

I have a few key questions:
1. With GitHub-hosted runners, is it typical to expose port 6443 publicly with IP allowlists, or should I explore using GitOps (like ArgoCD or Flux) to pull changes, or even setting up outbound VPN or Zero-Trust tunnels?
2. If I choose to go with self-hosted runners (like through actions-runner-controller on Kubernetes), what potential pitfalls should I be aware of regarding high availability, security (namespace isolation, secrets management, RBAC), and ongoing maintenance?
3. From your experience, what is the best and most secure method for CI/CD in this environment: GitHub-hosted with GitOps, GitHub-hosted with tunneling, fully self-hosted runners in Kubernetes, or something else?
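For the GitOps option in question 1, a minimal pull-based sketch with Flux might look like the following. The idea is that the cluster only makes outbound HTTPS connections to GitHub, so port 6443 never needs to be exposed; the owner, repository, and path values below are placeholders, not anything from a real setup:

```shell
# Sketch: bootstrap Flux into the cluster so it *pulls* manifests from GitHub.
# The cluster initiates all connections outbound; the API server stays private.
# --owner / --repository / --path are placeholders for your own repo layout.
export GITHUB_TOKEN=<personal-access-token>

flux bootstrap github \
  --owner=my-github-user \
  --repository=k8s-fleet \
  --branch=main \
  --path=clusters/homelab \
  --personal
```

With this layout, CI on GitHub-hosted runners only builds and pushes images and commits manifest updates to the repo; Flux running inside the cluster reconciles those changes, so the runners never talk to the Kubernetes API at all.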

If you've worked with similar setups (Proxmox, Kubernetes, GitHub Actions), I'd love to hear how you managed the runner integration and API server exposure. What strategies worked for you and what should I avoid? Thanks for your insights!

2 Answers

Answered By DevNinja77

Some people have opted for self-hosting GitLab on a shared cluster for pipelines that need direct Kubernetes access, since the runners can then reach the API over the local network instead of the internet. If your pipelines interact with Kubernetes frequently, it's a viable option.

CloudyChameleon42 -

That’s a valid point! However, since all our repositories are on GitHub, setting up a whole GitLab stack seems like a lot of work. I think I’ll lean toward GitOps or the self-hosted runners for better local access.

Answered By ServerWhisperer93

GitHub-hosted runners are really powerful, and they're free for public repositories (private repos get a monthly minute allowance), so leveraging them is a good default if your workflows don't have special networking or hardware requirements. Just make sure you lock down source-code security and API access properly. If you do go self-hosted, I'd suggest hosting your source code too to keep everything streamlined. You might also want to check out Gitea with Actions enabled, since it can run GitHub Actions-compatible workflows in a lighter way on your cluster!

K8sExplorer88 -

I totally agree about GitHub-hosted runners being convenient! But for security reasons, I want to keep tight control over access to the Kubernetes API. That’s why I’m looking into self-hosted options; they could also help with build performance with local caches. Gitea sounds cool, though - have you tried it in production? How stable does its Actions implementation feel?
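On the actions-runner-controller route from question 2, the upstream install is two Helm charts: a controller and a runner scale set. A hedged sketch under the assumption you're using ARC's current OCI-published charts; the GitHub URL and token below are placeholders, and you'd normally store the token in a proper secret rather than on the command line:

```shell
# Controller that manages runner scale sets (ARC's published chart).
helm install arc \
  --namespace arc-systems --create-namespace \
  oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set-controller

# A runner scale set bound to one repo or org. Keeping it in its own
# namespace makes it easier to isolate build workloads with RBAC and
# NetworkPolicies, which touches on the pitfalls in question 2.
helm install arc-runner-set \
  --namespace arc-runners --create-namespace \
  --set githubConfigUrl="https://github.com/<org>/<repo>" \
  --set githubConfigSecret.github_token="<PAT>" \
  oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set
```

Runners installed this way sit inside the cluster and reach the API over the pod network, so nothing needs to be exposed through the Mikrotik firewall; the trade-off is that you now own the upgrade, scaling, and isolation story for the runners themselves.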
