I've been reading a lot about Docker networking and came across the idea that using the host network driver could improve container performance. I'm deploying a GitLab ecosystem consisting of GitLab and ten runners on a dedicated virtual machine, and I'm wondering if it makes sense to create a separate virtual interface for each container. For context, the physical server hosting the VM has three 10Gig NICs, with one dedicated to the VM running the GitLab ecosystem. Any insights?
2 Answers
Are your runners going to use the shell executor or the Docker executor? If you're using the Docker executor, as I'd recommend for a flexible CI system, then networking for the runners won't matter much: each job spins up a fresh container anyway.
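For reference, a Docker-executor runner is configured in `config.toml`. This is a hypothetical sketch, not your actual config; the name, URL, token, and image are all placeholder assumptions:

```toml
# Hypothetical excerpt of /etc/gitlab-runner/config.toml
concurrent = 10                       # up to ten jobs in parallel

[[runners]]
  name = "docker-runner-1"            # illustrative name
  url = "https://gitlab.example.com/" # your GitLab instance
  token = "REDACTED"
  executor = "docker"
  [runners.docker]
    image = "alpine:latest"           # default image for jobs
    network_mode = "bridge"           # job containers use the default bridge
```

With this setup, each CI job gets its own short-lived container on the bridge network, so tuning the runner container's own networking buys you little.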
It's worth understanding how Docker networks operate behind the scenes: they're essentially Linux network namespaces wired together with veth pairs and iptables rules. Host mode networking probably isn't the right approach for your setup. GitLab and its runners rarely have network demands sensitive to the overhead that bridge-mode NAT adds, and that overhead is typically measured in microseconds, not milliseconds. Overall, this feels like over-engineering or premature micro-optimization, honestly.
Definitely start with bridge mode for all your containers. Only look into performance tweaks if you notice any actual issues, then dig deeper.
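If you're deploying the stack with Compose, the default already gives you what you want. A minimal sketch, assuming Compose is in use (service names, images, and ports here are illustrative, not from your setup):

```yaml
# Hypothetical docker-compose.yml sketch
services:
  gitlab:
    image: gitlab/gitlab-ce:latest
    ports:
      - "443:443"
      - "80:80"
      - "2222:22"        # host SSH port assumed remapped to avoid clashing with the VM's sshd
    networks:
      - gitlab-net
  runner:
    image: gitlab/gitlab-runner:latest
    networks:
      - gitlab-net

networks:
  gitlab-net:
    driver: bridge        # the default driver; stated explicitly for clarity
```

A single user-defined bridge network like this also gives you container-name DNS between GitLab and the runners, which host mode would take away.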
Thanks for clarifying that! I think I'll skip the host network mode.

You're spot on; I've always preferred Docker mode on my laptop too. I remember struggling with Docker-in-Docker issues when I first started using pipelines. I'll stick with bridge mode for now. Thanks for the advice!