I'm developing an open-source manager for ARK servers that users can host themselves. Currently, the manager runs in a Docker container and operates multiple ARK server processes within the same container, using symlinks and `LD_PRELOAD` hacks to organize their configurations and save files. I'm thinking of switching to a model where each game server operates in its own container, allowing for better organization and isolation. However, for that to work, the manager would need access to the host Docker daemon, meaning I'd have to mount the host's `/var/run/docker.sock` inside the manager's container, which raises security concerns. The manager has a web API, and a separate frontend communicates with it (no privileged access needed). What are the real-world security issues here? Are there options to enhance security without introducing vulnerabilities? Is it worth moving to a container-based approach compared to the existing process-based model?
5 Answers
I recommend not mounting the Docker socket into the manager container at all. Instead, consider a filtering proxy such as docker-socket-proxy: users run the proxy alongside your containers, and the manager talks to the proxy rather than to the socket directly, so it can be limited to, say, creating, starting, and stopping containers without getting full control of the Docker host. Either way, document the risks of granting Docker socket access; if users must grant it, the proxy at least narrows the exposed API surface.
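As a rough sketch of what that could look like in Compose form — the `tecnativa/docker-socket-proxy` image is a real project, but the manager image name, the proxy port, and the exact set of permission flags here are assumptions you would need to adjust for your setup:

```yaml
services:
  docker-proxy:
    image: tecnativa/docker-socket-proxy
    environment:
      CONTAINERS: 1        # allow listing/inspecting containers
      POST: 1              # allow state-changing requests at all
      ALLOW_START: 1       # permit POST /containers/{id}/start
      ALLOW_STOP: 1        # permit POST /containers/{id}/stop
      ALLOW_RESTARTS: 1
      # everything not explicitly enabled (exec, volumes, networks,
      # swarm, ...) stays denied by the proxy
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro

  ark-manager:
    image: your/ark-manager        # placeholder for your manager image
    environment:
      DOCKER_HOST: tcp://docker-proxy:2375   # talk to the proxy, not the socket
```

The manager container never sees `/var/run/docker.sock`; only the proxy does, and the proxy decides which API endpoints the manager may reach.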
Portainer faces a similar problem, so reading their code may give you ideas for safety measures. At a minimum, your web API should not be exposed directly to the internet.
Since the manager is self-hosted, the attack surface is smaller than for a public service, but granting privileged access is still risky. A container that does nothing but run management tasks is a relatively small target; the real danger comes from third-party code, or from shell commands triggered through a REST API, which can be severe if set up carelessly. As an alternative, consider passing requests through a message queue or a shared volume: the privileged component then never listens on the network, which avoids the typical API risks.
Hi there! (I'm working on this project as well.) Thanks for the insights. It's clear that running a REST API in a privileged container carries risks; the question is whether this approach is worth those risks compared to the alternatives. To clarify the current setup: the main container runs the manager, its REST API, and the server processes together. Splitting them into separate containers may be more secure, but it comes with trade-offs.
If the main goal is to isolate the file systems for config and save files, you could also spawn the server processes in chroot environments. That doesn't provide the same level of isolation as containers, but it simplifies file management. You could even make this pluggable: users decide whether the manager runs the servers in a Kubernetes cluster, as separate chrooted processes, or as individual containers via Docker access. Consider how people are actually likely to deploy it when deciding on this structure.
You're right that needing access to the Docker daemon introduces security risks: the socket is effectively root on the host, so a compromised manager container can escalate privileges. A better approach might be a rootless engine like Podman, which runs containers under a non-root user, so the manager never needs root-equivalent access. It also makes the architecture cleaner: with each ARK server in its own container you no longer need the symlink and `LD_PRELOAD` hacks to manage configurations, which are themselves a source of fragility and potential vulnerabilities.
Thanks for that informative response! Podman does look promising, but I'm cautious about relying on it alone for security; splitting the manager's functionality into separate containers still seems like the safer route. We remain torn: one container per server is the logical design, but the security concerns make us second-guess whether it's truly worth it.
Thanks for the suggestion! Unfortunately, even behind a proxy, if the manager is allowed to create containers, an attacker who finds a vulnerability could still launch containers with access to the host machine (e.g. via bind mounts or privileged mode) unless container creation is blocked or very tightly filtered.