I'm diving into how Docker works and I've gathered that it virtualizes part of the OS but relies on a Linux kernel to keep things lightweight. I also learned there are solutions for running Docker containers on other OSes by providing some kind of substitute Linux kernel. So, in a way, the container ends up running in a Linux environment, right? My concern is: if I want to deploy the application in a non-Linux environment later on, do I have to redo all the dependency management? That seems to defeat the purpose of using Docker. Or can I keep everything inside the container and run it on that substitute kernel, even though that might add unwanted overhead in deployment? I feel like I'm missing some important bits here, so any insight would really help! Thanks!
3 Answers
Actually, there are also Windows base images available for Docker (Windows Server Core and Nano Server, for example), so it's not limited to just Linux applications. However, you're spot on that container tech grew out of the Linux ecosystem, and it's definitely more commonly used and better supported there.
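To make that concrete, here's a minimal sketch of pulling and running a Windows container on a Windows host that has been switched to Windows containers; the image name and tag are just examples, so pick one that matches your setup:

```
# On a Windows host with Docker switched to Windows containers
# (image/tag are illustrative; use one compatible with your host build)
docker pull mcr.microsoft.com/windows/nanoserver:ltsc2022

# Run a throwaway container and print the Windows version inside it
docker run --rm mcr.microsoft.com/windows/nanoserver:ltsc2022 cmd /c ver
```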
You bring up a good point about deployment! Remember, Docker containers are meant to stay intact once they reach their destination: the image ships with its dependencies baked in, and you don't crack it open on the target host. Instead, running containers talk to each other or to the outside world through published ports, networks, and volumes as needed.
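As a rough sketch of what that looks like in practice (the image names, container names, and ports here are only examples):

```
# Publish container port 80 as host port 8080 so the outside world can reach it
docker run -d --name web -p 8080:80 nginx

# Containers reach each other by name on a user-defined network
docker network create appnet
docker network connect appnet web
docker run --rm --network appnet curlimages/curl http://web
```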
Just to clarify, Docker isn't actually virtualizing anything in the traditional sense. It uses Linux namespaces for isolation and cgroups for resource control, so a containerized process is just a regular process on the host. If you list the processes on the Linux host running a container, you'll see them right there.
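You can check this yourself; assuming a Linux host with Docker installed (the image and container name are arbitrary):

```
# Start a long-running process inside a container
docker run -d --name sleeper alpine sleep 3000

# On the host, the same process shows up as an ordinary process
ps -ef | grep 'sleep 3000'

# docker top reports the container's processes from the host's perspective
docker top sleeper
```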
I noticed on the Windows base image repo that there are host restrictions: the build of the base image has to match the host's Windows build, or you have to fall back to Hyper-V isolation, which brings virtualization back into the picture. Could complicate things!
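For reference, that fallback is what the `--isolation` flag controls on a Windows host; the image tag below is just an example:

```
# Process isolation: the image build must match the host build
docker run --rm --isolation=process mcr.microsoft.com/windows/nanoserver:ltsc2022 cmd /c ver

# Hyper-V isolation: runs the container inside a lightweight VM,
# relaxing the build-match requirement at the cost of extra overhead
docker run --rm --isolation=hyperv mcr.microsoft.com/windows/nanoserver:ltsc2022 cmd /c ver
```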