Old school ‘virtualization’ virtualizes hardware for use by a virtual machine. A container virtualizes the operating system, so, necessarily, the container must run the same OS as the host. This technology has been around for a long time in some form or another; Docker is just one iteration. One thing to keep in mind: you don’t usually run a whole OS in a container – the container shares the host’s kernel – so it is probably better to say the container must be *compatible* with the host’s operating system.
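A quick way to see that sharing in action (a minimal sketch, assuming Docker is installed and the public `alpine` image can be pulled):

```bash
# On the host:
uname -r                          # prints the host's kernel version

# Inside a throwaway container – same kernel, different userland:
docker run --rm alpine uname -r   # prints the *same* kernel version
```

Contrast that with a VM, which boots its own kernel on top of virtualized hardware.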
Say I have an application that requires a certain version of Java. I could spin up an entire virtual machine running the desired OS and all of its supporting bits, then install the correct version of Java, then install the application. Or I could ‘containerize’ the application by creating a container with just the correct version of Java and the code that runs the application. That is a lot more lightweight: when the developer updates the app, I just destroy the existing container and put in the new one. Linux admins will recognize this idea as very similar to a chroot jail – another method of isolating processes.
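As a sketch of what ‘containerizing’ that Java app might look like (the base image tag and `app.jar` here are placeholders, not anything from the original setup), the whole recipe fits in a short Dockerfile:

```dockerfile
# Base image that already contains the exact Java version the app needs
FROM eclipse-temurin:17-jre

# Copy in only the application itself – no full OS install required
COPY app.jar /opt/app/app.jar

# The container runs this one process, isolated from the host
CMD ["java", "-jar", "/opt/app/app.jar"]
```

When the developer ships a new version, you rebuild the image and swap the container – there is no whole VM to patch.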
The benefits are a little more mixed than a lot of people will admit to. I run a major application on Kubernetes (an orchestration platform; the underlying container runtime is dockerd), and I am regularly surprised by what it *can’t* do. Performance monitoring is a joke, collecting logs is a massive PITA, it doesn’t do dynamic load balancing out of the box, etc. At the end of the day you are still running some developer’s crappy code, but with an added layer of abstraction that is super difficult to explain to people.