What are containers and how do they work?


Okay, conceptually I understand that they are virtualized machines running discrete applications to fulfill a specific function. The idea is that you’re consuming fewer resources. But that’s about as far as I’ve gotten.

How are they “completely isolated” when they’re sharing the OS? At some point they must be sharing resources and I don’t understand how this is possible when the whole goal is to keep system failures isolated to a specific blast radius.

I’m sure I’ll come up with more questions when I get an answer but that’s where I am atm. Thanks




The OS provides various features which make this possible. [Namespaces][1] are used to isolate groups of processes from each other. [cgroups][3] are used to limit the resources (CPU time, memory) those processes consume. A system call named [chroot][2] restricts a process to seeing only a subset of the file system. And so on.
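Namespaces aren’t hidden magic, either — on any Linux box you can see which namespaces a process belongs to under `/proc`. A quick sketch (Linux-only, no root needed):

```python
import os

# Each entry under /proc/self/ns is a symlink whose target names the
# namespace type and an inode number, e.g. "pid:[4026531836]".
# Two processes share a namespace exactly when these inode numbers match;
# a containerized process will show different numbers than the host.
for name in sorted(os.listdir("/proc/self/ns")):
    print(name, "->", os.readlink(f"/proc/self/ns/{name}"))
```

Run it on the host and then inside a container and compare the inode numbers — that difference *is* the isolation.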

However, the OS provides these features in kind of a haphazard and disorganized way. They evolved over time, added by different developers at different times and for various reasons. The point of a high-level tool like Docker is to create an abstraction – a so-called *container* – which is easier to reason about. We simply think of it as an “isolated” Linux machine, even though this is very much an illusion cobbled together out of half a dozen different OS features.
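You can peek at the cgroup half of that illusion the same way: every Linux process records which cgroup it belongs to, and a container runtime just places your process into a cgroup with limits attached. A minimal look (Linux-only; the exact line format differs between cgroup v1 and v2):

```python
from pathlib import Path

# /proc/self/cgroup lists this process's cgroup membership, one line per
# hierarchy ("hierarchy-id:controllers:path"; a single "0::/..." line on
# cgroup v2 systems). Inside a container this path typically points at the
# container's own cgroup rather than the host's root cgroup.
print(Path("/proc/self/cgroup").read_text(), end="")
```

So “sharing the OS” and “being limited” aren’t contradictory: the kernel is shared, but it enforces per-group resource caps through these cgroup memberships.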

[2]: https://www.geeksforgeeks.org/linux-virtualization-using-chroot-jail/

[1]: https://en.wikipedia.org/wiki/Linux_namespaces

[3]: https://en.wikipedia.org/wiki/Cgroups
