Docker is hardly a new term to most of us; it is making waves everywhere.
But what exactly is Docker?
In simple words, Docker is a software containerization platform: you can build your application, package it along with its dependencies into a container, and then easily ship that container to run on other machines.
Okay, but what is Containerization anyway?
Containerization, also called container-based virtualization or application containerization, is an OS-level virtualization method for deploying and running distributed applications without launching an entire VM for each application. Instead, multiple isolated systems, called containers, run on a single control host and access a single kernel.
A container image is a lightweight, stand-alone, executable package of a piece of software that includes everything needed to run it: code, runtime, system tools, system libraries, settings.
So the main aim is to package the software into standardized units for development, shipment, and deployment.
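That “everything needed to run it” is described in a Dockerfile. The sketch below is a minimal hypothetical example, not from the original post; the base image, file names, and the Flask dependency are all assumptions chosen purely for illustration:

```dockerfile
# Start from a minimal base image that provides the runtime
# (assumption: a Python application)
FROM python:3.12-slim

# Copy the application code into the image
WORKDIR /app
COPY app.py .

# Install the libraries the application depends on
RUN pip install --no-cache-dir flask

# The command the container runs when it starts
CMD ["python", "app.py"]
```

Everything the application needs — runtime, libraries, code, and start command — is captured in this one file, so the resulting image behaves the same wherever it runs.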
Let’s make this clear with an example.
Suppose there’s a Linux application written in Scala and R. To avoid any version conflicts between Linux, Scala, and R, Docker simply wraps the application in a container with the exact versions and dependencies it needs, so it can be deployed on any OS or server without any version hassle.
Now, all we need to do is to run this container without worrying about the dependent software and libraries.
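In practice, “running this container” boils down to two commands. This is a hedged sketch: it assumes Docker is installed on the host, and the image tag `my-scala-r-app` is a hypothetical name for the example application above:

```shell
# Build an image from the Dockerfile in the current directory
# ("my-scala-r-app" is a hypothetical tag)
docker build -t my-scala-r-app .

# Run the container anywhere Docker is installed -- no need to
# install Scala, R, or their libraries on the host first
docker run --rm my-scala-r-app
```

The same image can be pushed to a registry and pulled on any other machine, which is what “shipping” a container means.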
So, the process is really simple. Each application runs in a separate container and has its own set of libraries and dependencies. This also ensures process-level isolation: each application is independent of the others, giving developers confidence that the applications they build will not interfere with one another.
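That isolation is easy to see from the command line. Again assuming Docker is installed and reusing the hypothetical `my-scala-r-app` image from earlier, you could run:

```shell
# Start two containers from the same image; each gets its own
# isolated filesystem, process table, and network namespace
docker run -d --name app1 my-scala-r-app
docker run -d --name app2 my-scala-r-app

# Each container sees only its own processes, not the host's
# or the other container's
docker exec app1 ps aux
```

Changes made inside `app1` (files written, packages installed, processes started) are invisible to `app2`, even though both came from the same image.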
Containers vs. Virtual Machines
Containers are an abstraction at the app layer that packages code and dependencies together. Multiple containers can run on the same machine and share the OS kernel with other containers, each running as isolated processes in userspace. Containers take up less space than VMs (container images are typically tens of MBs in size) and start almost instantly.
With containerization, there is a host OS, and above it run the containers, each holding the dependencies and libraries for its application, which makes processing and execution very fast. There is no guest OS here: unlike a virtual machine, a container uses the host’s operating system and shares the relevant libraries and resources as and when needed.
Virtual machines (VMs) are an abstraction of physical hardware turning one server into many servers. The hypervisor allows multiple VMs to run on a single machine. Each VM includes a full copy of an operating system, one or more apps, necessary binaries and libraries – taking up tens of GBs. VMs can also be slow to boot.
In the case of virtualization, there is a host operating system on which several guest operating systems run, and each guest is a virtual machine. Running multiple virtual machines on the same host leads to performance degradation, as each has its own kernel and its own set of libraries and dependencies. This takes up a large chunk of system resources: hard disk, processor, and especially RAM.
So, that was a quick overview of Docker, containerization, and virtualization. In the next blog, we will try our hands at Docker.
Hope it helped! 🙂