Networking has changed a lot. From the era of hardware-based physical networking to the current world where networking is defined by software, many innovations and new ways of thinking are continuously being applied. In this post, we will walk through that evolution, from physical servers to virtual machines and containers.
We know how network engineers build networks in data centers. They program quite a few network devices: servers, routers, switches, firewalls, and so on. It is a fairly manual process; programming the network devices and setting rules in the firewalls can take a few days.
In the traditional data center, an operating system is installed on a physical server, which is reserved to host a particular application. Each server has one or more network interface cards (NICs) to enable communication among the servers. Each NIC is attached to the external network infrastructure. In the above figure, the NICs are attached to a switch, which enables efficient packet communication among the endpoints.
Virtualization changes the mindset from physical to logical. It is the concept of creating logical IT resources, called virtual systems, within a physical system.
Virtualization can be classified into two categories:
- System Level Virtualization
- Process Level Virtualization
System Level Virtualization
In system-level virtualization, hardware resources are shared by virtual machines, each of which includes both an application and a complete operating system instance. A physical server running three VMs would, for example, feature a hypervisor with three separate operating systems running on top of it.
The following are the main networking components of system-level virtualization:
- Virtual Network Interface Card (vNIC)
- Virtual Switch (e.g. Open vSwitch)
The key innovation behind virtualization is an abstraction of the computer hardware that allows multiple operating systems and applications to share the hardware. This abstraction is known as the hypervisor, or virtual machine monitor (VMM). Hypervisors are further classified into Type 1 and Type 2.
A Type 1 hypervisor runs directly on the physical hardware, which is why it is also called a bare-metal hypervisor. With direct access to the underlying hardware, it is the most efficient and best-performing kind of hypervisor. VMware ESXi, Microsoft Hyper-V Server, and KVM are examples of Type 1 hypervisors.
A Type 2 hypervisor is installed on top of an existing OS. It is called a hosted hypervisor because it relies on the host machine's pre-existing OS to manage calls to CPU, memory, storage, and network resources. Type 2 hypervisors include VMware Fusion, VMware Workstation, and Oracle VM VirtualBox.
The hypervisor abstracts the resources of a physical server, and multiple VMs can be created on top of it. The hypervisor may create one or more virtual network interface cards (vNICs) for each VM. A vNIC appears to the VM as a physical NIC, but it is actually a software interface provided by the hypervisor.
Virtualization reduces the number of physical servers, and with fewer servers you spend less time on the manual tasks required for server maintenance. Deploying a VM is also much faster than deploying a physical server: you can request a VM and have it up and running 15 minutes later. And if we compare a VM with a physical server in a data center, the lifetime of a VM is typically a few weeks or months rather than years.
One of the key developments in virtualized networking infrastructure is the virtual switch. The virtual switch attaches vNICs to the physical NICs of the server and, more importantly, ties vNICs to other vNICs within the server for local communication. This minimizes the overhead on the physical network infrastructure: traffic between vNICs on the same server never leaves the host, so it is not limited by the bandwidth of the physical network. Linux natively incorporates a layer-2 virtual switch (the Linux bridge) within the kernel.
A distributed virtual switch is required to enable communication between VMs running on different physical hosts.
Process Level Virtualization
Process-level virtualization is also known as containerization. A server supporting three containerized applications requires just a single operating system, with each container sharing the operating system kernel with its companion containers. Compared with provisioning a physical server, system VMs are incredibly fast, but not fast enough to meet today's requirements, where applications are dynamic in nature. Instead of creating virtual machines and partitioning hardware resources that way, containerization creates isolated containers at the operating system level. That means the containers share an operating system, unlike system virtual machines, each of which has a complete operating system to itself.

Before jumping into the networking part, let's look at the building blocks of a container. A container is composed of the following three components:
- Layered file system
A union-capable file system provides a way to combine multiple directories into one. If you want to explore more about union file systems, you can follow my blog, Tale of a Container's file system.
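The layering idea can be illustrated, purely as an analogy and not an actual file-system API, with Python's `collections.ChainMap`: lookups search the top "layer" first, and writes land only in it, much like the writable upper layer of an overlay mount. The layer contents below are made up for illustration.

```python
from collections import ChainMap

# Two read-only "image layers" and an empty writable "container layer".
base_layer = {"/etc/hosts": "base hosts", "/bin/sh": "shell"}
app_layer = {"/app/main.py": "app code", "/etc/hosts": "app hosts"}
container_layer = {}

# ChainMap searches the layers top-down, like an overlay file system.
fs = ChainMap(container_layer, app_layer, base_layer)

print(fs["/etc/hosts"])            # the upper layer shadows the lower one
fs["/tmp/scratch"] = "new file"    # writes go only to the top (writable) layer
print(sorted(container_layer))     # the read-only layers are untouched
```

The same shadowing rule is what lets many containers share one read-only image while each keeps its own private writable layer.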
- Control groups (cgroups)
A cgroup limits how much you can use. It is a Linux kernel feature that limits, accounts for, and isolates the resource usage (CPU, memory, disk I/O, network, etc.) of a collection of processes.
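On a Linux host you can see which cgroups the current process belongs to by reading `/proc/self/cgroup`; on a cgroup-v2 system this is a single `0::/path` line, while cgroup v1 shows one line per controller hierarchy. A minimal sketch (Linux only):

```python
# Print the cgroup membership of the current process (Linux only).
# Each line has the form "hierarchy-id:controllers:path";
# cgroup v2 collapses this to a single "0::/path" line.
with open("/proc/self/cgroup") as f:
    for line in f:
        hierarchy, controllers, path = line.strip().split(":", 2)
        print(f"hierarchy={hierarchy} controllers={controllers!r} path={path}")
```

Every process a container runtime starts inside a container is placed into such a cgroup path, which is how its CPU and memory usage gets capped.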
- Namespaces
A namespace limits what you can see. It is a Linux kernel feature that partitions kernel resources so that one set of processes sees one set of resources while another set of processes sees a different set. Since this blog is all about networking, we will focus here on the network namespace.
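You can list the namespaces a process belongs to by reading the symlinks under `/proc/<pid>/ns`; each link looks like `net:[4026531992]`, and two processes share a namespace exactly when the inode numbers in these links match. A minimal sketch (Linux only):

```python
import os

def namespace_ids(pid="self"):
    """Map namespace name -> link target (e.g. "net:[4026531992]").
    Processes with identical targets share that namespace."""
    ns_dir = f"/proc/{pid}/ns"
    return {name: os.readlink(os.path.join(ns_dir, name))
            for name in sorted(os.listdir(ns_dir))}

for name, ident in namespace_ids().items():
    print(f"{name}: {ident}")
```

Comparing the `net` entry of two container processes is a quick way to check whether they share a network stack.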
The networks of two containers are isolated from each other by their network namespaces. To understand how two containers communicate with each other, we need to understand how a communication channel can be established between two network namespaces.
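As a preview, the usual way to join two network namespaces is a veth pair: one end is moved into each namespace, giving each side an addressable interface. The sketch below only builds the `ip`(8) command sequence as data rather than executing it, since actually running these commands requires root; the namespace names and addresses are made up for illustration.

```python
def veth_connect_commands(ns_a, ns_b, addr_a, addr_b):
    """Return the ip(8) commands that would connect two network
    namespaces with a veth pair. Illustrative only; running them
    requires root privileges."""
    veth_a, veth_b = f"veth-{ns_a}", f"veth-{ns_b}"
    return [
        f"ip netns add {ns_a}",
        f"ip netns add {ns_b}",
        # Create the veth pair, then push one end into each namespace.
        f"ip link add {veth_a} type veth peer name {veth_b}",
        f"ip link set {veth_a} netns {ns_a}",
        f"ip link set {veth_b} netns {ns_b}",
        # Address and bring up each end inside its namespace.
        f"ip -n {ns_a} addr add {addr_a} dev {veth_a}",
        f"ip -n {ns_b} addr add {addr_b} dev {veth_b}",
        f"ip -n {ns_a} link set {veth_a} up",
        f"ip -n {ns_b} link set {veth_b} up",
    ]

for cmd in veth_connect_commands("red", "green", "10.0.0.1/24", "10.0.0.2/24"):
    print(cmd)
```

With both ends up, `ip netns exec red ping 10.0.0.2` would reach the other namespace; a virtual switch generalizes this pairwise wiring to many namespaces.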
In the above diagram, there are two distinct network namespaces, the Red namespace and the Green namespace. These two namespaces communicate through Open vSwitch. In my next post, I will explain the procedure for establishing communication between namespaces.