Hello readers! In this blog, I'll cover what Kubernetes is and give you an overview of its master-worker architecture and components. So let's get started.
What is Kubernetes?
Kubernetes is an open-source container orchestration tool, or, put simply, a container management tool. Here, "orchestration" means the management of containers: deploying, scaling, networking, and monitoring them.
Some people confuse it with a containerization tool like Docker, which is not accurate. Kubernetes does not create containers; it manages containers created with a containerization tool such as Docker.
Kubernetes: A Bigger Picture
Kubernetes orchestrates services running inside containers. Our role is to dockerize the app and then hand it to the Kubernetes cluster in the form of Kubernetes objects. From there, the cluster takes care of those services.
A Kubernetes cluster mainly consists of two types of nodes: the master node and the worker node.
The master and the worker
The master node, also known as the control plane of Kubernetes, controls the whole cluster. The master monitors the cluster, makes changes, and schedules work. The worker nodes (previously known as minions) run the actual workloads. They report back to the master and watch for changes.
We package the application by specifying a manifest in YAML or JSON format to tell desired state of our cluster. The manifest file includes:
- Images to use
- Ports to expose
- Desired replicas
The master takes responsibility for deploying the desired state onto the worker nodes of the cluster and keeps the app running so that the actual state matches the desired one.
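As a concrete illustration, a minimal Deployment manifest covering those three fields might look like the sketch below (the name, image, port, and replica count are all placeholders, not values from any particular app):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web            # hypothetical app name
spec:
  replicas: 3                # desired replicas
  selector:
    matchLabels:
      app: hello-web
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
      - name: web
        image: nginx:1.25    # image to use
        ports:
        - containerPort: 80  # port to expose
```

Handing this manifest to the cluster declares the desired state; the master then works to make the actual state match it.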
The master comprises several components that typically run on a single server. It is mainly responsible for running the whole cluster: it monitors the nodes, and if any node fails, it has the responsibility of shifting that node's workload to another healthy node.
A production cluster often runs more than one master node for high availability and fault tolerance. Since the workloads run on the worker nodes, the master stays free to look after the cluster.
The master consists of four components: the api-server, the scheduler, the controller manager, and etcd.
The api-server
This is the front end of the control plane and acts as the gatekeeper for the entire cluster. It is the only master component through which we talk to the cluster. It consumes a manifest file (written in YAML or JSON) that declares the desired state of our application, and it exposes a RESTful API over HTTPS (port 6443 by default).
It also validates the supplied manifest file and then persists the desired state to the cluster. To interact with the api-server, we have a command-line tool called kubectl, also known as "kube control".
The scheduler
As the name suggests, this component schedules pods onto nodes. The scheduler watches the api-server for new pods and then assigns each one to a suitable worker.
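A pod spec can also constrain where the scheduler is allowed to place it, for example with a nodeSelector. In this sketch, the pod name, label key, and value are illustrative, so the scheduler would only consider nodes carrying that label:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ssd-task             # hypothetical pod name
spec:
  nodeSelector:
    disktype: ssd            # scheduler only considers nodes with this label
  containers:
  - name: main
    image: busybox:1.36
    command: ["sleep", "3600"]
```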
The controller manager
The controller manager is the component that runs the controllers. Each controller runs a control loop that watches the shared state of the cluster through the api-server and makes changes to move the current state toward the desired state. These controllers include:
- Node Controller: Responsible for noticing and responding when nodes go down.
- Replication Controller: Responsible for maintaining the correct number of pods for every replication controller object in the system.
- Endpoints Controller: Responsible for joining Services and Pods.
- Service Account and Token Controllers: Responsible for creating default accounts and API access tokens for new namespaces.
etcd
This is the memory of the cluster: a consistent and highly available key-value store used to store all cluster data. etcd is an open-source distributed store that is consistent and watchable. It is considered the single source of truth for the cluster, and any component can query the cluster state (through the api-server, which reads from etcd). It is the only stateful part of the cluster; all the other components are stateless.
Worker nodes are the workhorses of the cluster; they carry out the actual work. A cluster can scale to hundreds or even thousands of workers. These workers perform the tasks assigned to them by the master. In the pets-vs-cattle analogy, worker nodes are the cattle: whenever a node dies, Kubernetes recreates that node's workload on a separate healthy node.
The worker consists of three components: the kubelet, the container runtime, and kube-proxy.
Kubelet
Kubelet is the main agent on the node. It makes sure that containers are running in a pod.
It watches the api-server on the master for work assignments, which are delivered to the kubelet in the form of PodSpecs. Whenever it is notified of a new pod, it carries out the work and sets up a reporting channel back to the master. If anything goes wrong on the node, it is the kubelet's responsibility to report it to the master, and the master then decides what to do.
Container runtime
The container runtime is responsible for running the containers. Kubernetes supports several container runtimes, including Docker, containerd, and CRI-O. This component of the worker actually runs the pods, which have containers inside them: it pulls the specified images and starts and stops their runtime instances, the containers.
Kube-proxy
Kube-proxy is the network brain of the node. On each node, it maintains the networking rules that let every pod be reached at its own IP address. All containers inside a pod share that pod IP, so individual containers are addressed by their ports. It also provides lightweight load balancing across all the pods in a service (a way of hiding multiple pods behind a single network address).
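For example, a Service that kube-proxy would implement on each node might be declared like this (the names and labels are illustrative); traffic sent to the service's address is balanced across all pods whose labels match the selector:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-web-svc        # hypothetical service name
spec:
  selector:
    app: hello-web           # pods carrying this label back the service
  ports:
  - port: 80                 # port clients use on the service address
    targetPort: 80           # container port on the selected pods
```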
I hope this explains the architecture and components of Kubernetes. If you have any doubts, feel free to contact me at email@example.com.
Thank you for sticking with me to the end. If you liked this blog, please show your appreciation with a thumbs up, share it, and give me suggestions on how I can improve my future posts to suit your needs. Follow me to get updates on different technologies.