Introduction to Google Kubernetes


Over the past two years, containerization has given developers a great deal of flexibility, with Docker being the most popular container technology. Containers provide developers a virtual environment that isolates a process or application from the host system. Before everything else, let's take a look at what used to happen in the past.

Earlier, when the number of systems running applications was limited, servers were mostly identified by name so that users could easily tell which software or application was running on which machine. If a server died, everyone rushed for the backup, and the admin felt lucky if it was up to date. Obviously, the same approach won't work for scaling an application, and it is simply not feasible with hundreds or thousands of servers. What is needed is automation that takes full responsibility for allocating resources to applications on specific machines, along with continuous monitoring and resilience.

One of the main reasons for deploying a service in containers is that they are flexible, lightweight, and easy to scale when deploying across hundreds of machines. Now the question arises: who is going to manage the containers on such a large set of machines? That's why Kubernetes and other container orchestration tools came into the picture. Let's understand what it is.

What is Kubernetes?

Kubernetes (also referred to as “K8s”, “K8”, or sometimes even “the Kubes”) is an open-source project started by Google for managing containerized applications on a cluster, providing scaling, deployment, and maintenance of the applications.


Google Kubernetes

Kubernetes uses a master-slave architecture in which the master node is the control plane of the Kubernetes cluster. The master node is responsible for cluster-level scheduling as well as handling events. To maintain high availability and reliability, there can be multiple master nodes. The major components of the master node are:

API Server provides the REST API used for communication between the components of Kubernetes. Most operations go through kubectl, but the API can also be accessed directly using REST calls.
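As a sketch of the two access paths, a pod listing can be fetched either through kubectl or with a raw REST call against the API server. The server address, token, and CA file below are placeholders, not values from a real cluster:

```shell
# List pods in the default namespace through kubectl,
# which wraps the same REST API under the hood.
kubectl get pods --namespace default

# Equivalent raw REST call against the API server.
# Address, token, and CA certificate are placeholders.
APISERVER="https://<master-ip>:6443"
TOKEN="<bearer-token>"
curl --cacert ca.crt -H "Authorization: Bearer ${TOKEN}" \
  "${APISERVER}/api/v1/namespaces/default/pods"
```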

Etcd is a highly reliable, distributed key-value store that holds the entire state of the cluster. In a multi-node cluster, we need to configure etcd to take backups periodically. Accessing etcd requires root permissions, and it is recommended to grant access only to those nodes that actually require it.
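As an illustrative sketch of a periodic backup, a point-in-time snapshot can be taken with the v3 `etcdctl snapshot` command; the endpoint and certificate paths below are placeholders for a real cluster:

```shell
# Take a point-in-time snapshot of the cluster state stored in etcd.
# Endpoint and TLS file paths are placeholders for your cluster.
ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  snapshot save /var/backups/etcd-snapshot.db

# Verify the snapshot afterwards.
ETCDCTL_API=3 etcdctl snapshot status /var/backups/etcd-snapshot.db
```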

Scheduler is the resource manager of Kubernetes. It simply looks for all the pods that are not yet assigned to a node and selects nodes for them to run on. Its decisions take into account resource requirements, data locality, hardware/software constraints, and so on.
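The core filtering step can be sketched in a few lines of Python. This is a simplified illustration, not the real scheduler; the node capacities and pod requests are made-up numbers:

```python
# Simplified sketch of the scheduler's filtering step: pick a node
# whose free resources can satisfy an unassigned pod's requests.
# Illustration only, not the real Kubernetes scheduler.

def fits(node, pod):
    """Check whether a node's free CPU/memory can hold the pod."""
    return (node["free_cpu"] >= pod["cpu"] and
            node["free_mem"] >= pod["mem"])

def schedule(pod, nodes):
    """Return the name of the first node the pod fits on, else None."""
    for node in nodes:
        if fits(node, pod):
            return node["name"]
    return None

nodes = [
    {"name": "node-1", "free_cpu": 0.5, "free_mem": 256},
    {"name": "node-2", "free_cpu": 2.0, "free_mem": 4096},
]
pod = {"name": "web", "cpu": 1.0, "mem": 512}  # pod not yet assigned

print(schedule(pod, nodes))  # node-2 (node-1 lacks CPU and memory)
```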

Controller Manager is a non-terminating daemon that continuously maintains the state of the system. It is responsible for syncing the shared state of the cluster with the API server. It includes the replication controller, pod controller, service controller, and endpoints controller. We will understand controllers in depth in an upcoming blog.
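The idea behind a controller can be sketched as a reconciliation loop: compare the desired state with the observed state and act on the difference. A toy illustration in the spirit of the replication controller, tracking only replica counts:

```python
# Toy reconciliation step: compare the desired replica count with the
# observed one and compute the corrective action.
# Illustration only, not real Kubernetes code.

def reconcile(desired_replicas, running_pods):
    """Return the action needed to converge observed state to desired."""
    diff = desired_replicas - len(running_pods)
    if diff > 0:
        return f"create {diff} pod(s)"
    if diff < 0:
        return f"delete {-diff} pod(s)"
    return "in sync"

print(reconcile(3, ["pod-a"]))                    # create 2 pod(s)
print(reconcile(2, ["pod-a", "pod-b", "pod-c"]))  # delete 1 pod(s)
print(reconcile(1, ["pod-a"]))                    # in sync
```

The real controller manager runs loops like this continuously, which is why a crashed pod gets replaced without human intervention.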

Kubectl is a command-line tool for running commands against the Kubernetes cluster. We will explore it further in the next blog, which will explain the deployment of a microservice on Kubernetes.

A Node, also known as a minion, is a worker machine that receives workloads to execute and updates the state of the cluster. A node may be a VM or a physical machine whose job is to run pods, and it is managed by the master components. Here are the major components of a node.

A pod represents a group of containers, which may be Docker or rkt. All the containers in a pod share the same IP address and port space and can communicate over localhost or other IPC mechanisms. Containers in the same pod can also share the same storage volumes. The idea behind pods is to run a set of closely related containers that depend on each other.
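A minimal pod manifest makes this concrete: the two containers below share the pod's network namespace and a common volume. The names and image tags are illustrative only:

```yaml
# Illustrative two-container pod: both containers share the pod's IP
# and the "shared-data" volume. Names and images are examples only.
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  volumes:
    - name: shared-data
      emptyDir: {}
  containers:
    - name: web
      image: nginx:1.21
      ports:
        - containerPort: 80
      volumeMounts:
        - name: shared-data
          mountPath: /usr/share/nginx/html
    - name: content-sync
      image: busybox:1.35
      command: ["sh", "-c", "while true; do date > /data/index.html; sleep 5; done"]
      volumeMounts:
        - name: shared-data
          mountPath: /data
```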

Kubelet is the representative of a node. It handles communication with the master node components and manages the running pods. It is responsible for the following services:

  • Getting pod secrets from the API server
  • Running the pod's containers
  • Reporting the status of the node to the master
  • Keeping containers running gracefully
  • Mounting volumes for the containers

Kube-Proxy redirects traffic for a running application to the correct pod. Pods can communicate with each other using their IP addresses, but Kube-Proxy ensures that a pod's IP address is not directly exposed to the external environment; all traffic from external sources is redirected through it.
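In practice, traffic reaches pods through a Service, whose virtual IP is implemented by the rules Kube-Proxy programs on each node. A minimal sketch, with illustrative names and ports:

```yaml
# Illustrative Service: kube-proxy on each node programs the rules
# that forward traffic for this Service to the matching pods.
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  selector:
    app: web          # pods labelled app=web receive the traffic
  ports:
    - port: 80        # port exposed by the Service
      targetPort: 80  # port on the pod's containers
  type: NodePort      # reachable from outside the cluster on each node
```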

Hope you found the blog helpful. In the next blog, we will learn how to deploy a microservice using Kubernetes. Meanwhile, stay tuned and happy reading.

