Introduction to Kubernetes
Kubernetes is an open-source container orchestration platform, originally developed by Google, designed to automate the deployment, scaling, and management of containerized applications. It makes it easy to deploy and operate applications in a microservices architecture.
What is GKE?
Google Kubernetes Engine (GKE) is Google Cloud's managed service for running and orchestrating containers. The goal of GKE is to increase the productivity of DevOps and development teams by taking on the complexity of setting up the Kubernetes cluster, the overlay network, and the related infrastructure.
What is a Kubernetes cluster?
A Kubernetes cluster is a set of nodes that run containerized applications. Containerizing an application packages it together with its dependencies and the services it needs. In this way, Kubernetes clusters allow applications to be more easily developed, moved, and managed.
Kubernetes clusters are composed of one master node and a number of worker nodes.
The master node controls the state of the cluster and is the origin for all task assignments, with responsibilities that include:
- Scheduling and scaling applications
- Maintaining a cluster’s state
- Implementing updates
The worker nodes are the components that run these applications. Worker nodes perform tasks assigned by the master node.
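Once a cluster is running and kubectl is configured against it (covered later in this guide), you can list its nodes directly; the exact output depends on your cluster:

```shell
# List the nodes in the cluster along with their status, roles,
# Kubernetes version, and internal/external addresses.
kubectl get nodes -o wide
```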
A Kubernetes cluster contains six main components:
- API server: Exposes a REST interface to all Kubernetes resources. Serves as the front end of the Kubernetes control plane.
- Scheduler: Places containers according to resource requirements and metrics. Makes note of Pods with no assigned node, and selects nodes for them to run on.
- Controller Manager: Runs controller processes and reconciles the cluster’s actual state with its desired specifications. Manages controllers such as the node, endpoint, and replication controllers.
- Kubelet: Ensures that containers are running in a Pod by interacting with the container runtime (for example containerd; Docker was historically the default runtime for creating and managing containers). Takes a set of provided PodSpecs and ensures that their corresponding containers are running and healthy.
- Kube-proxy: Manages network connectivity and maintains network rules across nodes. Implements the Kubernetes Service concept across every node in a given cluster.
- Etcd: Stores all cluster data. Consistent and highly available Kubernetes backing store.
These six components can each run as Linux processes or as containers. The master node runs the API server, scheduler, controller manager, and etcd, and the worker nodes run the kubelet and kube-proxy.
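On a running cluster, you can see node-level components and cluster add-ons as Pods; note that on GKE the control-plane components themselves are managed by Google and are not visible as nodes in your project:

```shell
# Node-level components such as kube-proxy, plus add-ons like DNS,
# run as Pods in the kube-system namespace.
kubectl get pods --namespace kube-system
```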
Create a GKE cluster
Nodes are Compute Engine virtual machine (VM) instances that run the Kubernetes processes necessary to make them part of the cluster. You deploy applications to clusters, and the applications run on the nodes.
To create a cluster in GKE, you choose a mode of operation: Standard or Autopilot. In Standard mode, you manage the nodes yourself and can create zonal or regional clusters. In Autopilot mode, GKE manages the nodes for you, and the cluster is always regional.
Create a one-node Standard cluster named my-cluster:
gcloud container clusters create my-cluster --num-nodes=1
Create an Autopilot cluster named my-cluster:
gcloud container clusters create-auto my-cluster
Get authentication credentials for the cluster
After creating your cluster, you need to get authentication credentials to interact with the cluster:
gcloud container clusters get-credentials my-cluster
This command configures kubectl to use the cluster you created.
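To confirm that kubectl is now pointing at the new cluster, you can check the active context and verify the control plane is reachable (the output is cluster-specific):

```shell
# The current context should reference the GKE cluster you just created.
kubectl config current-context
# Confirm the Kubernetes control plane responds.
kubectl cluster-info
```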
Deploy an application to the cluster
For this, we will deploy an example web application called hello-server.
GKE uses Kubernetes objects to create and manage your cluster’s resources. Kubernetes provides the Deployment object for deploying stateless applications like web servers. Service objects define rules and load balancing for accessing your application from the internet.
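The imperative commands used below can also be expressed declaratively. As a sketch, here are minimal Deployment and Service manifests applied via a shell heredoc; the names and labels mirror what the imperative commands in this guide would generate:

```shell
# Apply a minimal Deployment and LoadBalancer Service declaratively.
# Equivalent in spirit to the imperative commands used later in this guide.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-server
  template:
    metadata:
      labels:
        app: hello-server
    spec:
      containers:
      - name: hello-app
        image: gcr.io/google-samples/hello-app:1.0
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: hello-server
spec:
  type: LoadBalancer
  selector:
    app: hello-server
  ports:
  - port: 80
    targetPort: 8080
EOF
```

Keeping these objects in version-controlled manifest files makes deployments repeatable, whereas the imperative commands below are quicker for experimenting.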
Create the Deployment
Install the gcloud CLI by following https://cloud.google.com/sdk/install, then install the kubectl component:
gcloud components install kubectl
After installing the Kubernetes client, set your default project ID and compute zone (substitute your own values):
gcloud config set project lateral-plane-327107
gcloud config set compute/zone us-east4-a
Now we need authentication credentials for our cluster.
gcloud container clusters get-credentials my-cluster
Our cluster is named my-cluster; replace it with your own cluster's name if it differs.
We will create a deployment named hello-server using the sample hello-app image (written in Go) at version tag 1.0. (You can run more Pods by adding the --replicas=3 flag to get three replicas.)
kubectl create deployment hello-server --image=gcr.io/google-samples/hello-app:1.0
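If you later want more replicas, the Deployment can also be scaled in place; three replicas here is just an example value:

```shell
# Scale the existing Deployment to three replicas.
kubectl scale deployment hello-server --replicas=3
# Wait for the new Pods to become ready.
kubectl rollout status deployment/hello-server
```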
Now we can expose our deployment with a LoadBalancer Service on port 80, making the app accessible from the internet. (Load balancers are billed according to Compute Engine's load balancer pricing.)
kubectl expose deployment hello-server --type LoadBalancer --port 80 --target-port 8080
We can now check the Pods. Since we did not pass --replicas, there is one Pod by default. Your Pod should be in the Running state, ready to serve:
kubectl get pods
NAME                            READY   STATUS    RESTARTS   AGE
hello-server-695b9c7f6b-8fqkg   1/1     Running   0          3m15s
Inspect the hello-server Service by using kubectl get service:
kubectl get service hello-server
NAME           TYPE           CLUSTER-IP        EXTERNAL-IP      PORT(S)        AGE
hello-server   LoadBalancer   100.124.239.252   220.127.116.11   80:30243/TCP   76s
hello-server is reachable at the address shown in the EXTERNAL-IP column (220.127.116.11 in this example output); your cluster will be assigned a different address.
View the application from your web browser by visiting http://EXTERNAL_IP, using the EXTERNAL-IP address with the exposed port 80.
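The same check can be done from the command line; EXTERNAL_IP below stands in for whatever address your Service was assigned:

```shell
# Substitute the EXTERNAL-IP reported by `kubectl get service hello-server`.
EXTERNAL_IP=220.127.116.11   # example value; yours will differ
curl "http://${EXTERNAL_IP}/"
```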
You have just deployed a containerized web application to Google Kubernetes Engine.