Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes builds upon a decade and a half of Google's experience running production workloads at scale, combined with best-of-breed ideas and practices from the community. Kubernetes services, support, and tools are widely available.
A few months back, we were asked to set up a Kubernetes cluster on on-premises VMs and deploy our microservices there. Since then, we have been managing multiple Kubernetes clusters for different environments. Here is the experience we want to share.
Setting up a Kubernetes Cluster
Kubernetes can run on various platforms: from your laptop, to VMs on a cloud provider, to a rack of bare-metal servers. There is a complete list of solutions available that can help you set up a Kubernetes cluster. After doing some research, we decided to go with Kubespray. Kubespray is a collection of Ansible playbooks that can be used to set up, scale, upgrade, and destroy a Kubernetes cluster. Kubespray can be used with AWS, GCE, Azure, OpenStack, vSphere, Packet (bare metal), Oracle Cloud Infrastructure (experimental), or bare metal.
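A typical Kubespray run looks roughly like the sketch below. The node IPs and the `mycluster` inventory name are placeholders; the exact repo layout and playbook names should be verified against the Kubespray version you check out.

```shell
# Sketch of a Kubespray cluster bring-up (paths reflect the upstream repo layout).
git clone https://github.com/kubernetes-sigs/kubespray.git
cd kubespray
pip install -r requirements.txt          # installs Ansible and other dependencies

# Start from the sample inventory and list your VM IPs (placeholders here).
cp -r inventory/sample inventory/mycluster
declare -a IPS=(10.0.0.1 10.0.0.2 10.0.0.3)
CONFIG_FILE=inventory/mycluster/hosts.yaml \
  python3 contrib/inventory_builder/inventory.py "${IPS[@]}"

# Run the main playbook against all nodes as root.
ansible-playbook -i inventory/mycluster/hosts.yaml \
  --become --become-user=root cluster.yml
```

The same repository also ships `scale.yml`, `upgrade-cluster.yml`, and `reset.yml` playbooks for scaling, upgrading, and tearing the cluster down.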
Interacting with a Kubernetes Cluster
One can interact with a cluster using any of the following three options:
- Kubectl – Kubernetes comes with a powerful command-line tool called kubectl. It can be used to deploy and manage applications on Kubernetes. Using kubectl, one can inspect cluster resources and create, delete, and update components.
- Web UI (Dashboard) – Dashboard is a web-based Kubernetes user interface. You can use Dashboard to deploy containerized applications to a Kubernetes cluster, troubleshoot your containerized application, and manage the cluster resources. You can use Dashboard to get an overview of applications running on your cluster, as well as for creating or modifying individual Kubernetes resources (such as Deployments, Jobs, DaemonSets, etc).
- Kubernetes REST API – Kubernetes is highly API-centric. The REST API is the fundamental fabric of Kubernetes: all operations, all communication between components, and all external user commands are REST API calls that the API server handles. Consequently, everything in the Kubernetes platform is treated as an API object and has a corresponding entry in the API. There are client libraries available for using the Kubernetes API from various programming languages, such as Go, Python, Java, etc.
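For day-to-day work, kubectl covers most of these interactions. A few everyday commands are sketched below; the `demo` deployment name and the nginx image are purely illustrative.

```shell
# Everyday kubectl usage (resource names here are illustrative).
kubectl get nodes                             # inspect cluster nodes
kubectl get pods --all-namespaces             # list pods across all namespaces
kubectl create deployment demo --image=nginx  # deploy an application
kubectl scale deployment demo --replicas=3    # scale it out
kubectl describe deployment demo              # troubleshoot a resource
kubectl delete deployment demo                # clean up
```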
Persistent Storage for Applications
When it comes to containers, managing storage is a distinct problem from managing compute. Kubernetes provides two API resources, PersistentVolume and PersistentVolumeClaim, which abstract the details of how storage is provided from how it is consumed. There are multiple types of PersistentVolume, which gives you the flexibility to use storage options such as GCEPersistentDisk, AWSElasticBlockStore, AzureFile, NFS, CephFS, local storage, etc. Here is the complete list of supported options. We have deployed multiple stateful applications over Kubernetes, such as Prometheus, MySQL, and Lagom services, and we use NFS as the PersistentVolume type to persist the data.
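A minimal NFS-backed PersistentVolume and a matching claim look roughly like this; the NFS server IP, export path, and size are assumptions to substitute with your own values.

```shell
# Sketch of an NFS PersistentVolume plus a claim that binds to it.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.0.0.10     # assumed NFS server IP
    path: /exports/data   # assumed export path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
EOF
```

A pod then references the claim (not the volume) in its `volumes` section, keeping the application manifest independent of how the storage is actually provided.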
Monitor the Kubernetes Cluster
Kubernetes provides an add-on service called kube-state-metrics that listens to the Kubernetes API server and generates metrics about the state of the objects. It is not focused on the health of the individual Kubernetes components, but rather on the health of the various objects inside the cluster, such as deployments, nodes, and pods. The metrics are served as plaintext on the HTTP endpoint /metrics on the listening port (default 8080). They are designed to be consumed either by Prometheus itself or by a scraper that is compatible with a Prometheus client endpoint. We have deployed Prometheus and Alertmanager on the Kubernetes cluster itself, and we have configured rules in Prometheus to generate alerts in case of unwanted events such as node loss, excessive resource consumption, etc.
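As an illustration, a node-loss alert built on a kube-state-metrics metric might look like the sketch below; the group name, threshold, and labels are assumptions, not our exact production rules.

```shell
# Write an example Prometheus alerting rule and sanity-check it with promtool.
cat > node-alerts.yml <<'EOF'
groups:
  - name: cluster-health
    rules:
      - alert: NodeNotReady
        # kube_node_status_condition is exported by kube-state-metrics
        expr: kube_node_status_condition{condition="Ready",status="true"} == 0
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "Node {{ $labels.node }} has been NotReady for 5 minutes"
EOF
promtool check rules node-alerts.yml
```

Once loaded into Prometheus, firing alerts are routed to Alertmanager, which handles deduplication and notification.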
Exposing services from a Kubernetes cluster
To expose a service outside the cluster, Kubernetes provides Service types such as LoadBalancer and NodePort. Since LoadBalancer is not supported on bare metal, our only option was NodePort. It exposes the service on each node's IP at a static port (the NodePort). You can reach a NodePort service from outside the cluster by requesting <NodeIP>:<NodePort>.
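A minimal NodePort Service is sketched below; the `app: demo` selector, container port, and chosen node port are illustrative assumptions.

```shell
# Sketch of a NodePort Service exposing pods labelled app=demo.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: demo-nodeport
spec:
  type: NodePort
  selector:
    app: demo            # assumed pod label
  ports:
    - port: 80           # Service port inside the cluster
      targetPort: 8080   # container port (assumed)
      nodePort: 30080    # must fall in the default 30000-32767 range
EOF
# The service is then reachable at http://<NodeIP>:30080 from outside the cluster.
```

If `nodePort` is omitted, Kubernetes picks a free port from the allowed range automatically.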