Clusters

Microservices is a software architecture style that advocates for many granular services, each performing a single business function. Each microservice is a self-contained, independently deployable piece of a larger application, and it interacts with other components, typically via well-defined REST APIs. Clusters are an important concept for implementing microservices more efficiently.

Adopting a microservices architecture of containerized applications paves the way for more efficient use of infrastructure: it enables close control of application runtime environments and the ability to scale automatically. However, one of the major tradeoffs in moving to a microservices-oriented architecture is the added complexity of managing a constantly evolving distributed system. Container orchestration systems were designed to reduce some of this operational overhead by abstracting away the underlying infrastructure and automating the deployment and scaling of containerized applications. Systems such as Kubernetes, Marathon on Apache Mesos, and Docker Swarm simplify the task of deploying and managing fleets of running containers by implementing some or all of the following core functionality:

  • Container Scheduling 
  • Load Balancing 
  • Service Discovery 
  • Cluster Networking
  • Health Checking and State Management
  • Autoscaling
  • Rolling Deployments
  • Declarative Configuration

Let’s look at these features in more detail.

Container Scheduling:

When deploying a container or a set of identical containers, a scheduler allocates the desired resources, such as CPU and memory, and assigns the containers to cluster member nodes that have those resources available. A scheduler may also implement more advanced functionality, such as prioritizing containers and spreading sets of identical containers across different members and regions for high availability.
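As an illustrative sketch of how this looks in Kubernetes (the pod name and image below are hypothetical), a manifest can declare the CPU and memory a container needs, and the scheduler then picks a node with that capacity:

```yaml
# Hypothetical pod manifest: the scheduler places this pod on a
# cluster node that can satisfy the declared resource requests.
apiVersion: v1
kind: Pod
metadata:
  name: api-server            # hypothetical name
spec:
  containers:
    - name: api
      image: example/api:1.0  # hypothetical image
      resources:
        requests:             # minimum resources, used for scheduling
          cpu: "500m"         # half a CPU core
          memory: "256Mi"
        limits:               # hard caps at runtime
          cpu: "1"
          memory: "512Mi"
```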

Load Balancing in Clusters:

Once deployed into a cluster, sets of running containers need a load-balancing component to distribute requests from both internal and external sources. This can be accomplished using a combination of cloud provider load balancers and load balancers internal to the container orchestration system.
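In Kubernetes, for example, a Service of type LoadBalancer asks the cloud provider for an external load balancer and spreads incoming requests across the matching pods. A minimal sketch (the name, label, and ports are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-lb          # hypothetical name
spec:
  type: LoadBalancer    # provisions a cloud load balancer where supported
  selector:
    app: web            # requests are balanced across pods with this label
  ports:
    - port: 80          # port exposed by the load balancer
      targetPort: 8080  # port the containers listen on
```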

Service Discovery:

Running containers and applications need some way of finding other apps deployed to the cluster. Service discovery exposes apps to one another and external clients in a clean and organized fashion using either DNS or some other mechanism, such as local environment variables.
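Kubernetes, for instance, gives every Service a stable in-cluster DNS name of the form <service>.<namespace>.svc.cluster.local. A sketch with hypothetical names:

```yaml
# Pods in the cluster can reach this service at
# orders.shop.svc.cluster.local (or simply "orders" from within
# the same namespace), however many pods back it.
apiVersion: v1
kind: Service
metadata:
  name: orders          # hypothetical service name
  namespace: shop       # hypothetical namespace
spec:
  selector:
    app: orders         # resolves to pods carrying this label
  ports:
    - port: 80
      targetPort: 3000
```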

Cluster Networking:

Clusters also need to connect running applications and containers to one another across machines, managing IP addresses and the assignment of network addresses to cluster members and containers. Networking implementations vary across container cluster projects: Docker Swarm bakes a set of networking features directly into the cluster, while Kubernetes imposes only a minimal set of requirements on any networking implementation, allowing administrators to roll out their own overlay network solutions.

Health Checking and State Management:

A core feature implemented by Cloud Native applications is health reporting, usually via a REST endpoint. This allows orchestrators to reliably check the state of running applications and only direct traffic towards those that are healthy. Using such endpoints, orchestrators also repeatedly probe running apps and containers for “liveness” and self-heal by restarting those that are unresponsive.
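A sketch of both kinds of probe in a Kubernetes pod spec (the image and the /healthz and /ready paths are assumptions about the application):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: api-server              # hypothetical name
spec:
  containers:
    - name: api
      image: example/api:1.0    # hypothetical image
      livenessProbe:            # restart the container if this keeps failing
        httpGet:
          path: /healthz
          port: 8080
        initialDelaySeconds: 10
        periodSeconds: 5
      readinessProbe:           # withhold traffic while this fails
        httpGet:
          path: /ready
          port: 8080
        periodSeconds: 5
```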

Autoscaling in Clusters:

As load increases on a given application, more containers should be deployed to match the growth in demand. Container orchestrators handle scaling applications by monitoring standard metrics, such as CPU or memory use, as well as user-defined telemetry data, and increasing or decreasing the number of running containers accordingly. Some orchestration systems also provide features for scaling the cluster itself, adding cluster members when the number of scheduled containers exceeds the available resources. These systems can likewise monitor the utilization of cluster members and scale the cluster down accordingly, rescheduling running containers onto the remaining members.
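Kubernetes exposes this through the HorizontalPodAutoscaler. A minimal sketch that scales a hypothetical Deployment named web on CPU utilization:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa        # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web          # hypothetical Deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add pods above ~70% average CPU use
```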

Rolling Deployments:

Container orchestration systems also implement functionality to perform zero-downtime deploys. A system can roll out a newer version of an application container incrementally, deploying one container at a time, monitoring its health using the probing features described above, and then killing the old one. They can also perform blue-green deploys, in which two versions of the application run simultaneously and traffic is cut over to the new version once it has stabilized. This also allows for quick and painless rollbacks, as well as pausing and resuming deployments as they are carried out.
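In a Kubernetes Deployment, for example, this rollout behavior is controlled by the update strategy. A sketch with hypothetical names, where changing the image tag triggers a one-pod-at-a-time rolling update:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                  # hypothetical name
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1      # take down at most one old pod at a time
      maxSurge: 1            # start at most one extra new pod during rollout
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: example/web:2.0   # changing this tag triggers the rollout
```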

Declarative Configuration:

Another core feature of some container orchestration systems is deployment via declarative configuration files. The user “declares” the desired state for a given application (for example, four running containers of an NGINX web server), and the system takes care of achieving that state by launching containers on the appropriate members or killing running containers. This declarative model enables the review, testing, and version control of deployment and infrastructure changes. In addition, rolling back an application version can be as simple as deploying the previous configuration file. In contrast, imperative configuration requires developers to explicitly define and manually execute a series of actions to bring about the desired cluster state, which can be error-prone and makes rollbacks difficult.
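The NGINX example above, sketched as a Kubernetes Deployment manifest (the version tag is an assumption):

```yaml
# Declares the desired state: four replicas of an NGINX web server.
# Applying this file (e.g. kubectl apply -f nginx.yaml) makes the
# cluster converge on that state; applying an older revision of the
# file rolls the application back.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 4
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.25   # hypothetical version tag
```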

Clusters in Kubernetes:

Open-source container clusters and their managed equivalents have evolved and gradually taken on large-scale production workloads. Kubernetes and its expanding ecosystem of Cloud Native projects have become the platform of choice for managing and scheduling containers. By implementing all of the features described above, Kubernetes empowers developers to scale alongside their success. The managed Kubernetes offerings provide them with even greater flexibility while minimizing DevOps administration time and software operations costs.

A Kubernetes cluster is a set of node machines for running containerized applications. A Kubernetes cluster has a desired state, which defines which applications or other workloads should be running, along with which images they use, which resources should be made available for them, and other such configuration details. A desired state is defined by configuration files made up of manifests, which are JSON or YAML files that declare the type of application to run and how many replicas are required to run a healthy system.

Organizations that want to use Kubernetes at scale or in production typically run multiple clusters, such as separate clusters for development, testing, and production, distributed across environments, and they need to be able to manage these clusters effectively.


Written by 

Vidushi Bansal is a Software Consultant [DevOps] at Knoldus Inc. She is passionate about learning and exploring new technologies.
