Getting started with Amazon EKS

Reading Time: 4 minutes

This guide helps you create all of the resources required to get started with Amazon Elastic Kubernetes Service (Amazon EKS) using eksctl.

It’s a simple command-line utility for creating and managing Kubernetes clusters on Amazon EKS. At the end of this tutorial, you will have a working Amazon EKS cluster to which you can deploy applications.
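
For example, a single eksctl command can create a cluster together with a managed node group. The cluster name, Region, and node settings below are placeholders rather than values from this guide, so adjust them for your environment:

  # Create an EKS cluster with a managed node group (names and sizes are examples)
  eksctl create cluster \
    --name my-cluster \
    --region us-west-2 \
    --nodegroup-name my-nodes \
    --node-type t3.medium \
    --nodes 2

Behind the scenes, eksctl provisions these resources through AWS CloudFormation stacks.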

The steps in this guide create several resources for you automatically that you would otherwise have to create manually when you create your cluster using the AWS Management Console.

If you’d rather create most of these resources manually to better understand how they interact with each other, then use the AWS Management Console to create your cluster and compute.

Prerequisites

Before beginning this tutorial, you must install and configure the following tools and resources that you need to create and manage an Amazon EKS cluster; a few quick checks to verify your setup follow this list.

  • kubectl – A command-line tool for working with Kubernetes clusters.
  • eksctl – A command-line tool for working with EKS clusters that automates many individual tasks. 
  • Required IAM permissions – The IAM security principal that you’re using must have permissions to work with Amazon EKS IAM roles and service-linked roles, AWS CloudFormation, and a VPC and related resources.
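
As a quick sanity check (assuming the AWS CLI is also installed and configured), you can confirm that the tools are available and that your credentials resolve to the IAM principal you expect:

  # Verify the client tools and the identity that AWS calls will use
  kubectl version --client
  eksctl version
  aws sts get-caller-identity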

Amazon EKS clusters

An Amazon EKS cluster consists of two primary components:

  • The Amazon EKS control plane
  • Amazon EKS nodes that are registered with the control plane

The Amazon EKS control plane consists of control plane nodes that run the Kubernetes software, such as etcd and the Kubernetes API server. The control plane runs in an account managed by AWS. The Kubernetes API is exposed via the Amazon EKS endpoint associated with your cluster. Each Amazon EKS cluster control plane is single-tenant and unique, and runs on its own set of Amazon EC2 instances.

All of the data stored by the etcd nodes and associated Amazon EBS volumes is encrypted using AWS KMS. The cluster control plane is provisioned across multiple Availability Zones and fronted by an Elastic Load Balancing Network Load Balancer. Amazon EKS also provisions elastic network interfaces in your VPC subnets to provide connectivity from the control plane instances to the nodes.

The nodes run in your AWS account and connect to your cluster’s control plane via the API server endpoint.
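
To see this in practice, you can look up the API server endpoint for a cluster and confirm that nodes have registered with it. The cluster name below is a placeholder:

  # Print the API server endpoint for the cluster
  aws eks describe-cluster --name my-cluster --query "cluster.endpoint" --output text

  # Update your local kubeconfig, then list the nodes registered with the control plane
  aws eks update-kubeconfig --name my-cluster
  kubectl get nodes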

Amazon EKS – Updating a cluster

New Kubernetes versions often introduce significant changes.

Therefore, we recommend that you test the behaviour of your applications against a new Kubernetes version before you update your production clusters. You can accomplish this by building a continuous integration workflow to test your application behaviour before moving to a new Kubernetes version.

The update process consists of Amazon EKS launching new API server nodes with the updated Kubernetes version to replace the existing ones. It performs standard infrastructure and readiness health checks for network traffic on these new nodes to verify that they’re working as expected. If any of these checks fail, Amazon EKS reverts the infrastructure deployment, and your cluster remains on the previous Kubernetes version. Running applications aren’t affected, and your cluster is never left in a non-deterministic or unrecoverable state. Amazon EKS regularly backs up all managed clusters, and mechanisms exist to recover clusters if necessary. We are constantly evaluating and improving our Kubernetes infrastructure management processes.
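
Once you’re comfortable that your applications behave correctly on the new version, the update can be started with a command along these lines. The cluster name and target version are placeholders, and Amazon EKS moves one Kubernetes minor version at a time:

  # Upgrade the control plane by one minor version (eksctl plans only, unless --approve is passed)
  eksctl upgrade cluster --name my-cluster --version 1.29 --approve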

To update the cluster, Amazon EKS requires two to three free IP addresses from the subnets that were provided when you created the cluster. If these subnets don’t have available IP addresses, the update can fail.

Additionally, if any of the subnets or security groups that were provided during cluster creation have been deleted, the cluster update process can fail.
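
One way to check this ahead of time is to query the free IP address count in the cluster’s subnets; the subnet IDs below are placeholders for the ones you supplied at cluster creation:

  # Show the number of available IP addresses in each cluster subnet
  aws ec2 describe-subnets \
    --subnet-ids subnet-0abc1234 subnet-0def5678 \
    --query "Subnets[].{ID:SubnetId,FreeIPs:AvailableIpAddressCount}" \
    --output table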

Amazon EKS – Cluster Autoscaler

The Kubernetes Cluster Autoscaler automatically adjusts the number of nodes in your cluster when pods fail to schedule or are rescheduled onto other nodes. The Cluster Autoscaler is typically installed as a Deployment in your cluster. It uses leader election to ensure high availability, but scaling is done by only one replica at a time.

Before you deploy the Cluster Autoscaler, make sure that you’re familiar with how Kubernetes concepts interact with AWS features. The following terms are used throughout this topic:

  • Kubernetes Cluster Autoscaler – A core component of the Kubernetes control plane that makes scheduling and scaling decisions.
  • AWS Cloud provider implementation – An extension of the Kubernetes Cluster Autoscaler that implements its decisions by communicating with AWS products and services such as Amazon EC2.
  • Node groups – A Kubernetes concept for a group of nodes within a cluster. Node groups aren’t a true Kubernetes resource, but they exist as an abstraction in the Cluster Autoscaler, Cluster API, and other components. Nodes that are found within a single node group might share several common properties such as labels and taints. However, they can still consist of more than one Availability Zone or instance type.
  • Amazon EC2 Auto Scaling groups – A feature of AWS that’s used by the Cluster Autoscaler. Auto Scaling groups are suitable for a large number of use cases. They are configured to launch instances that automatically join their Kubernetes cluster, and they apply labels and taints to the corresponding node resource in the Kubernetes API.

For reference, managed node groups are managed using Amazon EC2 Auto Scaling groups, and are compatible with the Cluster Autoscaler.
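
For example, a managed node group that the Cluster Autoscaler can grow and shrink might be created with something like the following. The names and sizes are illustrative, and --asg-access attaches the IAM policy the autoscaler needs to manage the underlying Auto Scaling group:

  # Create a managed node group with scaling bounds for the Cluster Autoscaler
  eksctl create nodegroup \
    --cluster my-cluster \
    --name autoscaling-nodes \
    --node-type t3.medium \
    --nodes 2 --nodes-min 1 --nodes-max 5 \
    --managed \
    --asg-access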

This topic details how you can deploy the Cluster Autoscaler to your Amazon EKS cluster and configure it to modify your Amazon EC2 Auto Scaling groups.
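
At a high level, once the prerequisites below are met, the deployment boils down to applying the autoscaler manifest and pointing it at your cluster. The manifest URL and deployment name below are the commonly used ones from the upstream autoscaler examples, but treat them as an assumption and check the Cluster Autoscaler documentation for your Kubernetes version:

  # Deploy the Cluster Autoscaler with Auto Scaling group auto-discovery
  kubectl apply -f https://raw.githubusercontent.com/kubernetes/autoscaler/master/cluster-autoscaler/cloudprovider/aws/examples/cluster-autoscaler-autodiscover.yaml

  # Edit the Deployment and replace the cluster-name placeholder in the
  # --node-group-auto-discovery argument with your own cluster name
  kubectl -n kube-system edit deployment cluster-autoscaler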

Prerequisites

Before deploying the Cluster Autoscaler, you must meet the following prerequisites:

  • Have an existing Amazon EKS cluster – If you don’t have a cluster, create one first (for example, with eksctl as described earlier in this guide).
  • An existing IAM OIDC provider for your cluster – Determine whether you already have one, or create one if you don’t.
  • Node groups with Auto Scaling group tags – The Cluster Autoscaler requires tags on your Auto Scaling groups so that they can be auto-discovered (see the sketch after this list).
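
The tags in question are k8s.io/cluster-autoscaler/enabled and k8s.io/cluster-autoscaler/<cluster-name>. Node groups created with eksctl typically get them automatically; for a self-managed Auto Scaling group you can add them yourself, roughly as follows (the group and cluster names are placeholders):

  # Tag a self-managed Auto Scaling group so the Cluster Autoscaler can discover it
  aws autoscaling create-or-update-tags --tags \
    "ResourceId=my-asg,ResourceType=auto-scaling-group,Key=k8s.io/cluster-autoscaler/enabled,Value=true,PropagateAtLaunch=true" \
    "ResourceId=my-asg,ResourceType=auto-scaling-group,Key=k8s.io/cluster-autoscaler/my-cluster,Value=owned,PropagateAtLaunch=true"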