Hello readers! In this blog we are going to learn how to autoscale EKS instance groups using the Kubernetes Cluster Autoscaler. We will use both managed and unmanaged EKS node groups.
Before starting, let us first see what Amazon Elastic Kubernetes Service (Amazon EKS) is. If you want to know more about EKS, you can follow my blog: https://blog.knoldus.com/how-to-deploy-kubernetes-cluster-on-amazon-eks/
Amazon Elastic Kubernetes Service (Amazon EKS)
Amazon EKS is a managed service that lets you run Kubernetes on AWS without installing and operating your own Kubernetes control plane. At the end of this tutorial, you will have a working Amazon EKS cluster to which you can deploy applications.
Amazon EKS clusters
An Amazon EKS cluster consists of two primary components:
- The Amazon EKS control plane.
- Amazon EKS nodes that are enrolled with the control plane.
Before starting, you must install and configure the following tools for the Amazon EKS cluster:
- kubectl – A command-line tool for working with Kubernetes clusters.
- eksctl – A command-line tool for working with EKS clusters that automates many individual tasks.
Amazon EKS – Cluster Autoscaler
The Kubernetes Cluster Autoscaler automatically adjusts the number of nodes in your cluster: it adds nodes when pods fail to schedule due to a lack of resources, and removes nodes when they are underutilized and their pods can be rescheduled onto other nodes. The Cluster Autoscaler is typically deployed as a Deployment in your cluster.
It uses leader election to ensure high availability, but scaling decisions are made by only one replica at a time. If you want to explore more, you can go through this link: https://blog.knoldus.com/getting-started-with-amazon-eks/
Prerequisites
Before deploying the Cluster Autoscaler, you must meet the following prerequisites:
- An existing Amazon EKS cluster. If you don’t have one, we will create it below.
- An existing IAM OIDC provider for your cluster, so that the autoscaler’s service account can assume an IAM role.
- Node groups with Auto Scaling group tags. The Cluster Autoscaler requires the tags k8s.io/cluster-autoscaler/enabled and k8s.io/cluster-autoscaler/<cluster-name> on your Auto Scaling groups so that they can be auto-discovered.
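For reference, this is a sketch of how you could add these tags yourself with the AWS CLI. Note that eksctl normally adds them for you, and the Auto Scaling group name `my-asg` and the `<cluster-name>` placeholder are hypothetical values you would replace:

```shell
# Tag an Auto Scaling group so the Cluster Autoscaler can auto-discover it.
# Replace my-asg and <cluster-name> with your own values.
aws autoscaling create-or-update-tags --tags \
  "ResourceId=my-asg,ResourceType=auto-scaling-group,Key=k8s.io/cluster-autoscaler/enabled,Value=true,PropagateAtLaunch=false" \
  "ResourceId=my-asg,ResourceType=auto-scaling-group,Key=k8s.io/cluster-autoscaler/<cluster-name>,Value=owned,PropagateAtLaunch=false"
```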
Let’s get started !
First of all we need to create the EKS cluster. I’m going to use eksctl, but you can use Terraform or the AWS console to create the cluster instead. First we will create a directory, and inside it a YAML file; I’m going to name it eks.yaml.
---
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: eks-name
  region: us-east-1
  version: "1.20"
availabilityZones:
  - us-east-1a
  - us-east-1b
managedNodeGroups:
  - name: managed-nodes
    labels:
      role: managed-nodes
    instanceType: t3.medium
    minSize: 1
    maxSize: 10
    desiredCapacity: 1
    volumeSize: 20
nodeGroups:
  - name: unmanaged-nodes
    labels:
      role: unmanaged-nodes
    instanceType: t3.medium
    minSize: 1
    maxSize: 10
    desiredCapacity: 1
    volumeSize: 20
We are going to use both managed and unmanaged node groups, and the autoscaler can work with both of them. However, AWS recommends that you use managed node groups: when scaling down, a managed node group can gracefully drain the node and reschedule its pods onto different nodes, whereas an unmanaged node group will simply terminate the node.
Let’s create the cluster using the command below; it usually takes about 15-20 minutes.
eksctl create cluster -f eks.yaml
Once the cluster is ready, we can verify the connection with this command:
kubectl get svc
As mentioned above, there are a few requirements before you can deploy and use the autoscaler. When you create the Kubernetes cluster and its instance groups, eksctl also creates the corresponding Auto Scaling groups in AWS.
Let’s first go to the AWS Management Console and click on EKS. Select the cluster; as you can see, this is our cluster, named eks-name, along with its version.

Then go to the EC2 Dashboard and search for Auto Scaling Groups. You will find two Auto Scaling groups: the first one is for the managed nodes and the second one for the unmanaged nodes.



If we click on one of them and scroll all the way down, there are two tags that have to be present before you can use the autoscaler.
The second requirement is to create the OpenID Connect provider. To do that, let’s go back to the EKS cluster; we need to copy the OpenID Connect URL. Go to the Configuration tab and copy the OpenID Connect provider URL, then navigate to IAM and go to Identity providers.



Click on Add provider, select OpenID Connect as the provider type, paste our URL, and click Get thumbprint.



For the audience, paste the default value sts.amazonaws.com, then create the provider.
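If you prefer the command line, the same OIDC setup can be done with the AWS CLI and eksctl. This is a sketch, assuming the cluster name eks-name from eks.yaml:

```shell
# Print the cluster's OIDC issuer URL (the same URL shown on the Configuration tab)
aws eks describe-cluster --name eks-name \
  --query "cluster.identity.oidc.issuer" --output text

# Create and associate the IAM OIDC provider in one step
eksctl utils associate-iam-oidc-provider --cluster eks-name --approve
```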



Now, let’s create the policy that we are going to use for the autoscaler. Go to Policies, click Create policy, switch to the JSON tab, and paste the policy, which you can find at this link: https://github.com/antonputra/tutorials/tree/main/lessons/070
This is the policy that will allow our autoscaler to adjust the number of nodes. Click Next, then Next again, give it the name EKSClusterAutoscalerPolicy, and click Create policy.
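For reference, the permissions the Cluster Autoscaler needs look like the following. This is based on the upstream cluster-autoscaler AWS documentation; verify it against the policy in the link above:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "autoscaling:DescribeAutoScalingGroups",
        "autoscaling:DescribeAutoScalingInstances",
        "autoscaling:DescribeLaunchConfigurations",
        "autoscaling:DescribeTags",
        "autoscaling:SetDesiredCapacity",
        "autoscaling:TerminateInstanceInAutoScalingGroup",
        "ec2:DescribeLaunchTemplateVersions"
      ],
      "Resource": "*"
    }
  ]
}
```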



The second step is to create the role. Go to Roles, click Create role, and choose Web identity. Select the OpenID Connect provider that we just created, and for the audience select the default one, then click Next: Permissions. Attach the autoscaler policy we just created, click Next, Next, give it the name EKSClusterAutoScalerPolicy, and click Create role.
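As an alternative to the console steps, eksctl can create the IAM role and the Kubernetes service account together. This is a sketch using the account ID and policy name from this post; substitute your own values. If you use this route, the autoscaler manifest below would not need its own ServiceAccount section:

```shell
eksctl create iamserviceaccount \
  --cluster eks-name \
  --namespace kube-system \
  --name cluster-autoscaler \
  --attach-policy-arn arn:aws:iam::940583193868:policy/EKSClusterAutoscalerPolicy \
  --approve
```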






Now let’s deploy the autoscaler. Create a folder named k8s, and inside it create the first file for the autoscaler, named 0-cluster-autoscaler.yaml. First we create the service account: we add the eks.amazonaws.com/role-arn annotation and set its value to the ARN of the role we created.
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-addon: cluster-autoscaler.addons.k8s.io
    k8s-app: cluster-autoscaler
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::940583193868:role/EKSClusterAutoScalerPolicy
  name: cluster-autoscaler
  namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cluster-autoscaler
  namespace: kube-system
  labels:
    app: cluster-autoscaler
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cluster-autoscaler
  template:
    metadata:
      labels:
        app: cluster-autoscaler
      annotations:
        cluster-autoscaler.kubernetes.io/safe-to-evict: 'false'
    spec:
      serviceAccountName: cluster-autoscaler
      containers:
        - image: k8s.gcr.io/autoscaling/cluster-autoscaler:v1.20.0
          name: cluster-autoscaler
          resources:
            limits:
              cpu: 100m
              memory: 300Mi
            requests:
              cpu: 100m
              memory: 300Mi
          # https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/aws/README.md
          command:
            - ./cluster-autoscaler
            - --v=4
            - --stderrthreshold=info
            - --cloud-provider=aws
            - --skip-nodes-with-local-storage=false
            - --expander=least-waste
            - --node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/eks-name
            - --balance-similar-node-groups
            - --skip-nodes-with-system-pods=false
          volumeMounts:
            - name: ssl-certs
              mountPath: /etc/ssl/certs/ca-certificates.crt # /etc/ssl/certs/ca-bundle.crt for Amazon Linux worker nodes
              readOnly: true
          imagePullPolicy: "Always"
      volumes:
        - name: ssl-certs
          hostPath:
            path: "/etc/ssl/certs/ca-bundle.crt"
Now we will deploy it, which creates all of the resources:
kubectl apply -f k8s/0-cluster-autoscaler.yaml



You can check the logs by using this command:
kubectl logs -l app=cluster-autoscaler -n kube-system -f



You will find a lot of useful information in the logs, and if you want to debug it is useful to read all of the output. As the next step, watch will simply repeat this command every two seconds:
watch kubectl get pods



So now we have four pods, and as expected two pods are in the Running state and two pods are in the Pending state, because we don’t have enough nodes yet; the autoscaler will then add nodes. You can find all of this information in the autoscaler output. If you want to expand your cluster further you can do so, but right now I’m not going to increase the number of nodes.
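The pending pods come from whatever workload you deploy. As a hypothetical example (not part of the original setup), a deployment like this, with resource requests larger than the free capacity of the current nodes, will leave some replicas in Pending and trigger a scale-up:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: scale-test
spec:
  replicas: 4
  selector:
    matchLabels:
      app: scale-test
  template:
    metadata:
      labels:
        app: scale-test
    spec:
      containers:
        - name: nginx
          image: nginx:1.21
          resources:
            requests:
              cpu: 500m      # large requests so a t3.medium fills up quickly
              memory: 512Mi
```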
watch kubectl get nodes



Now we have two instances: one for the managed group and a second one for the unmanaged group.
So the autoscaler is able to set the appropriate desired state (desired capacity) for each instance group; you can see here that min=1, max=10, and desired capacity=1. The autoscaler can not only increase the number of instances, it can also decrease them.
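You can also check these values from the CLI instead of the console. This is a sketch; the name filter assumes the Auto Scaling group names that eksctl generated for this cluster contain eks-name:

```shell
aws autoscaling describe-auto-scaling-groups \
  --query "AutoScalingGroups[?contains(AutoScalingGroupName, 'eks-name')].{Name:AutoScalingGroupName,Min:MinSize,Max:MaxSize,Desired:DesiredCapacity}" \
  --output table
```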
And at last, if you want to delete your cluster, you can use the command below:
eksctl delete cluster -f eks.yaml



Conclusion:
In this blog we saw, in a few easy steps, how to autoscale EKS instance groups using the Kubernetes Cluster Autoscaler, using both managed and unmanaged EKS node groups. You can also change the configuration as per your requirements.
Hey, readers! Thank you for sticking around till the end. If you have any questions or feedback regarding this blog, I am reachable at gayatri.singh@knoldus.com.