Hello Readers, I hope you are all doing well. Today we will learn how to deploy an EKS cluster on AWS using Terraform.
What is EKS?
Amazon Elastic Kubernetes Service (Amazon EKS) is a managed service that you can use to run Kubernetes on AWS without installing, operating, and maintaining your own Kubernetes control plane or worker nodes. Kubernetes is an open-source system for automating the deployment, scaling, and management of containerized applications.
What is Terraform?
Terraform is a free and open-source infrastructure-as-code (IaC) tool that automates the provisioning and management of remote infrastructure.
In this blog, I will create an AWS EKS cluster with the help of Terraform scripts.
Prerequisites for this blog
Basic working knowledge of AWS, Terraform, and Kubernetes, along with an AWS account and the Terraform CLI installed.
Steps
I have divided this process into the steps below.
Now, let's start writing the Terraform scripts for EKS.
Step 1:-
Here, we create a vars.tf file to hold the input variables for the AWS credentials (replace the placeholders with your own keys, and avoid committing real credentials to version control):
variable "access_key" {
  default = "<Access-Key>"
}

variable "secret_key" {
  default = "<Secret-Key>"
}
Step 2:-
Next, we will create a main.tf file for the AWS provider configuration:
provider "aws" {
  region     = "ap-south-1"
  access_key = var.access_key
  secret_key = var.secret_key
}

data "aws_availability_zones" "aws_zones" {
  state = "available"
}
- The data source "aws_availability_zones" "aws_zones" returns the list of availability zones for the region we defined in the provider block above.
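To inspect what this data source returns, one option (an illustrative sketch, not part of the original setup) is a temporary output:

```hcl
# Optional: surface the zone names after `terraform apply`
# so the data source can be inspected. Illustrative only.
output "availability_zones" {
  value = data.aws_availability_zones.aws_zones.names
}
```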
Step 3:-
Now we will create a vpc.tf file for the AWS VPC:
locals {
  cluster_name = "EKS-Cluster"
}

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "3.2.0"

  name                 = "VPC"
  cidr                 = "10.0.0.0/16"
  azs                  = data.aws_availability_zones.aws_zones.names
  private_subnets      = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
  public_subnets       = ["10.0.4.0/24", "10.0.5.0/24", "10.0.6.0/24"]
  enable_nat_gateway   = true
  single_nat_gateway   = true
  enable_dns_hostnames = true

  tags = {
    "Name" = "VPC"
  }

  public_subnet_tags = {
    "Name" = "sample-Public-Subnet"
  }

  private_subnet_tags = {
    "Name" = "sample-Private-Subnet"
  }
}
- In the above code, the VPC has 3 public and 3 private subnets, spread across the availability zones returned by the data source.
- The above code creates an AWS VPC with the 10.0.0.0/16 CIDR range in the region configured in the provider block (ap-south-1).
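One note worth adding: if you later expose Kubernetes services through AWS load balancers, EKS discovers subnets via special tags. A hedged sketch of the extra tags (assuming the cluster name EKS-Cluster from the locals above), to be merged into the module "vpc" block alongside the Name tags:

```hcl
  # Assumed extra tags for EKS load balancer subnet discovery.
  public_subnet_tags = {
    "kubernetes.io/cluster/EKS-Cluster" = "shared"
    "kubernetes.io/role/elb"            = "1"
  }

  private_subnet_tags = {
    "kubernetes.io/cluster/EKS-Cluster" = "shared"
    "kubernetes.io/role/internal-elb"   = "1"
  }
```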
Step 4:-
Now, we will create a security.tf file for the AWS security groups:
resource "aws_security_group" "worker_group_one" {
  name_prefix = "worker_group_one"
  vpc_id      = module.vpc.vpc_id

  ingress {
    from_port = 22
    to_port   = 22
    protocol  = "tcp"
    cidr_blocks = [
      "10.0.0.0/8"
    ]
  }
}

resource "aws_security_group" "worker_group_two" {
  name_prefix = "worker_group_two"
  vpc_id      = module.vpc.vpc_id

  ingress {
    from_port = 22
    to_port   = 22
    protocol  = "tcp"
    cidr_blocks = [
      "10.0.0.0/8"
    ]
  }
}
- We have defined one security group for each of the 2 worker node groups in the above code.
- Port 22 is opened for SSH, but only from the 10.0.0.0/8 range, so SSH access is limited to traffic originating inside the private network.
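One caveat: unlike the AWS console, Terraform's aws_security_group resource does not create the implicit allow-all egress rule. If either group ever needs to originate outbound traffic on its own, an explicit egress block must be added. A minimal sketch:

```hcl
  # Assumed egress rule; add inside a security group block only if
  # that group needs outbound access (Terraform omits the console's
  # default allow-all egress rule).
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"          # all protocols
    cidr_blocks = ["0.0.0.0/0"] # all destinations
  }
```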
Step 5:-
Create an eks.tf file for the EKS cluster:
module "eks" {
  source          = "terraform-aws-modules/eks/aws"
  version         = "17.1.0"
  cluster_name    = local.cluster_name
  cluster_version = "1.20"
  subnets         = module.vpc.private_subnets
  vpc_id          = module.vpc.vpc_id

  tags = {
    Name = "EKS-Cluster"
  }

  workers_group_defaults = {
    root_volume_type = "gp2"
  }

  worker_groups = [
    {
      name                          = "First-Group"
      instance_type                 = "t2.micro"
      asg_desired_capacity          = 2
      additional_security_group_ids = [aws_security_group.worker_group_one.id]
    },
    {
      name                          = "Second-Group"
      instance_type                 = "t2.micro"
      asg_desired_capacity          = 1
      additional_security_group_ids = [aws_security_group.worker_group_two.id]
    },
  ]
}

data "aws_eks_cluster" "cluster" {
  name = module.eks.cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
  name = module.eks.cluster_id
}
- Here, we are using the community Terraform module for AWS EKS.
- The above code creates 2 worker groups with a combined desired capacity of 3 t2.micro instances: 2 in the first group and 1 in the second.
Step 6:-
Create a kubernetes.tf file for the Terraform Kubernetes provider:
provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  token                  = data.aws_eks_cluster_auth.cluster.token
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)
}
- In the above code, the endpoint of the newly created cluster is used as the host, and the token from the aws_eks_cluster_auth data source as the authentication token.
- The cluster_ca_certificate is decoded from the cluster's certificate authority data.
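With the provider configured, Terraform can also manage objects inside the cluster itself. As a minimal, hypothetical illustration (the namespace name demo is an assumed placeholder, not part of the original setup):

```hcl
# Hypothetical example: create a namespace through the Kubernetes
# provider configured above. The name "demo" is an assumed placeholder.
resource "kubernetes_namespace" "demo" {
  metadata {
    name = "demo"
  }
}
```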
Step 7:-
Create an outputs.tf file and add the content below to it:
output "cluster_id" {
  value = module.eks.cluster_id
}

output "cluster_endpoint" {
  value = module.eks.cluster_endpoint
}
- The above code outputs the ID of our cluster and exposes the endpoint of our cluster after apply.
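After a successful apply (Step 10 below), these values can also be read back from the state at any time. A quick sketch:

```shell
# Print a single output value from the Terraform state
terraform output cluster_endpoint
```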
Step 8:-
Run the command below to initialize the working directory:
terraform init
- We must run this command in the directory containing the .tf files. It downloads all the providers and modules Terraform requires.
Step 9:-
Run the plan command:
terraform plan
- This command shows the execution plan: which resources Terraform will create, change, or destroy.
Step 10:-
Now, we are on the last step. Run apply to create all the resources:
terraform apply -auto-approve
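Once the apply finishes (EKS cluster creation can take several minutes), a quick way to confirm the cluster is reachable, assuming the AWS CLI and kubectl are installed and using the region and cluster name from the scripts above:

```shell
# Point kubectl at the new cluster (region/name match the scripts above)
aws eks update-kubeconfig --region ap-south-1 --name EKS-Cluster

# The worker nodes should report a Ready status
kubectl get nodes
```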
Step 11:-
We can now verify each resource we created in the AWS console:
1. EKS Cluster
2. VPC & other resources
3. Subnets
4. Security Groups
5. IAM Roles
6. Auto Scaling Groups
7. EC2 Instances
Conclusion
In this blog, we have learned how to deploy an EKS cluster using Terraform. If you found this blog helpful and would like to read more, do like it and share it with your friends.