This blog describes how to create an AWS EKS cluster with CloudFormation. We will see how to set this up through AWS CloudFormation with a working example.
Introduction

Amazon Elastic Kubernetes Service (EKS) is an AWS managed service for running Kubernetes, in which the control plane and nodes are maintained by AWS. Like other managed AWS services, the Kubernetes resources are fully managed by AWS, which reduces the maintenance overhead for developers. AWS also ensures that these resources are highly available and reliable.

Next we will discuss AWS CloudFormation, an infrastructure-automation or Infrastructure-as-Code (IaC) tool provided by AWS that can automate the setup and deployment of various Infrastructure-as-a-Service (IaaS) offerings on AWS; CloudFormation supports virtually every service that runs in AWS. Through CloudFormation templates we can set up many AWS services and configure various workloads, such as the EC2 compute service, the S3 storage service, and the IAM service for access control. CloudFormation is not the only way to configure and deploy services on AWS; we can also handle these processes manually using the AWS command-line interface, API, or web console.
Advantages of CloudFormation
CloudFormation provides us a range of benefits that make cloud service deployment and management faster and more efficient.
- Deployment speed
- Scaling up
- Service integration
- Consistency
- Security
- Easy updates
- Auditing and change management
Terms and Concepts
Before setting up AWS EKS with a CloudFormation template, let's first review the CloudFormation terms and concepts; this helps us understand the core ideas around which CloudFormation templates structure resources, variables, and functions.
- Stacks: A stack is a collection of AWS resources, such as EC2 instances, S3 buckets, and IAM access controls, that we can manage together using a single template.
- Template: A CloudFormation template is simply a text file that defines how AWS services or resources should be configured and deployed.
- Parameters: To apply unique settings for each deployment, we can use parameters. Parameters define custom values for each deployment that CloudFormation applies at runtime.
- Change sets: If we want to update a deployment, we can update the template we used to create it. We can then create a change set, which summarizes the changes the updated template will apply, before making the change.
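To make these concepts concrete, here is a minimal, hypothetical template (the stack it creates contains a single S3 bucket; `BucketSuffix` is an illustrative parameter, not part of the EKS template used later in this post):

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Description: Minimal example - a stack with one parameterized S3 bucket
Parameters:
  BucketSuffix:
    Type: String
    Description: Unique suffix for the bucket name, supplied per deployment
Resources:
  ExampleBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Sub "example-${BucketSuffix}"
```

Deploying this template produces a stack; editing the template and creating a change set would show, before execution, that `ExampleBucket` is about to be modified.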
CloudFormation Template
There are two ways to create a template:
- By using a pre-existing template as the foundation.
- Writing an entirely new template from scratch.
Here we are using a newly created CloudFormation template in YAML, which contains all the important pieces required for an EKS cluster: parameters, networking (VPC, subnets, internet gateway), the worker node group, etc., as shown in the diagram below.


# This CloudFormation template creates the following:
# VPC, 2 public subnets, an internet gateway, 2 private subnets, 2 persistence subnets (private)
# EKS cluster with one node group
AWSTemplateFormatVersion: '2010-09-09'
Description: EKS cluster using a VPC with two public subnets
Parameters:
  EKSClusterName:
    Type: String
    Description: Name of the Kubernetes cluster
    Default: eks-cluster
  NumWorkerNodes:
    Type: Number
    Description: Number of worker nodes to create
    Default: 2
  WorkerNodesInstanceType:
    Type: String
    Description: EC2 instance type for the worker nodes
    Default: t3.medium
  KeyPairName:
    Type: String
    Description: Name of an existing EC2 key pair (for SSH access to the worker node instances)
    Default: eks-test
Mappings:
  VpcIpRanges:
    Option1:
      VPC: 10.100.0.0/16
      PublicSubnet1: 10.100.0.0/20
      PublicSubnet2: 10.100.16.0/20
      PrivateSubnet1: 10.100.32.0/20
      PrivateSubnet2: 10.100.48.0/20
      PersistenceSubnet1: 10.100.64.0/20
      PersistenceSubnet2: 10.100.80.0/20
  # IDs of the "EKS-optimized AMIs" for the worker nodes:
  # https://docs.aws.amazon.com/eks/latest/userguide/eks-optimized-ami.html
  # IMPORTANT NOTE: choose EKS-compatible AMI IDs only
  EksAmiIds:
    us-east-2:
      Standard: ami-0b614a5d911900a9b
Resources:
  #============================================================================#
  # VPC
  #============================================================================#
  VPC:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: !FindInMap [ VpcIpRanges, Option1, VPC ]
      EnableDnsSupport: true
      EnableDnsHostnames: true
      Tags:
        - Key: Name
          Value: !Ref AWS::StackName
  PublicSubnet1:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref VPC
      CidrBlock: !FindInMap [ VpcIpRanges, Option1, PublicSubnet1 ]
      AvailabilityZone: !Select [ 0, !GetAZs '' ]
      Tags:
        - Key: Name
          Value: !Sub "${AWS::StackName}-PublicSubnet1"
        - Key: kubernetes.io/role/elb
          Value: "1"
        - Key: !Sub "kubernetes.io/cluster/${AWS::StackName}"
          Value: shared
  PublicSubnet2:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref VPC
      CidrBlock: !FindInMap [ VpcIpRanges, Option1, PublicSubnet2 ]
      AvailabilityZone: !Select [ 1, !GetAZs '' ]
      Tags:
        - Key: Name
          Value: !Sub "${AWS::StackName}-PublicSubnet2"
        - Key: kubernetes.io/role/elb
          Value: "1"
        - Key: !Sub "kubernetes.io/cluster/${AWS::StackName}"
          Value: shared
  PrivateSubnet1:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref VPC
      CidrBlock: !FindInMap [ VpcIpRanges, Option1, PrivateSubnet1 ]
      AvailabilityZone: !Select [ 0, !GetAZs '' ]
      Tags:
        - Key: Name
          Value: !Sub "${AWS::StackName}-PrivateSubnet1"
        - Key: kubernetes.io/role/internal-elb
          Value: "1"
        - Key: !Sub "kubernetes.io/cluster/${AWS::StackName}"
          Value: shared
  PrivateSubnet2:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref VPC
      CidrBlock: !FindInMap [ VpcIpRanges, Option1, PrivateSubnet2 ]
      AvailabilityZone: !Select [ 1, !GetAZs '' ]
      Tags:
        - Key: Name
          Value: !Sub "${AWS::StackName}-PrivateSubnet2"
        - Key: kubernetes.io/role/internal-elb
          Value: "1"
        - Key: !Sub "kubernetes.io/cluster/${AWS::StackName}"
          Value: shared
  PersistenceSubnet1:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref VPC
      CidrBlock: !FindInMap [ VpcIpRanges, Option1, PersistenceSubnet1 ]
      AvailabilityZone: !Select [ 0, !GetAZs '' ]
      Tags:
        - Key: Name
          Value: !Sub "${AWS::StackName}-PersistenceSubnet1"
  PersistenceSubnet2:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref VPC
      CidrBlock: !FindInMap [ VpcIpRanges, Option1, PersistenceSubnet2 ]
      AvailabilityZone: !Select [ 1, !GetAZs '' ]
      Tags:
        - Key: Name
          Value: !Sub "${AWS::StackName}-PersistenceSubnet2"
  InternetGateway:
    Type: AWS::EC2::InternetGateway
    Properties:
      Tags:
        - Key: Name
          Value: !Ref AWS::StackName
  VPCGatewayAttachment:
    Type: AWS::EC2::VPCGatewayAttachment
    Properties:
      InternetGatewayId: !Ref InternetGateway
      VpcId: !Ref VPC
  RouteTable:
    Type: AWS::EC2::RouteTable
    Properties:
      VpcId: !Ref VPC
      Tags:
        - Key: Name
          Value: !Sub "${AWS::StackName}-PublicSubnets"
  InternetGatewayRoute:
    Type: AWS::EC2::Route
    # DependsOn is mandatory because the route targets the InternetGateway
    # See: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-dependson.html#gatewayattachment
    DependsOn: VPCGatewayAttachment
    Properties:
      RouteTableId: !Ref RouteTable
      DestinationCidrBlock: 0.0.0.0/0
      GatewayId: !Ref InternetGateway
  NatGateway1EIP:
    Type: AWS::EC2::EIP
    DependsOn: VPCGatewayAttachment
    Properties:
      Domain: vpc
  NatGateway2EIP:
    Type: AWS::EC2::EIP
    DependsOn: VPCGatewayAttachment
    Properties:
      Domain: vpc
  NatGateway1:
    Type: AWS::EC2::NatGateway
    Properties:
      AllocationId: !GetAtt NatGateway1EIP.AllocationId
      SubnetId: !Ref PublicSubnet1
  NatGateway2:
    Type: AWS::EC2::NatGateway
    Properties:
      AllocationId: !GetAtt NatGateway2EIP.AllocationId
      SubnetId: !Ref PublicSubnet2
  PrivateRouteTable1:
    Type: AWS::EC2::RouteTable
    Properties:
      VpcId: !Ref VPC
      Tags:
        - Key: Name
          Value: !Sub "${AWS::StackName} Private Routes (AZ1)"
  DefaultPrivateRoute1:
    Type: AWS::EC2::Route
    Properties:
      RouteTableId: !Ref PrivateRouteTable1
      DestinationCidrBlock: 0.0.0.0/0
      NatGatewayId: !Ref NatGateway1
  PrivateRouteTable2:
    Type: AWS::EC2::RouteTable
    Properties:
      VpcId: !Ref VPC
      Tags:
        - Key: Name
          Value: !Sub "${AWS::StackName} Private Routes (AZ2)"
  DefaultPrivateRoute2:
    Type: AWS::EC2::Route
    Properties:
      RouteTableId: !Ref PrivateRouteTable2
      DestinationCidrBlock: 0.0.0.0/0
      NatGatewayId: !Ref NatGateway2
  PublicSubnet1RouteTableAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      SubnetId: !Ref PublicSubnet1
      RouteTableId: !Ref RouteTable
  PublicSubnet2RouteTableAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      SubnetId: !Ref PublicSubnet2
      RouteTableId: !Ref RouteTable
  PrivateSubnet1RouteTableAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      SubnetId: !Ref PrivateSubnet1
      RouteTableId: !Ref PrivateRouteTable1
  PrivateSubnet2RouteTableAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      SubnetId: !Ref PrivateSubnet2
      RouteTableId: !Ref PrivateRouteTable2
  PersistenceSubnet1RouteTableAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      SubnetId: !Ref PersistenceSubnet1
      RouteTableId: !Ref RouteTable
  PersistenceSubnet2RouteTableAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      SubnetId: !Ref PersistenceSubnet2
      RouteTableId: !Ref RouteTable
  #============================================================================#
  # Control plane
  #============================================================================#
  ControlPlane:
    Type: AWS::EKS::Cluster
    Properties:
      Name: !Ref AWS::StackName
      Version: "1.19"
      RoleArn: !GetAtt ControlPlaneRole.Arn
      ResourcesVpcConfig:
        SecurityGroupIds:
          - !Ref ControlPlaneSecurityGroup
        SubnetIds:
          - !Ref PrivateSubnet1
          - !Ref PrivateSubnet2
  ControlPlaneRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - eks.amazonaws.com
            Action: sts:AssumeRole
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/AmazonEKSClusterPolicy
        - arn:aws:iam::aws:policy/AmazonEKSServicePolicy
  #============================================================================#
  # Control plane security group
  #============================================================================#
  ControlPlaneSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Security group for the elastic network interfaces between the control plane and the worker nodes
      VpcId: !Ref VPC
      Tags:
        - Key: Name
          Value: !Sub "${AWS::StackName}-ControlPlaneSecurityGroup"
  ControlPlaneIngressFromWorkerNodesHttps:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      Description: Allow incoming HTTPS traffic (TCP/443) from worker nodes (for API server)
      GroupId: !Ref ControlPlaneSecurityGroup
      SourceSecurityGroupId: !Ref WorkerNodesSecurityGroup
      IpProtocol: tcp
      FromPort: 443
      ToPort: 443
  ControlPlaneEgressToWorkerNodesKubelet:
    Type: AWS::EC2::SecurityGroupEgress
    Properties:
      Description: Allow outgoing kubelet traffic (TCP/10250) to worker nodes
      GroupId: !Ref ControlPlaneSecurityGroup
      DestinationSecurityGroupId: !Ref WorkerNodesSecurityGroup
      IpProtocol: tcp
      FromPort: 10250
      ToPort: 10250
  ControlPlaneEgressToWorkerNodesHttps:
    Type: AWS::EC2::SecurityGroupEgress
    Properties:
      Description: Allow outgoing HTTPS traffic (TCP/443) to worker nodes (for pods running extension API servers)
      GroupId: !Ref ControlPlaneSecurityGroup
      DestinationSecurityGroupId: !Ref WorkerNodesSecurityGroup
      IpProtocol: tcp
      FromPort: 443
      ToPort: 443
  #============================================================================#
  # Worker nodes security group
  # Note: default egress rule (allow all traffic to all destinations) applies
  #============================================================================#
  WorkerNodesSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Security group for all the worker nodes
      VpcId: !Ref VPC
      Tags:
        - Key: Name
          Value: !Sub "${AWS::StackName}-WorkerNodesSecurityGroup"
        - Key: !Sub "kubernetes.io/cluster/${ControlPlane}"
          Value: "owned"
  WorkerNodesIngressFromWorkerNodes:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      Description: Allow all incoming traffic from other worker nodes
      GroupId: !Ref WorkerNodesSecurityGroup
      SourceSecurityGroupId: !Ref WorkerNodesSecurityGroup
      IpProtocol: "-1"
  WorkerNodesIngressFromControlPlaneKubelet:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      Description: Allow incoming kubelet traffic (TCP/10250) from control plane
      GroupId: !Ref WorkerNodesSecurityGroup
      SourceSecurityGroupId: !Ref ControlPlaneSecurityGroup
      IpProtocol: tcp
      FromPort: 10250
      ToPort: 10250
  WorkerNodesIngressFromControlPlaneHttps:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      Description: Allow incoming HTTPS traffic (TCP/443) from control plane (for pods running extension API servers)
      GroupId: !Ref WorkerNodesSecurityGroup
      SourceSecurityGroupId: !Ref ControlPlaneSecurityGroup
      IpProtocol: tcp
      FromPort: 443
      ToPort: 443
  #============================================================================#
  # Worker nodes (auto-scaling group)
  #============================================================================#
  WorkerNodesAutoScalingGroup:
    Type: AWS::AutoScaling::AutoScalingGroup
    UpdatePolicy:
      AutoScalingRollingUpdate:
        MinInstancesInService: 1
        MaxBatchSize: 1
    Properties:
      LaunchConfigurationName: !Ref WorkerNodesLaunchConfiguration
      MinSize: !Ref NumWorkerNodes
      MaxSize: !Ref NumWorkerNodes
      VPCZoneIdentifier:
        - !Ref PrivateSubnet1
        - !Ref PrivateSubnet2
      Tags:
        - Key: Name
          Value: !Sub "${AWS::StackName}-WorkerNodesAutoScalingGroup"
          PropagateAtLaunch: true
        # Without this tag, worker nodes are unable to join the cluster:
        - Key: !Sub "kubernetes.io/cluster/${ControlPlane}"
          Value: "owned"
          PropagateAtLaunch: true
  WorkerNodesRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - ec2.amazonaws.com
            Action: sts:AssumeRole
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
        - arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
        - arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly
  # IMPORTANT NOTE: we have to define a node group (type AWS::EKS::Nodegroup);
  # without it, no worker nodes will be attached to the cluster
  WorkerNodegroup:
    Type: AWS::EKS::Nodegroup
    DependsOn: ControlPlane
    Properties:
      ClusterName: !Sub "${AWS::StackName}"
      NodeRole: !GetAtt WorkerNodesRole.Arn
      ScalingConfig:
        MinSize: !Ref NumWorkerNodes
        DesiredSize: !Ref NumWorkerNodes
        MaxSize: !Ref NumWorkerNodes
      Subnets:
        - !Ref PrivateSubnet1
        - !Ref PrivateSubnet2
      Tags:
        Name: WorkerNodesAutoScalingGroup
  WorkerNodesLaunchConfiguration:
    Type: AWS::AutoScaling::LaunchConfiguration
    # Wait until the cluster is ready before launching worker nodes
    DependsOn: ControlPlane
    Properties:
      AssociatePublicIpAddress: false
      IamInstanceProfile: !Ref WorkerNodesInstanceProfile
      ImageId: !FindInMap
        - EksAmiIds
        - !Ref AWS::Region
        - Standard
      InstanceType: !Ref WorkerNodesInstanceType
      KeyName: !Ref KeyPairName
      SecurityGroups:
        - !Ref WorkerNodesSecurityGroup
      # IMPORTANT NOTE: this script bootstraps EKS settings on the EC2 machine; cfn-signal requires
      # --stack <AWS::StackName>, --resource <resource name>, --region <AWS::Region>
      # /usr/bin/ping -c 5 google.com (to ensure that the node has internet connectivity via the NAT gateway)
      UserData:
        Fn::Base64: !Sub |
          #!/bin/bash
          set -o xtrace
          /etc/eks/bootstrap.sh ${ControlPlane}
          /opt/aws/bin/cfn-signal \
            --exit-code $? \
            --stack ${AWS::StackName} \
            --resource WorkerNodesAutoScalingGroup \
            --region ${AWS::Region}
          /usr/bin/ping -c 5 google.com
  WorkerNodesInstanceProfile:
    Type: AWS::IAM::InstanceProfile
    Properties:
      Roles:
        - !Ref WorkerNodesRole
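The subnet ranges in the `VpcIpRanges` mapping are worth sanity-checking before deployment: each /20 must lie inside the /16 VPC block, and no two subnets may overlap. A quick standalone check with Python's `ipaddress` module (not part of the template) confirms this:

```python
import ipaddress

# CIDR blocks taken from the VpcIpRanges mapping in the template
vpc = ipaddress.ip_network("10.100.0.0/16")
subnets = [
    ipaddress.ip_network(cidr)
    for cidr in [
        "10.100.0.0/20",   # PublicSubnet1
        "10.100.16.0/20",  # PublicSubnet2
        "10.100.32.0/20",  # PrivateSubnet1
        "10.100.48.0/20",  # PrivateSubnet2
        "10.100.64.0/20",  # PersistenceSubnet1
        "10.100.80.0/20",  # PersistenceSubnet2
    ]
]

# Every subnet must sit inside the VPC block...
assert all(s.subnet_of(vpc) for s in subnets)
# ...and no two subnets may overlap
assert not any(
    a.overlaps(b) for i, a in enumerate(subnets) for b in subnets[i + 1:]
)
print("All subnet ranges are valid")
```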
Prerequisites
- AWS account
- AWS CLI
- CloudFormation template in YAML
We can create the EKS cluster from a CloudFormation template in two ways:
- AWS CLI
- AWS Console
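For the AWS CLI route, a single command deploys the template (a sketch; the file name `eks-cluster.yaml` is a placeholder for wherever the template is saved, and `CAPABILITY_IAM` is required because the template creates IAM roles):

```shell
aws cloudformation create-stack \
  --stack-name eks-cluster \
  --template-body file://eks-cluster.yaml \
  --capabilities CAPABILITY_IAM

# Block until the stack (and hence the cluster) is fully created
aws cloudformation wait stack-create-complete --stack-name eks-cluster
```

Note that the stack name becomes the cluster name, since the template sets `Name: !Ref AWS::StackName`.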
Steps to create an AWS EKS cluster using a CloudFormation template in the AWS Console:
1. Open the AWS Console and navigate to AWS CloudFormation.

2. Click on create stack.

3. Choose "Template is ready", since we are using an already created template, and specify the location of the template file.

4. Specify stack details.

5. Configure stack options such as tags and IAM roles for the EKS cluster.


6. Review the configuration for EKS Cluster specified in the template.

If we have already created the IAM role for our EKS cluster, uncheck the IAM role confirmation; if not, check the radio button to create an IAM role for the cluster. Then click Create stack.

We can check the status of the EKS cluster creation in the Events section, as shown in the image below.


We can see here our cluster is ready to use.

Now we can connect to our cluster through the AWS CLI. Run the following command to configure access to the AWS account, providing the access key, secret key, region name, and output format; we can also create a named profile for future use.
aws configure

Now, connect to our newly created EKS cluster with the following commands.
aws eks update-kubeconfig --name eks-cluster
aws eks describe-cluster --name eks-cluster

Now check the nodes attached to the EKS cluster.
kubectl get nodes

Now we deploy a test application, nginx, in the default namespace and port-forward the pod.
kubectl run nginx --image=nginx
kubectl port-forward nginx 8080:80

