Terraform Configurations for AWS Infrastructure

Reading Time: 5 minutes

What is Terraform?

Terraform is an infrastructure as code (IaC) tool that lets you build, change, and version infrastructure safely and efficiently. This includes low-level components such as compute instances, storage, and networking, as well as high-level components such as DNS entries and SaaS features. Terraform configurations can manage both existing service providers and custom in-house solutions. In this blog, we will look at the configurations required to spin up resources in the AWS cloud. To learn more about Terraform as a tool, check out this blog for reference.

Scenario:

We will deploy the nginx web server and host a static web page on two AWS Elastic Compute Cloud (EC2) instances. We will also deploy an application load balancer that distributes traffic between these two instances. The whole setup will be provisioned with Terraform as Infrastructure as Code.

List of resources we will configure with Terraform:

  1. EC2 instances
  2. Security Groups
  3. VPC
  4. Subnet
  5. Internet Gateway
  6. Route tables
  7. Route table association
  8. Load balancer

Terraform configuration files

Let’s look at the Terraform configuration files needed to spin up the above-mentioned resources:

providers.tf


This file contains the provider information that Terraform uses to install the relevant plugins. Here, we use the aws provider to configure AWS services. The shared credentials file points to the path where my AWS_ACCESS_KEY and AWS_SECRET_KEY are stored; Terraform needs these credentials to be authorized to access the services in that AWS account. (Depending on your AWS provider version, this argument may be named shared_credentials_files and take a list instead.) Another way to pass this information is to set the access key and secret key directly as key-value pairs in the provider block, but make sure to remove them from your files before pushing to a source code management repository. There are further ways to pass credentials for authorization, such as environment variables.

provider "aws" {
  shared_credentials_file = "/home/knoldus/.aws/cred"
  region     = var.aws_region
}
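
As mentioned above, the access key and secret key can also be passed directly in the provider block, used instead of (not alongside) the block above. The snippet below is only a sketch with placeholder values; never commit real credentials to a repository:

# Alternative sketch: static credentials passed directly to the provider.
# The values below are placeholders, not real keys.
provider "aws" {
  region     = var.aws_region
  access_key = "MY_AWS_ACCESS_KEY"
  secret_key = "MY_AWS_SECRET_KEY"
}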

locals.tf

This file contains the local variables that we declare and reuse across the configuration files.

locals {
  common_tags = {
    environment = var.environment
  }
}
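
If you later want resource-specific tags on top of these shared ones, Terraform's built-in merge() function can combine maps. A hypothetical one-liner (the Name value is just an illustration) that could be used in any resource's tags argument:

# Hypothetical: combine the shared tags with a per-resource Name tag
tags = merge(local.common_tags, { Name = "pizza-vpc" })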

network.tf

In this file, we will define the necessary network configurations such as VPC, subnets, internet gateway, route table, and security groups.

data "aws_availability_zones" "available" {}
# Define vpc
resource "aws_vpc" "vpc" {
  cidr_block           = var.vpc_cidr_block
  enable_dns_hostnames = var.enable_dns_hostnames
  tags = local.common_tags
}

# Define subnets
resource "aws_subnet" "subnet1" {
  cidr_block              = var.vpc_subnets_cidr_blocks[0]
  vpc_id                  = aws_vpc.vpc.id
  map_public_ip_on_launch = var.map_public_ip_on_launch
  availability_zone       = data.aws_availability_zones.available.names[0]

  tags = local.common_tags
}

resource "aws_subnet" "subnet2" {
  cidr_block              = var.vpc_subnets_cidr_blocks[1]
  vpc_id                  = aws_vpc.vpc.id
  map_public_ip_on_launch = var.map_public_ip_on_launch
  availability_zone       = data.aws_availability_zones.available.names[1]

  tags = local.common_tags
}

# Define Internet Gateway
resource "aws_internet_gateway" "igw" {
  vpc_id = aws_vpc.vpc.id

  tags = local.common_tags
}

# Define route table and route table association with the subnets
resource "aws_route_table" "rtb" {
  vpc_id = aws_vpc.vpc.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.igw.id
  }

  tags = local.common_tags
}

resource "aws_route_table_association" "rta-subnet1" {
  subnet_id      = aws_subnet.subnet1.id
  route_table_id = aws_route_table.rtb.id
}

resource "aws_route_table_association" "rta-subnet2" {
  subnet_id      = aws_subnet.subnet2.id
  route_table_id = aws_route_table.rtb.id
}

# Define security group
# ALB Security Group
resource "aws_security_group" "alb_sg" {
  name   = "nginx_alb_sg"
  vpc_id = aws_vpc.vpc.id

  #Allow HTTP from anywhere
  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  #allow all outbound
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = local.common_tags

}

# Nginx security group 
resource "aws_security_group" "nginx-sg" {
  name   = "nginx_sg"
  vpc_id = aws_vpc.vpc.id

  # HTTP access from VPC
  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = [var.vpc_cidr_block]
  }

  # outbound internet access
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = local.common_tags
}
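
Note that the instances defined in the next file reference a key pair (key_name = "demo"), but nginx_sg does not open port 22, so you cannot SSH into them as-is. If you need SSH access for debugging, an ingress rule along these lines could be added to the nginx security group (an optional sketch; ideally restrict the CIDR to your own IP):

  # Optional: allow SSH for debugging (tighten the CIDR in real use)
  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }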

instances.tf

In this file, we define the EC2 instances. The user data contains the script that installs the nginx web server on both instances. It deletes nginx's default index.html file and creates a new HTML file with some static content. The HTML file differs between the two instances purely to make the load balancing between them visible.

# Define ec2 Instance
resource "aws_instance" "nginx1" {
  ami                    = var.aws_ami
  instance_type          = var.instance_type
  subnet_id              = aws_subnet.subnet1.id
  vpc_security_group_ids = [aws_security_group.nginx-sg.id]
  key_name               = "demo"
  user_data = <<EOF
#!/bin/bash
sudo apt update
sudo apt install nginx -y
sudo service nginx start
sudo rm /var/www/html/index*.html
echo '<html><head><title>Pizza Server 1</title></head><body style="background-color:#1F778D"><p style="text-align: center;"><span style="color:#FFFFFF;"><span style="font-size:28px;">Pizza Server 1 🍕</span></span></p></body></html>' | sudo tee /var/www/html/index.html
EOF

  tags = local.common_tags

}

resource "aws_instance" "nginx2" {
  ami                    = var.aws_ami
  instance_type          = var.instance_type
  subnet_id              = aws_subnet.subnet2.id
  vpc_security_group_ids = [aws_security_group.nginx-sg.id]
  key_name               = "demo"
  user_data = <<EOF
#!/bin/bash
sudo apt update
sudo apt install nginx -y
sudo service nginx start
sudo rm /var/www/html/index*.html
echo '<html><head><title>Pizza Server 2</title></head><body style="background-color:#1F778D"><p style="text-align: center;"><span style="color:#FFFFFF;"><span style="font-size:28px;">Pizza Server 2 🍕</span></span></p></body></html>' | sudo tee /var/www/html/index.html
EOF

  tags = local.common_tags

}

loadbalancer.tf

This file contains the load balancer resource, the target group, and the listener for the service. Both instances are attached to the target group to enable load balancing.

# Define application load balancer
resource "aws_lb" "nginx" {
  name               = "pizza-alb"
  internal           = false
  load_balancer_type = "application"
  security_groups    = [aws_security_group.alb_sg.id]
  subnets            = [aws_subnet.subnet1.id, aws_subnet.subnet2.id]

  enable_deletion_protection = false

  tags = local.common_tags
}

resource "aws_lb_target_group" "nginx" {
  name     = "nginx-alb-tg"
  port     = 80
  protocol = "HTTP"
  vpc_id   = aws_vpc.vpc.id

  tags = local.common_tags
}

resource "aws_lb_listener" "nginx" {
  load_balancer_arn = aws_lb.nginx.arn
  port              = "80"
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.nginx.arn
  }

  tags = local.common_tags
}

resource "aws_lb_target_group_attachment" "nginx1" {
  target_group_arn = aws_lb_target_group.nginx.arn
  target_id        = aws_instance.nginx1.id
  port             = 80
}

resource "aws_lb_target_group_attachment" "nginx2" {
  target_group_arn = aws_lb_target_group.nginx.arn
  target_id        = aws_instance.nginx2.id
  port             = 80
}
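
The target group above relies on the default health check settings. If you want explicit control over how the ALB decides whether an instance is healthy, a health_check block can be added inside the aws_lb_target_group resource; the values below are just a reasonable starting point, not part of the original setup:

  # Optional health check for aws_lb_target_group.nginx
  health_check {
    path                = "/"
    protocol            = "HTTP"
    matcher             = "200"
    interval            = 30
    timeout             = 5
    healthy_threshold   = 2
    unhealthy_threshold = 2
  }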

outputs.tf

This file defines the outputs that Terraform prints after an apply. Here, we need the load balancer’s public DNS name as an output.

output "aws_alb_public_dns" {
  value = aws_lb.nginx.dns_name
}
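
If you also want to verify each web server individually before testing the ALB, the instances' public IPs can be exposed as additional outputs (optional, not required for the load balancer itself):

# Optional: public IPs of the individual instances
output "nginx1_public_ip" {
  value = aws_instance.nginx1.public_ip
}

output "nginx2_public_ip" {
  value = aws_instance.nginx2.public_ip
}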

vars.tf

This file contains the variable definitions and descriptions.

variable "aws_region" {
type = string
description = "Region for AWS Resources"
default = "us-east-1"
}

variable "enable_dns_hostnames" {
type = bool
description = "Enable DNS hostnames in VPC"
default = true
}

variable "vpc_cidr_block" {
type = string
description = "Base CIDR Block for VPC"
default = "10.0.0.0/16"
}

variable "vpc_subnets_cidr_blocks" {
type = list(string)
description = "CIDR Blocks for Subnets in VPC"
default = ["10.0.0.0/24", "10.0.1.0/24"]
}

variable "map_public_ip_on_launch" {
type = bool
description = "Map a public IP address for Subnet instances"
default = true
}

variable "instance_type" {
type = string
description = "Type for EC2 Instance"
default = "t2.micro"
}
variable "aws_ami" {
type = string
description = "AMI ID"
}
variable "environment" {
type = string
description = "Environment name"
default = "dev"
}
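
Since aws_ami has no default, Terraform will prompt for it unless it is supplied via terraform.tfvars (as we do below). On Terraform 0.13 and later, you can also attach a validation block to a variable to catch bad values early; for example, the vpc_cidr_block definition could be extended like this (an optional sketch):

variable "vpc_cidr_block" {
  type        = string
  description = "Base CIDR Block for VPC"
  default     = "10.0.0.0/16"

  # Reject values that are not valid CIDR notation
  validation {
    condition     = can(cidrhost(var.vpc_cidr_block, 0))
    error_message = "vpc_cidr_block must be a valid CIDR block, e.g. 10.0.0.0/16."
  }
}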

terraform.tfvars

This file contains the values for all the variables. You can adjust these values to match your use case. The AMI used below is Ubuntu 20.04 LTS (keep in mind that AMI IDs are region-specific).

aws_region              = "us-east-1"
aws_ami                 = "ami-083654bd07b5da81d"
instance_type           = "t2.micro"
vpc_cidr_block          = "10.0.0.0/16"
vpc_subnets_cidr_blocks = ["10.0.0.0/24", "10.0.1.0/24"]
environment             = "dev"
map_public_ip_on_launch = true
enable_dns_hostnames    = true

Apply the Terraform configurations:

  1. Initialize Terraform in the working directory. This initializes the backend and installs all the necessary provider plugins.

terraform init

  2. Run a plan to review the changes Terraform will make to your infrastructure.

terraform plan

  3. Once you have reviewed and confirmed the plan, apply the configuration changes.

terraform apply

  4. To destroy the infrastructure when you are done, run:

terraform destroy

Result:

You will get the load balancer’s public DNS name as an output. Copy and paste it into your web browser. Refresh the page a few times; the response should alternate between Pizza Server 1 and Pizza Server 2, which confirms that the load balancer is distributing traffic across both instances.
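
If you need the DNS name again later, it can be read back from the state at any time with the terraform output command:

terraform output aws_alb_public_dns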

References:

https://blog.knoldus.com/?s=terraform

Written by 

Vidushi Bansal is a Software Consultant [DevOps] at Knoldus Inc. She is passionate about learning and exploring new technologies.