Terraform (Infrastructure as Code)


In this blog, we are going to create AWS infrastructure using Terraform. In other words, we are going to treat our infrastructure as code ("Infrastructure as Code").

Before moving to the setup, let's start with a short introduction.

Introduction:

Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. It can manage existing and popular service providers as well as custom in-house solutions. It is created by HashiCorp.

The infrastructure Terraform can manage (i.e., AWS resources, in our case) includes low-level components such as compute instances, storage, and networking, as well as high-level components such as DNS entries, SaaS features, etc.

In order to create and destroy infrastructure, we have to run the following commands in the given sequence.

  • terraform init: Initializes the working directory. When you create a new configuration, or check out an existing configuration from version control, you need to run this command first.
  • terraform validate: Validates your configuration. If your configuration is valid, it will return a success message.
  • terraform plan: Creates an execution plan. It is a convenient way to check whether the execution plan for a set of changes matches your expectations, without making any changes to real resources or to the state.
  • terraform apply: Applies the changes described in the execution plan, creating, updating, or destroying the real resources.
  • terraform destroy: Destroys the resources defined in your configuration.

Before moving to the architecture, let's go through the prerequisites.

To follow this tutorial you will need:

  1. An AWS account
  2. The AWS CLI installed
  3. Your AWS credentials configured locally.

The architecture we are going to create is:

As you can see in the diagram, we need to create a VPC, subnets, a NAT gateway, and EC2 instances in two different subnets.

We are going to create this architecture in the form of modules. So, we will look at modules first and then move towards creating the .tf files.

Terraform Modules:

A module is a set of .tf (Terraform configuration) files in a single directory. Even a simple configuration consisting of a single directory with one or more .tf files is a module. When you run commands directly from such a directory, it is considered the root module.

Note: Each module is going to be created inside the ‘modules/’ directory.

You may have a simple set of Terraform configuration files such as:

.
├── LICENSE
├── README.md
├── main.tf
├── variables.tf
└── outputs.tf

So we need to create .tf files for every module separately. A module could represent an individual resource like a VPC, a subnet, or an EC2 instance.
Once you are done creating your modules, you need to call them in the main.tf file as follows (the name ‘main.tf’ could be anything).

Calling modules:

Terraform commands will only directly use the configuration files in one directory, which is usually the current working directory. However, your configuration can use module blocks to call modules in other directories. When Terraform encounters a module block, it loads and processes that module’s configuration files.

main.tf

  module "website_s3_bucket" 
   {  source = "./modules/aws-s3-static-website-bucket"
      bucket_name = "<UNIQUE BUCKET NAME>"
      tags = 
        {    Terraform   = "true"    Environment = "dev"  }
   }

So let's start with the VPC. Whenever you start writing a configuration, you need to create a file with the extension .tf.

myvpc.tf

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
  }
}

# Provider configuration (region and credentials are picked up
# from your local AWS CLI configuration).
provider "aws" {}

resource "aws_vpc" "rd_vpc" {
  cidr_block           = var.cidr_block
  instance_tenancy     = var.tenancy
  enable_dns_support   = var.dns_support
  enable_dns_hostnames = var.dns_hostnames

  tags = {
    "iac" = "terraform"
    "app" = "dextrus"
  }
}

variables.tf

variable "cidr_block" {
  type        = string
  description = "The CIDR block for the VPC."
}

variable "tenancy" {
  type        = string
  description = "A tenancy option for instances launched into the VPC."
}

variable "dns_support" {
  type        = string
  description = "A boolean flag to enable/disable DNS support in the VPC."
}

variable "dns_hostnames" {
  type        = string
  description = "A boolean flag to enable/disable DNS hostnames in the VPC"
}

You can hard-code the values directly in the myvpc.tf file, but this is not a good practice; it is better to pass them in as variables.
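For example, if you run Terraform directly from this directory (so it acts as the root module), the values could be supplied through a terraform.tfvars file, which Terraform loads automatically; otherwise they are passed in by the calling module block. The values below are only placeholders, not the ones used in the actual setup:

terraform.tfvars

# Example values only; adjust the CIDR block and flags to your own setup.
cidr_block    = "10.0.0.0/16"
tenancy       = "default"
dns_support   = true
dns_hostnames = true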

output.tf

output "id" {
  value       = "${aws_vpc.rd_vpc.id}"
  description = "The ID of the VPC."
}

Now let's divide this VPC into parts using subnets.

subnets.tf

resource "aws_subnet" "subnet" {
  count = length(var.subnet_cidr)

  vpc_id                  = var.vpc_id
  cidr_block              = var.subnet_cidr[count.index]
  availability_zone       = var.az[count.index]
  map_public_ip_on_launch = true
  tags = {
    "iac"       = "terraform"
    "app"       = "dextrus"
    "component" = "subnet"
  }
}

variables.tf

variable "vpc_id" {
  type        = string
  description = "VPC Id."
}

variable "subnet_cidr" {
  type = list
}

variable "az" {
  type = list
}
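The subnet module shown here only defines the resource and its variables. In practice you would also add an outputs file, similar to the sketch below, so that other modules (for example, the EC2 module) can consume the subnet IDs; the output name is just an illustration and is not part of the original module:

outputs.tf

# Illustrative sketch: expose the IDs of all subnets created by this module.
output "subnet_ids" {
  value       = aws_subnet.subnet[*].id
  description = "The IDs of the subnets."
}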

Now you can see that we have created two resources using two different files. We can also add as many resources as we like to a single file.

To create the whole infrastructure shown in the diagram, I have pushed all my scripts to my Git repo, using a different module for each individual resource. You can find the repo here.
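To give a rough idea of how the modules fit together, the root main.tf could look like the sketch below. The module names, source paths, and values are illustrative assumptions, not the exact contents of the repository:

main.tf (root module)

# Illustrative sketch only; module names, paths, and values are assumptions.
module "vpc" {
  source        = "./modules/vpc"
  cidr_block    = "10.0.0.0/16"
  tenancy       = "default"
  dns_support   = true
  dns_hostnames = true
}

module "subnets" {
  source      = "./modules/subnet"
  vpc_id      = module.vpc.id # consumes the VPC module's "id" output
  subnet_cidr = ["10.0.1.0/24", "10.0.2.0/24"]
  az          = ["us-east-1a", "us-east-1b"]
}

# The NAT gateway and EC2 modules from the diagram would follow the same pattern.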

So, in this way, we can build our AWS infrastructure and create AWS resources through Infrastructure as Code.

In this blog we did not go into every detail, so you can refer to the following links to understand more.

You can refer to another blog as well here.

Reference:

Written by 

Sakshi Gawande is a software consultant at Knoldus with more than 2 years of experience, working as a DevOps engineer. She always wants to explore new things and solve problems herself. Personally, she likes dancing, painting, and traveling.