Amazon WorkSpaces
Amazon WorkSpaces enables you to provision virtual, cloud-based Windows or Linux desktops, called WorkSpaces, for your users. WorkSpaces eliminates the need to procure and deploy hardware or install complex software. You can quickly add or remove users as your needs change, and users can access their virtual desktops from multiple devices or web browsers. You can refer to the official AWS documentation to learn more.
In this blog, we will learn how to deploy Amazon WorkSpaces on AWS using Terraform. We will also create the supporting resources we need, such as a VPC, subnets, and IAM roles. I have divided the deployment into steps that you can follow easily.
Steps to Deploy Amazon WorkSpaces
Step 1: Create three files
First, we will create three files to hold the Terraform configuration:
provider.tf
resource.tf
module.tf
Step 2: Define the Provider
First, we will define the AWS provider. Instead of hard-coding an access key and secret key, this deployment uses a named credential profile. You can create the profile by running the command below:
aws configure --profile <profile_name>
Since we have already created provider.tf, we will store this script in that file.
provider.tf
provider "aws" {
profile = "terraform"
region = "ap-northeast-1"
}
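Optionally, you can also pin the Terraform and AWS provider versions in provider.tf so the deployment stays reproducible. Below is a minimal sketch; the version constraints are only examples, so adjust them to the versions you have tested with:
terraform {
  required_version = ">= 1.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      # Example constraint; use the provider version you have validated
      version = ">= 4.0"
    }
  }
}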
Step 3: Deploying the Network
This step is optional: if you already have a network deployed, you can use it instead; otherwise, create one here. I am creating a VPC with two public and two private subnets in the ap-northeast-1 region using the community VPC module.
We will store this script in the module.tf file.
module.tf
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
name = "demo-dev"
cidr = "10.10.0.0/16"
azs = ["ap-northeast-1a", "ap-northeast-1c"]
private_subnets = ["10.10.1.0/24", "10.10.2.0/24"]
public_subnets = ["10.10.3.0/24", "10.10.4.0/24"]
enable_nat_gateway = true
single_nat_gateway = true
one_nat_gateway_per_az = false
enable_dns_hostnames = true
enable_dns_support = true
tags = {
Name = "demo-dev"
Environment = "Development"
}
}
Step 4: Deploying the AWS Managed Directory Service
Now we are going to deploy an AWS managed directory service using Terraform. This step is also optional: if you already have a directory deployed, you can use it instead. There are three types of directory services: SimpleAD, ADConnector, and MicrosoftAD. I wrote a separate blog, How to Deploy AWS Directory Service using Terraform, which covers that deployment in more detail.
The size setting applies only to the SimpleAD and ADConnector types; a SimpleAD variant is sketched after the MicrosoftAD resource below.
resource.tf
resource "aws_directory_service_directory" "aws-managed-ad" {
name = "demo.local"
description = "Muzakkir Managed Directory Service"
password = "Admin@123"
edition = "Standard"
type = "MicrosoftAD"
vpc_settings {
vpc_id = module.vpc.vpc_id
subnet_ids = module.vpc.private_subnets
}
tags = {
Name = "Muzakkir-managed-ad"
Environment = "Development"
}
}
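For comparison, if you choose the SimpleAD type instead of MicrosoftAD, the resource takes the size argument rather than edition. The following is only a rough sketch and is not part of this deployment:
resource "aws_directory_service_directory" "simple-ad" {
  name     = "demo.local"
  password = "Admin@123"
  size     = "Small" # size is valid only for SimpleAD and ADConnector
  type     = "SimpleAD"

  vpc_settings {
    vpc_id     = module.vpc.vpc_id
    subnet_ids = module.vpc.private_subnets
  }
}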
Step 5: Updating the DHCP Options in the VPC to Use AWS Directory Service DNS Servers
Now we need to update the DHCP (Dynamic Host Configuration Protocol) options in the VPC so that machines can join the directory. This step is also optional if the DHCP options of your VPC already point to the directory's DNS servers.
resource.tf
resource "aws_vpc_dhcp_options" "dns_resolver" {
domain_name_servers = aws_directory_service_directory.aws-managed-
ad.dns_ip_addresses
domain_name = "demolocal"
tags = {
Name = "demo-dev"
Environment = "Development"
}
}
resource "aws_vpc_dhcp_options_association" "dns_resolver" {
vpc_id = module.vpc.vpc_id
dhcp_options_id = aws_vpc_dhcp_options.dns_resolver.id
}
Step 6: Defining the IAM Role for WorkSpaces
The Amazon WorkSpaces service requires an IAM role named workspaces_DefaultRole to launch WorkSpaces. Below is the code to create this role and attach the required policies:
resource.tf
data "aws_iam_policy_document" "workspaces" {
statement {
actions = ["sts:AssumeRole"]
principals {
type = "Service"
identifiers = ["workspaces.amazonaws.com"]
}
}
}
resource "aws_iam_role" "workspaces-default" {
name = "workspaces_DefaultRole"
assume_role_policy = data.aws_iam_policy_document.workspaces.json
}
resource "aws_iam_role_policy_attachment" "workspaces-default-service-access" {
role = aws_iam_role.workspaces-default.name
policy_arn = "arn:aws:iam::aws:policy/AmazonWorkSpacesServiceAccess"
}
resource "aws_iam_role_policy_attachment" "workspaces-default-self-service-access" {
role = aws_iam_role.workspaces-default.name
policy_arn = "arn:aws:iam::aws:policy/AmazonWorkSpacesSelfServiceAccess"
}
Step 7: Defining an AWS WorkSpaces Directory
In this section, we will create the AWS WorkSpaces directory, which is used to store and manage information about our Amazon WorkSpaces and users.
Here, I have used a minimal configuration for aws_workspaces_directory, but you can extend it to suit your requirements; a sketch of some optional settings follows the code below.
resource.tf
resource "aws_workspaces_directory" "workspaces-directory" {
directory_id = aws_directory_service_directory.aws-managed-ad.id
subnet_ids = module.vpc.private_subnets
depends_on = [aws_iam_role.workspaces-default]
}
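If you want more than the minimal configuration, the same resource also accepts optional blocks such as self_service_permissions and workspace_creation_properties. The sketch below shows how the resource above could be extended; all values are examples, not requirements:
resource "aws_workspaces_directory" "workspaces-directory" {
  directory_id = aws_directory_service_directory.aws-managed-ad.id
  subnet_ids   = module.vpc.private_subnets

  # Optional: allow users to manage parts of their own WorkSpaces
  self_service_permissions {
    change_compute_type  = false
    increase_volume_size = true
    rebuild_workspace    = false
    restart_workspace    = true
    switch_running_mode  = true
  }

  # Optional: defaults applied to WorkSpaces created in this directory
  workspace_creation_properties {
    enable_internet_access              = false
    user_enabled_as_local_administrator = true
  }

  depends_on = [aws_iam_role.workspaces-default]
}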
Step 8: Listing the WorkSpaces Bundles
We will need the Amazon WorkSpaces bundle IDs to launch the WorkSpaces. To get the list, open AWS CloudShell or a terminal with the AWS CLI configured and run:
aws workspaces describe-workspace-bundles --owner AMAZON
Step 9: Defining the Amazon WorkSpaces Bundles
Running the command above returns the list of available bundle images. We can use that information to define the bundles used to create the WorkSpaces. Here, we will reference two Standard bundles, each with 2 vCPUs, 4 GB of memory, and 50 GB of storage (an alternative lookup by bundle name is sketched after the code).
provider.tf
# This is a Windows Standard Bundle
data "aws_workspaces_bundle" "standard_windows" {
  bundle_id = "wsb-gk1wpk43z"
}

# This is a Linux Standard Bundle
data "aws_workspaces_bundle" "standard_linux" {
  bundle_id = "wsb-clj85qzj1"
}
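Bundle IDs can differ between regions, so hard-coding them can be brittle. The aws_workspaces_bundle data source can also look a bundle up by owner and name; below is a sketch where the bundle name is only an example and must match a name returned by the CLI command above:
# Alternative lookup by owner and bundle name instead of a hard-coded ID
data "aws_workspaces_bundle" "standard_linux_by_name" {
  owner = "AMAZON"
  name  = "Standard with Amazon Linux 2" # example name; verify it in the CLI output
}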
Step 10: Defining a KMS Key to Encrypt WorkSpaces Disk Volumes
In this step, we will create a KMS key to encrypt the WorkSpaces disk volumes. This step is optional: you can skip it if you do not want to encrypt the volumes, or you can reference an existing KMS key instead. An optional key alias is sketched after the code below.
resource.tf
resource "aws_kms_key" "workspaces-kms" {
description = "Muzakkir KMS"
deletion_window_in_days = 7
}
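Optionally, you can also attach an alias to the key so it is easier to identify in the console. A small sketch; the alias name is arbitrary:
resource "aws_kms_alias" "workspaces-kms-alias" {
  name          = "alias/workspaces-demo" # example alias name
  target_key_id = aws_kms_key.workspaces-kms.key_id
}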
Step 11: Defining an Amazon WorkSpace
Finally, we have all the required resources, so we are ready to deploy the WorkSpace:
resource.tf
resource "aws_workspaces_workspace" "workspaces" {
directory_id = aws_workspaces_directory.workspaces-directory.id
bundle_id = data.aws_workspaces_bundle.standard_linux.id
# Admin is the Administrator of the AWS Directory Service
user_name = "Admin"
root_volume_encryption_enabled = true
user_volume_encryption_enabled = true
volume_encryption_key = aws_kms_key.workspaces-kms.arn
workspace_properties {
compute_type_name = "STANDARD"
user_volume_size_gib = 50
root_volume_size_gib = 80
running_mode = "AUTO_STOP"
running_mode_auto_stop_timeout_in_minutes = 60
}
tags = {
Name = "demo-workspaces"
Environment = "dev"
}
depends_on = [
aws_iam_role.workspaces-default,
aws_workspaces_directory.workspaces-directory
]
}
Here, compute_type_name is the compute bundle type.
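Once the WorkSpace is created, it can be handy to expose a few of its attributes as Terraform outputs, for example to grab the WorkSpace ID or IP address after terraform apply. A minimal sketch based on the attributes exported by aws_workspaces_workspace:
output "workspace_id" {
  value = aws_workspaces_workspace.workspaces.id
}

output "workspace_ip_address" {
  value = aws_workspaces_workspace.workspaces.ip_address
}

output "workspace_computer_name" {
  value = aws_workspaces_workspace.workspaces.computer_name
}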
Step 12: Execute the Terraform command to initialize the working directory
terraform init

Step 13: Apply the changes
First, run the plan command. This command shows the execution plan for deploying the resources:
terraform plan

Now we can apply the configuration:
terraform apply -auto-approve

Step 14: Verify the resources
Now that the directory and the WorkSpace have been created, you can verify them in the AWS console.


Step 15: Destroy all resources
To destroy all the resources, we just need to run the command below:
terraform destroy -auto-approve

Conclusion:
In this blog, we have learned how to deploy Amazon WorkSpaces using Terraform. If you found this blog helpful, please like and share it with your friends.