Automating website deployment on AWS using Terraform

In this agile world, there is a need to automate every technology for faster deployment and faster reach to customers. Nowadays, automating website and app deployment is common practice at many companies that want fast, global reach to their customers, using various automation tools. So in this article I will show you how to automate your website deployment on AWS with the help of Terraform. Before getting to the hands-on part, I will first explain which AWS services and which parts of Terraform we will be using.

What is AWS?

Amazon Web Services (AWS) is a secure cloud services platform offering compute power, database storage, content delivery and other functionality to help businesses scale and grow. Running web and application servers in the cloud to host dynamic websites is one of its most common uses.

Some of the AWS services that I will be using:

1) Amazon EC2

Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud. It gives us all the computational resources we need, such as RAM, CPU, hard disk and OS.

2) Amazon EBS

Amazon Elastic Block Store (EBS) is an easy-to-use, high-performance block storage service designed for use with Amazon Elastic Compute Cloud (EC2). EBS is a subservice of EC2. Block storage provides persistent volumes which can be used for creating partitions, and an operating system can even be installed on one.

3) Amazon S3

Amazon Simple Storage Service (S3) is storage for the Internet. It is designed to make web-scale computing easier for developers. Amazon S3 has a simple web services interface that we can use to store and retrieve any amount of data, at any time, from anywhere on the web. S3 is an object store usually used for storing data permanently. Unlike an EBS volume, an S3 bucket is not attached to a particular instance: its objects can be accessed over HTTP from any EC2 instance (or any machine) in the world.

4) Amazon CloudFront

Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency and high transfer speeds, all within a developer-friendly environment.

To reduce latency, static data like the images and videos of a website can be served from an edge location near the customer rather than from the origin data center.

Amazon edge locations

There are still many more Amazon services we could use, but I will be using these four for my website deployment.

What is Terraform?

In today's world, a lot of companies use multiple clouds in production, i.e. they take some services from AWS and some from GCP, or some services from private clouds like OpenStack. So they effectively use multiple clouds for one single project. But the problem is that every cloud has its own CLI commands and SDKs for accessing its services, so it becomes tough for cloud engineers to learn the CLI commands of every cloud. This creates the need for one standard tool that works across all the clouds, and that tool is Terraform.

Terraform makes use of the HashiCorp Configuration Language (HCL) for accessing all the different types of clouds; in fact, many other services like Kubernetes, MySQL and more can be managed with the help of Terraform.
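For example, a single configuration can declare providers for more than one cloud side by side. Here is a minimal sketch (the GCP project id and the regions are hypothetical):

# One Terraform configuration can manage resources in several clouds at once.
provider "aws" {
  region = "ap-south-1"
}

provider "google" {
  project = "my-gcp-project" # hypothetical project id
  region  = "asia-south1"
}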

Prerequisites

  1. Terraform should be installed. If it is not, you can install it from here.
  2. You should have an AWS account.

So let's jump straight to the hands-on part.

Steps for automating the website deployment

  1. Create a key and a security group for the EC2 instance which allows port 80.
  2. Launch an EC2 instance.
  3. In this EC2 instance, use the key and security group created in step 1.
  4. Launch one EBS volume and mount it onto /var/www/html.
  5. The developer has uploaded the code into a GitHub repo, which also contains some images.
  6. Copy the GitHub repo code into /var/www/html.
  7. Create an S3 bucket, copy/deploy the images from the GitHub repo into the S3 bucket, and make them publicly readable.
  8. Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html.

Step 1:

Creating a key and a security group which allows port 80

First, create a file with a .tf extension, and then write the below code for creating a key.

provider "aws" {
  region  = "ap-south-1"
  profile = "mymilind"
}

resource "tls_private_key" "example" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

resource "aws_key_pair" "deployer" {
  key_name   = "deployer-key1"
  public_key = tls_private_key.example.public_key_openssh
}

The "aws" provider block is used for logging into your AWS account. The "tls_private_key" resource generates a key pair using the RSA algorithm, and "aws_key_pair" registers its public key with AWS.
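If you also want to SSH into the instance manually later, the generated private key can be written to disk with the local provider. A minimal sketch, assuming you are fine with the key living next to your .tf files (the file name is my choice):

# Save the generated private key locally so the instance
# can also be reached with a normal ssh client.
resource "local_file" "private_key" {
  content         = tls_private_key.example.private_key_pem
  filename        = "deployer-key1.pem" # hypothetical file name
  file_permission = "0400"
}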

resource "aws_security_group" "allow_tls" {
  name        = "allow_tls"
  description = "Allow inbound HTTP and SSH traffic"
  vpc_id      = "vpc-f6869b9e"

  ingress {
    description = "HTTP from anywhere"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "SSH from anywhere"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "allow_tls"
  }
}

In AWS, a security group is a kind of firewall where we can define rules for different protocols and ports. I will be launching a website, so I have opened port 80 (and port 22 for SSH).

Security group named "allow_tls" created
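As an aside, if you later serve the site over HTTPS as well, one more ingress block in the same resource is enough. A hedged sketch:

ingress {
  description = "HTTPS from anywhere"
  from_port   = 443
  to_port     = 443
  protocol    = "tcp"
  cidr_blocks = ["0.0.0.0/0"]
}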

Step 2:

Launching an EC2 instance

resource "aws_instance" "web" {
  depends_on = [aws_key_pair.deployer, aws_security_group.allow_tls]

  ami             = "ami-0447a12f28fddb066"
  instance_type   = "t2.micro"
  key_name        = "deployer-key1"
  security_groups = ["allow_tls"]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = tls_private_key.example.private_key_pem
    host        = self.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo yum install httpd php git -y",
      "sudo systemctl restart httpd",
      "sudo systemctl enable httpd",
    ]
  }

  tags = {
    Name = "WEBSERVER"
  }
}

Here I am launching the EC2 instance using the "aws_instance" resource. I have assigned the key and security group which we created in step 1, and I also install some important software like PHP, httpd and git for hosting the website. A provisioner is used to run commands inside the operating system; "remote-exec" does so over SSH. (Note that inside a resource's own connection block the instance must be referenced as self, e.g. self.public_ip.)

EC2 instance named as WEBSERVER created
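As an aside, hardcoding the AMI id ties the code to one region. A hedged alternative is to look up the AMI with a data source; the filter values below are my assumptions for Amazon Linux 2:

# Look up the latest Amazon Linux 2 AMI instead of hardcoding its id.
data "aws_ami" "amazon_linux" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*-x86_64-gp2"]
  }
}

# Then, inside aws_instance.web: ami = data.aws_ami.amazon_linux.id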

Step 3:

Launching an EBS volume and mounting it on the /var/www/html folder

resource "aws_ebs_volume" "ebs" {
  availability_zone = aws_instance.web.availability_zone
  size              = 1

  tags = {
    Name = "storage"
  }
}

resource "aws_volume_attachment" "ebs_att" {
  device_name  = "/dev/sdh"
  volume_id    = aws_ebs_volume.ebs.id
  instance_id  = aws_instance.web.id
  force_detach = true
}

resource "null_resource" "nullremote1" {
  depends_on = [aws_volume_attachment.ebs_att, aws_instance.web]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = tls_private_key.example.private_key_pem
    host        = aws_instance.web.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo mkfs.ext4 /dev/xvdh",
      "sudo mount /dev/xvdh /var/www/html",
      "sudo rm -rf /var/www/html/*",
      "sudo git clone https://github.com/MilindRastogi/Terraform_webserver_automation.git /var/www/html/",
    ]
  }
}

The EBS volume is created with the help of the "aws_ebs_volume" resource and attached to the instance with "aws_volume_attachment". I then format the volume and mount it on /var/www/html; for hosting a website with Apache, the code must be deployed inside the /var/www/html folder. (A volume attached as /dev/sdh shows up as /dev/xvdh inside the instance on Amazon Linux, which is why mkfs and mount use that name.)

EBS named as “storage” created
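One caveat: a mount made with the mount command does not survive a reboot. If that matters to you, an extra remote-exec step can append an /etc/fstab entry. A minimal sketch (the nofail option avoids boot problems if the volume is ever missing):

resource "null_resource" "fstab_entry" {
  depends_on = [null_resource.nullremote1]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = tls_private_key.example.private_key_pem
    host        = aws_instance.web.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "echo '/dev/xvdh /var/www/html ext4 defaults,nofail 0 2' | sudo tee -a /etc/fstab",
    ]
  }
}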

Step 4:

Launching an S3 bucket and uploading static data like images from GitHub

resource "aws_s3_bucket" "b" {
  bucket = "milind2000"
  acl    = "public-read"

  tags = {
    Name        = "My bucket"
    Environment = "Dev"
  }
}

resource "null_resource" "nulllocal1" {
  provisioner "local-exec" {
    command = "curl -O https://raw.githubusercontent.com/MilindRastogi/Terraform_webserver_automation/master/Terraform-main-image.jpg"
  }
}

resource "aws_s3_bucket_object" "object" {
  # also depend on the local download so the source file exists before upload
  depends_on = [aws_s3_bucket.b, null_resource.nulllocal1]

  bucket = "milind2000"
  key    = "teraimage.jpg"
  source = "Terraform-main-image.jpg"
  acl    = "public-read"
}

locals {
  s3_origin_id = "S3-milind2000"
}

An S3 bucket is created with the help of the "aws_s3_bucket" resource. For uploading the image to the bucket, I first download it from GitHub to my local system with a "local-exec" provisioner, and then point the "source" argument at the local file; "key" is the object's name inside the bucket. Don't forget that your bucket name must be globally unique. And one more thing: I have made the bucket data publicly readable.

S3 bucket named as “milind2000” created
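If the repo had many images, uploading them one resource at a time would get tedious. A hedged sketch using for_each with the fileset() function (Terraform 0.12.8 or newer; the local images/ folder is hypothetical):

# Upload every .jpg found in a local images/ folder with one resource block.
resource "aws_s3_bucket_object" "images" {
  for_each = fileset("images", "*.jpg")

  bucket = aws_s3_bucket.b.bucket
  key    = each.value
  source = "images/${each.value}"
  acl    = "public-read"
}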

Step 5:

Creating a CloudFront distribution using the S3 bucket

resource "aws_cloudfront_distribution" "s3_distribution" {
  depends_on = [aws_instance.web, aws_s3_bucket_object.object]

  origin {
    domain_name = "milind2000.s3.amazonaws.com"
    origin_id   = local.s3_origin_id
  }

  enabled             = true
  is_ipv6_enabled     = true
  comment             = "Some comment"
  default_root_object = "teraimage.jpg"

  default_cache_behavior {
    allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = local.s3_origin_id

    forwarded_values {
      query_string = false
      cookies {
        forward = "none"
      }
    }

    viewer_protocol_policy = "allow-all"
    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
  }

  # Cache behavior with precedence 0
  ordered_cache_behavior {
    path_pattern     = "/content/immutable/*"
    allowed_methods  = ["GET", "HEAD", "OPTIONS"]
    cached_methods   = ["GET", "HEAD", "OPTIONS"]
    target_origin_id = local.s3_origin_id

    forwarded_values {
      query_string = false
      headers      = ["Origin"]
      cookies {
        forward = "none"
      }
    }

    min_ttl                = 0
    default_ttl            = 86400
    max_ttl                = 31536000
    compress               = true
    viewer_protocol_policy = "redirect-to-https"
  }

  price_class = "PriceClass_200"

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  tags = {
    Environment = "production"
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }
}

To use the CloudFront service, use the "aws_cloudfront_distribution" resource and point its origin at the S3 bucket holding the image. After the distribution is created, CloudFront assigns it a domain name (URL) which can be used in the HTML file.

CloudFront distribution named "WEB" created
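To see that URL without digging through the AWS console, an output block works well. A minimal sketch:

# Print the CloudFront domain name after apply.
output "cloudfront_url" {
  value = aws_cloudfront_distribution.s3_distribution.domain_name
}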

Step 6:

Updating the website code with the CloudFront URL

resource "null_resource" "cloudfront_url" {
  depends_on = [aws_cloudfront_distribution.s3_distribution]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = tls_private_key.example.private_key_pem
    host        = aws_instance.web.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo sed -i '$ a <img src=\"https://${aws_cloudfront_distribution.s3_distribution.domain_name}/${aws_s3_bucket_object.object.key}\" height=\"400\" width=\"400\">' /var/www/html/index.html",
      "sudo systemctl restart httpd",
    ]
  }
}

The above code logs into our EC2 instance to update the HTML file. The sed command appends the <img> tag, which points at the CloudFront URL, to the end of index.html.

Finally!! All the coding is done; now let's move on to executing the code.

Executing the code

To execute the code, make sure all your Terraform files are in one separate folder. After saving the code, run the below commands from that folder in your command prompt or terminal.

terraform init

This command initializes the working directory and installs all the required provider plugins. It is because of these plugins that Terraform can be used for accessing any cloud.

using terraform init

terraform apply -auto-approve

This command executes the Terraform code; the -auto-approve flag skips the interactive confirmation. After this command finishes, the end-to-end website deployment is done.

using terraform apply -auto-approve

So yes, finally our website is deployed!! Here is a glimpse of my website.

Note that I have opened my website using the EC2 instance's IP address. You can also access the website using the Public DNS (IPv4) of the instance.
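If you don't want to copy the address from the console every time, output blocks can print both after apply. A minimal sketch:

# Print the instance address details after apply.
output "instance_public_ip" {
  value = aws_instance.web.public_ip
}

output "instance_public_dns" {
  value = aws_instance.web.public_dns
}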

Deleting your infrastructure

To delete the whole setup, use the terraform destroy -auto-approve command.

using terraform destroy -auto-approve

To know more about Terraform, you can check the official docs.

And here is my GitHub code for further reference.

I hope you liked my article.

Thank you for reading !!