Automating AWS Infrastructure Setup with Terraform

by Chandrakant Rai

July 4, 2017

In this blog post we will see how to set up a simple AWS infrastructure using Terraform. Any company moving its infrastructure to a cloud provider would like to define that infrastructure as code, which gives it immutable infrastructure and allows deployment strategies such as Canary or Blue-Green deployments that are well suited to the cloud. Terraform is a tool by HashiCorp that helps achieve this goal. The same setup could be automated using Python and the native AWS Boto library, but Terraform has no cloud vendor lock-in and can just as easily be used with other providers such as Azure, Google Cloud, or Rackspace if an organization runs a multi-cloud setup.

This is a very simple AWS setup in which we will:

  • Spin up AWS EC2 instances
  • Create a bucket in the AWS S3 service and use it for shared Terraform state storage

Pre-requisites

  1. Ensure you have an AWS account set up
  2. Ensure that you have an access key and secret key

Terraform Installation

Terraform is distributed as a zip archive that supports multiple platforms and architectures. We will install the Windows 64-bit package and unpack it under the C:\terraform folder. Ensure the single “terraform” executable is present in that folder, add the folder to the PATH variable, and verify the installation by opening a new terminal session and typing “terraform”.
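
A quick way to verify the installation from a new terminal session (this assumes the executable was unpacked to C:\terraform and that folder is on PATH):

    C:\> terraform            # with no arguments, prints the list of available commands
    C:\> terraform version    # prints the installed Terraform version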


AWS Setup

For the pre-requisite setup, we have created a user “aws_terraform” with admin privileges and obtained the access key ID and secret key for that user.

As a first step, create an aws.tf file (a Terraform resource template file) under the “C:\terraform\AWS” folder with the following details. The AMI we use below is for Red Hat Enterprise Linux 7.3 as provided on the AWS Marketplace, and the instance type we are going to spin up is t2.micro. For access_key and secret_key, use the values for the user created above.
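
The original aws.tf was shown as a screenshot; below is a minimal sketch of what it contained based on the description above. The resource name “example”, the region, and the placeholder AMI ID are our assumptions, so substitute the actual RHEL 7.3 AMI ID for your region.

    provider "aws" {
      access_key = "YOUR_ACCESS_KEY"   # access key ID of the aws_terraform user
      secret_key = "YOUR_SECRET_KEY"   # secret key of the aws_terraform user
      region     = "us-east-1"         # assumed region; use your own
    }

    resource "aws_instance" "example" {
      ami           = "ami-xxxxxxxx"   # RHEL 7.3 AMI ID for your region
      instance_type = "t2.micro"
    }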


Validate the Terraform file by running the “terraform plan” command. Once we are satisfied with the output, we can use the “terraform apply” command to create the AWS resource, then validate in the AWS dashboard that a t2.micro EC2 instance using the RHEL AMI image has been created.
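
Both commands are run from the folder containing aws.tf (the prompt below assumes the C:\terraform\AWS folder from earlier):

    C:\terraform\AWS> terraform plan    # dry run: shows what Terraform would create
    C:\terraform\AWS> terraform apply   # creates the EC2 instance described above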

[Screenshot: “terraform plan” output]

Output of the “terraform apply” command:

[Screenshot: “terraform apply” output, including the message “The state of your infrastructure has been saved.”]

Also validate in the AWS EC2 dashboard that the instance has been created and is up and running.

[Screenshot: EC2 dashboard showing the running instance]

To add a name to your instance, modify your aws.tf file to add a tag to the EC2 instance (a sketch of the change is shown below) and run the “terraform apply” command again. Note that the EC2 instance now has a name.
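
A sketch of the modified resource block; the tag value “terraform-demo” is just an example name:

    resource "aws_instance" "example" {
      ami           = "ami-xxxxxxxx"   # same AMI and instance type as before
      instance_type = "t2.micro"

      tags = {
        Name = "terraform-demo"        # shown as the instance name in the EC2 console
      }
    }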


[Screenshot: EC2 dashboard showing the instance with its new name]

If you are following a Canary deployment practice in your cloud setup, you could easily spin up a new infrastructure and application stack from the Terraform module, redirect traffic to the new stack via an AWS load balancer, and destroy the old setup by running the “terraform destroy” command. The old instance will be in the terminated state after the destroy command is run. For example, below we spun up a new version 2.0 instance and destroyed the old one.
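
The teardown itself is a single command, run from the folder holding the old stack’s configuration and state:

    C:\terraform\AWS> terraform destroy   # asks for confirmation, then terminates every resource recorded in the state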

[Screenshot: EC2 dashboard showing the new version 2.0 instance and the terminated old instance]

If you look at the output of the “terraform apply” command above, it says “The state of your infrastructure has been saved.” Terraform generates a “terraform.tfstate” file which stores the state/record of our AWS infrastructure: a JSON file recording the current infrastructure created via Terraform. For a demo Terraform project, storing this state file on a local drive is fine, but if you are going to use Terraform for a real setup, state files should be stored on shared storage, and the recommended shared storage on AWS is S3. Below we will show how to create an AWS S3 bucket which can then be used to store Terraform state files.
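
For illustration, here is an abbreviated sketch of the kind of content terraform.tfstate holds; the layout follows the state format of that era and all values are made up:

    {
      "version": 3,
      "terraform_version": "0.9.11",
      "serial": 1,
      "modules": [
        {
          "path": ["root"],
          "resources": {
            "aws_instance.example": {
              "type": "aws_instance",
              "primary": { "id": "i-0123456789abcdef0" }
            }
          }
        }
      ]
    }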

To create the S3 bucket, we modified the aws.tf file to add an “aws_s3_bucket” resource and ran the same “terraform plan” and “terraform apply” commands; a sketch of the addition is shown below. We have also enabled versioning on the S3 bucket, so that older versions of the state file are retained. Once the S3 bucket is created, we can configure Terraform to use it as remote storage for its state files. (Note: S3 bucket names have to be globally unique and follow the AWS naming conventions.)
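
A sketch of the resource added to aws.tf; the bucket name is an example, so pick your own unique name:

    resource "aws_s3_bucket" "terraform_state" {
      bucket = "my-terraform-state-bucket-12345"   # must be globally unique
      versioning {
        enabled = true                             # keep older versions of the state file
      }
    }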


Validate in the AWS console that the bucket has been created. Once validated, we can configure Terraform to use the bucket to store its state files; the configuration is sketched below. (We did not wire this into our demo setup; as noted next, we copied the state file into the bucket manually.)
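
How remote state is configured depends on the Terraform version: releases before 0.9 used the “terraform remote config” command, while 0.9 and later use a backend block plus “terraform init”. A sketch of the backend-block style, reusing the bucket created above:

    terraform {
      backend "s3" {
        bucket = "my-terraform-state-bucket-12345"   # the bucket created above
        key    = "terraform.tfstate"                 # object key for the state file
        region = "us-east-1"                         # assumed region
      }
    }

After adding this block, run “terraform init”; Terraform will then read and write its state in the S3 bucket instead of the local terraform.tfstate file.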

Once the bucket is configured as remote state storage, state files will show up under that S3 bucket, as shown below (in our case we uploaded the state file to the S3 bucket manually).


[Screenshot: terraform.tfstate file stored in the S3 bucket]

This was a short introduction to Terraform and how to use it to automate AWS infrastructure setup. If you would like to read more about this nifty tool, the Gruntwork blog (mentioned in the References section) has detailed articles written by Yevgeniy Brikman, the author of the book “Terraform: Up & Running”. The Terraform documentation also covers each cloud provider in detail, along with all the parameters that can be used in a Terraform resource file.

References

https://www.terraform.io/docs/

https://blog.gruntwork.io