Architecture
Vlad Cenan
25 February 2019
Infrastructure as Code with Terraform | The WHY and the HOW TO
THE WHY
Over the last few years, we’ve started to use Terraform on our projects at Endava, and in this article we’ll explain why we find it to be a great tool for cloud infrastructure management and provide some pointers to help you get started with it yourself.
Let’s start by understanding what Terraform is and why we’ve chosen to use it in our projects. Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. It can manage existing and popular service providers as well as custom in-house solutions, and it covers both low-level components such as compute instances, storage, and networking, and high-level components such as DNS entries, SaaS features, etc. In the big picture, Terraform stands behind the idea of IaC (Infrastructure as Code), where you treat all operations as software in order to create, update and deploy your infrastructure.
To solve our automation needs for building infrastructure, we started looking at the options available for provisioning/orchestration and configuration management tools.
Unlike cloud-specific tools like AWS CloudFormation, which can only be used with a specific cloud provider, Terraform acts as an abstraction layer on top of a cloud platform and so can manage infrastructure across a range of cloud providers.
In Terraform we found many advantages that fit our needs for automating infrastructure. Here are some of the key strong points:
■ Open source
■ Planning phase (dry run – since we declare the desired end state, we can see the actions that will actually be performed before applying them)
■ Simple syntax (HCL or JSON)
■ Parallelism (Terraform builds independent resources across all configured providers in parallel)
■ Multiple providers (cloud platforms)
■ Cloud agnostic (allows you to automate infrastructure stacks from multiple cloud service providers simultaneously and integrate other third-party services)
■ Well-documented
■ Immutable infrastructure (once deployed, changes are made by replacing resources rather than modifying them in place, which improves stability and predictability)
■ Extensible (you can write Terraform plugins to add new, advanced functionality to the platform)
■ Avoids ad-hoc scripts and non-idempotent code
Rather than individual infrastructure resources, Terraform focuses on a higher-level abstraction of the data centre and its associated services, and it is very powerful when combined with a configuration management tool such as Chef or Ansible. It would be ideal to be able to build infrastructure with a single tool, but each has its own strengths and they complement each other well. Terraform has over 60 providers, and the AWS provider alone has over 90 resources, for example. Using Terraform and Chef together can solve the complicated problem of provisioning full infrastructures. In our project we used both to manage the immutable infrastructure for a web application, shown in the image below.
Fig.1 Immutable infrastructure diagram (demo)
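As a rough illustration of that combination, Terraform can hand a freshly provisioned instance over to a configuration management tool. The sketch below is only an assumption of how this might look, using a local-exec provisioner to invoke Ansible against the new instance; the playbook name, SSH user and key path are hypothetical placeholders:

resource "aws_instance" "web" {
  ami           = "ami-2d39803a"
  instance_type = "t2.micro"

  # Hypothetical hand-off: once the instance is up, run an Ansible playbook
  # against its public IP (playbook, user and key path are placeholders).
  provisioner "local-exec" {
    command = "ansible-playbook -i '${self.public_ip},' -u ec2-user --private-key ~/.ssh/id_rsa site.yml"
  }
}

Terraform provisions the instance and records it in state, while the playbook takes care of configuring the software on top of it.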
THE HOW
Having seen what Terraform is and the advantages of using it, let’s look at how simple it is to get started.
Terraform code is written in HCL (HashiCorp Configuration Language) in files with the ".tf" extension, where your goal is to describe the infrastructure you want.
The list of providers for Terraform can be found at https://www.terraform.io/docs/providers/. Providers are typically cloud platforms, and they are configured by adding the following to a main.tf file:
provider "aws" { region = "us-east-1" }
This means you will use the AWS provider and deploy into the us-east-1 region. Each provider exposes its own kinds of resources. Credentials that allow creating and destroying resources can be supplied inside the provider block; alternatively, the tool will use the default credentials in the '~/.aws/credentials' file.
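If you do want to keep credentials next to the provider, the AWS provider also accepts access_key and secret_key arguments. The sketch below uses obvious placeholders rather than real values; in practice, environment variables or the shared credentials file are usually preferred:

provider "aws" {
  region     = "us-east-1"

  # Placeholder credentials only; prefer environment variables or the
  # ~/.aws/credentials file over hardcoding keys in version-controlled code.
  access_key = "AKIAXXXXXXXXXXXXXXXX"
  secret_key = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
}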
By adding the following to main.tf you will deploy an EC2 instance named "example" in your region:
resource "aws_instance" "example" { ami = "ami-2d39803a" instance_type = "t2.micro" }
Having installed Terraform, you can try it out for yourself by typing the following commands in a terminal, in the directory where you created your HCL file:
~]$ terraform init (prepares the working directory for use; it is safe to run multiple times to refresh the working directory configuration)
~]$ terraform plan (a dry run that shows what Terraform will do before you actually run it; in the output, + marks resources to be created, - resources to be destroyed and ~ resources to be modified in place)
~]$ terraform apply (applies the plan and provisions the resources)
In your main.tf file, change the instance type value from "t2.micro" to "t2.medium" and run the "terraform apply" command again to see how easy it is to modify infrastructure with Terraform. The prefix -/+ means that Terraform will destroy and recreate the resource. While some attributes can be updated in place (shown with ~), changing the instance_type or the ami for an EC2 instance requires recreating it. Terraform handles these details for you, and the execution plan makes it clear what Terraform will do.
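For reference, the updated resource block now reads as follows (same example instance, with only the instance type changed):

resource "aws_instance" "example" {
  ami           = "ami-2d39803a"

  # Changed from "t2.micro"; for an EC2 instance this triggers a replacement
  instance_type = "t2.medium"
}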
-/+ aws_instance.example (new resource required)
      instance_type: "t2.micro" => "t2.medium"
Once again, Terraform prompts for approval of the execution plan before proceeding. Answer yes to execute the planned steps. As indicated by the execution plan, Terraform first destroys the existing instance and then creates a new one in its place. You can use terraform show again to see the new values associated with this instance, and the destroy command to tear down the infrastructure:
~]$ terraform destroy
The greatest advantage of using Terraform is automating the provisioning of new servers and other resources. This both saves time and reduces the possibility of human error.
Using Terraform to specify infrastructure as code has been a huge productivity boost for us. We can create deployments for new customers much more quickly and with more consistency than before, and I strongly recommend it.
Vlad Cenan
DevOps Engineer
Vlad is a DevOps engineer with close to a decade of experience across release and systems engineering. He loves Linux, open source, sharing his knowledge and using his troubleshooting superpowers to drive organisational development and system optimisation. And running, he loves running too.