
Architecture | Vlad Cenan |
25 February 2019

Infrastructure as Code with Terraform | The WHY and the HOW TO

THE WHY

Over the last few years, we’ve started using Terraform on our projects at Endava, and in this article we’ll explain why we find it to be a great tool for cloud infrastructure management and provide some pointers to help you get started with it yourself.

Let’s start by understanding what Terraform is and why we’ve chosen to use it in our projects. Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. It can manage existing, popular service providers as well as custom in-house solutions, and it handles both low-level components such as compute instances, storage, and networking, and high-level components such as DNS entries and SaaS features. In the big picture, Terraform stands behind the idea of IaC (Infrastructure as Code), where you treat all infrastructure operations as software in order to create, update, and deploy your environments.

To solve our automation needs for building infrastructure, we started looking at the options available for provisioning/orchestration and configuration management tools.

Unlike cloud-specific tools, like AWS CloudFormation, that can only be used with a single cloud provider, Terraform acts as an abstraction layer on top of a cloud platform and so can manage infrastructure across a range of cloud providers.

In Terraform we found many advantages that fit our needs for automating infrastructure. Here are some of the key strong points:

Open source

Planning phase (dry run – since we specify the end-state, we can see the actions that will actually be performed before committing to them)

Simple syntax (HCL or JSON)

Parallelism (Terraform builds independent resources, across all configured providers, in parallel)

Multiple providers (cloud platforms)

Cloud agnostic (allows you to automate infrastructure stacks from multiple cloud service providers simultaneously and integrate other third-party services)

Well-documented

Immutable infrastructure (once deployed, resources are changed by replacing them rather than modifying them in place, which improves stability and predictability)

Able to write Terraform plugins to add new advanced functionality to the platform

Avoid ad-hoc scripts & non-idempotent code
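
The immutable-infrastructure point above can be sketched in HCL. As an illustrative example (the resource name and AMI ID here are placeholders, not from a real project), Terraform's `create_before_destroy` lifecycle argument tells it to stand up the replacement resource before tearing down the old one:

```hcl
resource "aws_instance" "web" {
  ami           = "ami-2d39803a"
  instance_type = "t2.micro"

  # Build the replacement instance before destroying the old one,
  # so a change never leaves the environment without a server.
  lifecycle {
    create_before_destroy = true
  }
}
```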

Rather than individual infrastructure resources, Terraform focuses on a higher-level abstraction of the data centre and its associated services, and is very powerful when combined with a configuration management tool such as Chef or Ansible. It would be ideal to be able to manage infrastructure with one tool, but each has its own strengths and they complement each other well. Terraform has over 60 providers, and the AWS provider alone has over 90 resources, for example. Using Terraform and Chef together can solve the complicated problem of provisioning full infrastructures. In our project we used both to manage the immutable infrastructure for a web application, shown in the images below.
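
As a minimal sketch of that hand-off (assuming the `aws_instance.example` resource defined later in this article, and Terraform 0.11-era interpolation syntax), a Terraform output can expose the attributes a configuration management tool needs in order to target the new host:

```hcl
# Expose the new instance's public IP so a configuration
# management tool such as Chef or Ansible can be pointed at
# the host Terraform just created.
output "example_public_ip" {
  value = "${aws_instance.example.public_ip}"
}
```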

Fig.1 Immutable infrastructure diagram (demo)

Fig.2 Project Structure

THE HOW

After seeing what Terraform is and the advantages of using it, let’s see how simple it is to start using it.

Terraform code is written in HCL (HashiCorp Configuration Language) in files with the ".tf" extension, where your goal is to describe the infrastructure you want.

The list of providers for Terraform can be found at https://www.terraform.io/docs/providers/. Providers represent cloud platforms and other services, and can be configured by adding the following to a main.tf file:

	provider "aws" {
		region = "us-east-1"
	}

 

This means you will use the AWS provider and deploy in the us-east-1 region. Each provider offers different kinds of resources. Credentials that allow creating and destroying resources can be supplied here inside the provider block, or alternatively the tool will use the default credentials in the ~/.aws/credentials file.
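
As a sketch of the explicit variant (these provider arguments match the 2019-era AWS provider, and the profile name is our own assumption), the provider block can point at a named profile in the shared credentials file:

```hcl
provider "aws" {
  region = "us-east-1"

  # Explicitly point at the shared credentials file and a
  # named profile instead of relying on the defaults.
  shared_credentials_file = "~/.aws/credentials"
  profile                 = "default"
}
```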

By adding the following to main.tf, you will deploy an EC2 instance named "example" in your region:

	resource "aws_instance" "example" {
		ami = "ami-2d39803a"
		instance_type = "t2.micro"
	}

 

Having installed Terraform, to try it out for yourself, type the following commands in the terminal, in the directory where you created your HCL file:

~]$ terraform init (prepares the working directory for use; it is safe to run multiple times to update the working directory's configuration)

~]$ terraform plan (a dry run that shows what Terraform will do before actually running it; in the output, resources to be created are marked with +, resources to be deleted with -, and resources to be modified with ~)

~]$ terraform apply (applies the plan and provisions the resources)

In your main.tf file, change the instance type value from "t2.micro" to "t2.medium" and run the "terraform apply" command again to see how easy it is to modify infrastructure with Terraform. The prefix -/+ means that Terraform will destroy and recreate the resource. While some attributes can be updated in-place (shown with ~), changing the instance_type or the ami of an EC2 instance requires recreating it. Terraform handles these details for you, and the execution plan makes it clear what Terraform will do.

	-/+ aws_instance.example (new resource required)
	    instance_type: "t2.micro" => "t2.medium"
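
The instance type can also be factored into a variable so that this change becomes a one-line edit. This is a sketch using Terraform 0.11-era interpolation syntax; the variable name is our own choice:

```hcl
variable "instance_type" {
  description = "EC2 instance type for the example instance"
  default     = "t2.micro"
}

resource "aws_instance" "example" {
  ami           = "ami-2d39803a"
  instance_type = "${var.instance_type}"
}
```

A later change then only touches the variable default (or is passed with -var on the command line), while the plan still shows the same -/+ replacement.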

 

Once again, Terraform prompts for approval of the execution plan before proceeding. Answer yes to execute the planned steps. As indicated by the execution plan, Terraform first destroys the existing instance and then creates a new one in its place. You can use terraform show again to see the new values associated with this instance, and the destroy command to tear down the infrastructure:

	~]$ terraform destroy

 

The greatest advantage of using Terraform is automating the provisioning of new servers and other resources. This both saves time and reduces the possibility of human error.

Using Terraform to specify infrastructure as code has been a huge productivity boost for us. We can create deployments for new customers much more quickly and with more consistency than before, and I strongly recommend it.

Vlad Cenan

DevOps Engineer

Vlad is a DevOps engineer with close to a decade of experience across release and systems engineering. He loves Linux, open source, sharing his knowledge and using his troubleshooting superpowers to drive organisational development and system optimisation. And running, he loves running too.

 
