
Architecture | Julian Alarcon |
23 July 2019

This post has been created for people who plan to start using Terraform on a project, in the hope that sharing some of my lessons learned will save you some time. And yes, the title is true – I wish I had known most of these lessons before starting to work with Terraform. I have split the 11 lessons across two posts – here is part 2.


Outputs

Outputs expose the information needed after Terraform templates are deployed. They are also used within modules to export information.

output "instance_id" {
  value = "${aws_instance.instance_ws.id}"
}

When used within modules, an output must be defined twice: once inside the module, and again in the configuration that calls the module. These outputs need to be explicitly defined. Output information is stored in the Terraform state file and can be queried by other Terraform configurations.

It's recommended to define outputs for a resource even if you are not using them at the time. Review the attributes the resource exposes and choose wisely which information will be useful for your infrastructure. Doing so reduces the need to go back and edit your module and your resource later, just because a new resource you are defining requires one of those outputs.

Also, to keep your files organised, you can group output definitions in a dedicated file called outputs.tf.
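For example, a dedicated outputs.tf could group several outputs for the instance defined earlier; the instance_private_ip output below is just an illustration of exporting more than one value:

```hcl
# outputs.tf
output "instance_id" {
  value = "${aws_instance.instance_ws.id}"
}

output "instance_private_ip" {
  value = "${aws_instance.instance_ws.private_ip}"
}
```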


Default components

When you use Terraform there is a tendency to start by focusing on the larger components, which means smaller components are sometimes missed, causing frustration and technical debt later. During the creation of components, Terraform uses the default options of your provider for anything you have not predefined. It's important to identify the default components in use and define them explicitly in Terraform: you may need them in the future, and providers can modify their default options without notice, leaving you with two different component sets or with changed component properties.

Examples include route tables, which are often not a focus area at the beginning of a project, or Elastic Container Registry (ECR) repositories, which are easy to define but not always top of mind.

resource "aws_ecr_repository" "repository" {
  name = "name_of_repo"
}
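Route tables are a similar case. Rather than relying on the implicit main route table AWS creates, you can bring it under Terraform's control; a minimal sketch, assuming an aws_vpc.main resource exists elsewhere in your configuration:

```hcl
# Manage the VPC's default route table explicitly
# instead of leaving it to provider defaults
resource "aws_default_route_table" "main" {
  default_route_table_id = "${aws_vpc.main.default_route_table_id}"

  tags {
    Name = "main-route-table"
  }
}
```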


Interpolation syntax

The interpolation syntax is a powerful feature which allows you to reference variables and resource attributes, call functions, and more:

String variables -> ${var.foo}
Map variables -> ${var.amis["us-east-1"]}
List variables -> ${var.subnets[idx]}

When you need to retrieve data from a module output or from the state of particular resources, you can use the module. or the data. syntax to call the desired attributes.

# Getting information from a module
output "my_module_bar_value_from_module" {
  value = "${module.my_module.bar}"
}

# Getting information from a data source
resource "aws_instance" "web" {
  ami           = "${data.aws_ami.my_ami.id}"
  instance_type = "t3.micro"
}

You can also use arithmetic and logical operations with interpolation. In this snippet, if var.something evaluates to true (1, true), the VPN resource will be included:

resource "aws_instance" "vpn" {
  count = "${var.something ? 1 : 0}"
}
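Interpolation also supports built-in functions, which combine nicely with count. A sketch, assuming var.subnets (a list) and var.ami_id are defined as variables:

```hcl
# Launch one instance per subnet, cycling through the
# list with element() and count.index
resource "aws_instance" "worker" {
  count         = "${length(var.subnets)}"
  subnet_id     = "${element(var.subnets, count.index)}"
  ami           = "${var.ami_id}"
  instance_type = "t3.micro"
}
```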

You can find more information about the supported interpolations in the Terraform documentation.


Workspaces in Terraform

This feature was introduced as Environments in Terraform version 0.9; from version 0.10 onwards, it has been renamed to Workspaces.

It is possible to define new workspaces, change workspaces or delete workspaces using the terraform workspace command.

$ terraform workspace -h
Usage: terraform workspace

  Create, change and delete Terraform workspaces.

    delete    Delete a workspace
    list      List workspaces
    new       Create a new workspace
    select    Select a workspace
    show      Show the name of the current workspace

There are a couple of advantages to using Workspaces:

1. They are defined by HashiCorp, so improved features could be developed in the future
2. They reduce code duplication

However, Workspaces also present a couple of challenges:

1. They are still an early implementation
2. They are not yet supported by all backends
3. It is not obvious at deployment time (terraform apply) which workspace will be used; you have to check with terraform workspace show
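Despite these challenges, the name of the active workspace is available in configuration through the terraform.workspace interpolation, which can be used to vary resources per environment. A sketch, assuming a var.ami_id variable:

```hcl
# Size and tag the instance based on the active workspace
resource "aws_instance" "web" {
  ami           = "${var.ami_id}"
  instance_type = "${terraform.workspace == "prod" ? "t3.large" : "t3.micro"}"

  tags {
    Name = "web-${terraform.workspace}"
  }
}
```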

Folders structure

One simple and useful option is to define components inside folders by environment.

├── dev
│   ├── clusters
│   │   └── ecs_cluster
│   │       ├── service01
│   │       │   ├── main.tf
│   │       │   ├── outputs.tf
│   │       │   └── variables.tf
│   │       ├── service02
│   │       ├── service03
│   │       └── service04
│   ├── database
│   │   ├── main.tf
│   │   ├── outputs.tf
│   │   └── variables.tf
│   ├── elasticsearch
│   │   ├── main.tf
│   │   ├── outputs.tf
│   │   └── variables.tf
│   └── vpc
│       ├── main.tf
│       ├── outputs.tf
│       └── variables.tf
├── global
│   ├── web_login
│   │   ├── main.tf
│   │   ├── outputs.tf
│   │   └── variables.tf
│   └── terraform_state
│       ├── output.tf
│       ├── variables.tf
│       └── main.tf
├── prod
│   ├── clusters
│   └── …
├── qa
│   ├── clusters
│   └── …
└── README.md


Advantages:

1. Clear definition of the environment being deployed (in the folder path)
2. The most commonly used and fail-proof option for public deployments
3. Terraform states can be defined for each environment folder with no issues
4. Output names can be specified per environment


Disadvantages:

1. Duplicated code
2. An overwhelming number of folders for larger projects
3. Code must always be copied and core values replaced for each new environment

So, which one should you choose? The folder structure by environment is simpler to use and is currently the recommended way to split the different components of your infrastructure.
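With this layout, each environment folder can point its state at its own key in the backend. A sketch of a backend block, say in dev/vpc/main.tf (the bucket name is illustrative):

```hcl
terraform {
  backend "s3" {
    bucket = "my-terraform-states"
    key    = "dev/vpc/terraform.tfstate"
    region = "us-east-1"
  }
}
```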


Deployment workflow

The terraform command has multiple options, so I wanted to share a recommended workflow for the moment of deployment:

1. Download the modules and force the update

The command terraform init initialises the working directory, downloading the providers and modules and setting up the Terraform state backend. But if a module is already downloaded, Terraform won't recognise that a new version of that module is available. terraform get downloads the modules, and the -update option forces an update of modules that are already present.

terraform get -update

2. Once you have the latest modules, initialise the Terraform working directory to download the providers and modules (already completed through the first command) and set up the Terraform state backend

terraform init
It's also possible to use the -upgrade option to force an update of providers, plugins and modules.

3. Prior to deployment, Terraform can produce a plan of what will be created, modified or destroyed. This plan is useful in a pipeline to check the changes before starting the real deployment.

It's important to note that the plan is not always a 1:1 match with the deployment. If a component is already deployed, or if insufficient permissions are provided, the plan may pass but the deployment could still fail.

terraform plan

To simplify things, it's possible to run all in one bash line: terraform get -update && terraform init && terraform plan

4. In the final deployment step, Terraform will create a plan and ask you to respond yes or no to deploy the desired architecture.

terraform apply

Terraform can't roll back after deployment, so if an error appears during deployment, the issue has to be solved then and there. It is possible to destroy the deployment (terraform destroy), but this destroys everything rather than rolling back the changes.

It's possible to apply a specific change or to destroy a specific resource with the -target option.


Data sources

The use of data sources allows a Terraform configuration to build on information defined outside of Terraform, or defined by another separate Terraform configuration, including:

Data from a remote state, which is useful to read outputs from other Terraform deployments:

data "terraform_remote_state" "vpc_state" {
  backend = "s3"

  config {
    bucket = "ci-cd-terraform-state"
    key    = "vpc/terraform.tfstate"
    region = "us-east-1"
  }
}
Data from AWS or external systems:

data "aws_ami" "linux" {
  most_recent = true

  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-2.0.20180810-x86_64-gp2*"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }

  owners = ["137112412989"]
}
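Once defined, both kinds of data sources can be referenced from resources. In the pre-0.12 syntax, remote state outputs are read directly as attributes; vpc_subnet_id here is a hypothetical output exported by the vpc deployment:

```hcl
resource "aws_instance" "app" {
  ami           = "${data.aws_ami.linux.id}"
  instance_type = "t3.micro"
  subnet_id     = "${data.terraform_remote_state.vpc_state.vpc_subnet_id}"
}
```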



Testing

To test your code, you can use a great tool called Terratest, which uses Go's unit testing framework.

Terraform version 0.12

The next Terraform version was announced a few months ago; it will improve some interpolation features and bring some changes to the HCL language.

Julian Alarcon

DevOps Engineer

Julian is a DevOps engineer who loves open source software culture, sharing his knowledge and working on weird and wonderful projects that get him to think outside the box. He also enjoys learning from new cultures and tries to experience one new thing every day. With a technical background spanning almost 10 years, Julian and his team help bring big ideas to life. He is also a coffee lover from a coffee country, an amateur photographer and a great conversationalist, especially when beer is involved.

