
Architecture | Vlad Cenan | 10 December 2019

After building and managing an AWS serverless infrastructure with Terraform over the last 7 months, I want to share some best practices, along with some common mistakes I ran into along the way.

Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can manage existing and popular service providers as well as custom in-house solutions.

Terraform's purpose on this project was to provide and maintain one workflow to provision our AWS Serverless Stack infrastructure. The following is a collection of lessons that I have personally learned, and a few tricks and workarounds that I have come across to get Terraform to behave the way I wanted.

What is Serverless?

Serverless computing is a cloud computing model in which a cloud provider automatically manages the provisioning and allocation of compute resources.

This implementation of serverless architecture is called Functions as a Service (FaaS).

Why Serverless?

Using serverless has a lot of advantages, especially when the client is focused on cutting costs: you pay only for execution count and duration, and there is no server, operating system or installation to maintain. Serverless also offers automated high availability, which reduces the time spent architecting and configuring these capabilities.

Some disadvantages that we faced included I/O bottlenecks that we couldn’t replicate and the lack of visibility in debugging our application flows.

Best Practices

Secrets Management

Hardcoded passwords are not only dangerous because of the threat of hackers and malware; they also prevent code reusability.

AWS Secrets Manager is a great service for managing secrets: storing, retrieving and rotating the secret keys shared between resources.

Please heed this advice and store your secrets and keys in a secret manager tool, not on a laptop or hardcoded in Git.
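As an illustrative sketch (the secret name and resource labels here are assumptions), a secret stored in AWS Secrets Manager can be read through the AWS provider's data sources instead of being hardcoded:

```hcl
# Look up the secret and its current value at plan/apply time.
data "aws_secretsmanager_secret" "db_password" {
  name = "prod/app/db_password"
}

data "aws_secretsmanager_secret_version" "db_password" {
  secret_id = "${data.aws_secretsmanager_secret.db_password.id}"
}

# Reference the value where needed instead of a hardcoded string, e.g.:
#   password = "${data.aws_secretsmanager_secret_version.db_password.secret_string}"
```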


Lambda Versioning

To create a Lambda function, the deployment package needs to be a .zip archive of your code and any dependencies. The archives are uploaded to an S3 bucket under a timestamp-based key.

In my case, the timestamp serves as the version of the Lambda function and as the key Terraform uses to deploy the proper one:

```shell
vcenan@devops:~$ aws s3 ls s3://bucket/lambda-function/
                           PRE v2019-03-01345
                           PRE v2019-03-01345
                           PRE v2019-03-01345
                           PRE v2019-03-01345
                           PRE v2019-03-01345
```
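The timestamp-based key can be generated at build time; a minimal sketch (bucket and file names are illustrative):

```shell
# Build a version key from the current timestamp.
VERSION="v$(date +%Y-%m-%d%H%M)"
echo "${VERSION}"

# The deployment package would then be uploaded under that key, e.g.:
#   zip -r function.zip main.py
#   aws s3 cp function.zip "s3://bucket/lambda-function/${VERSION}/function.zip"
```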

To track the latest builds, a manifest file was added to the project which is constantly updated with every build and tagged based on releases.

From a security perspective, I recommend S3 Server-Side Encryption to protect sensitive data at rest. Data in transit to S3 is encrypted with TLS by default.
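Enabling default server-side encryption on the artifact bucket can be sketched as follows (the bucket name and resource label are illustrative):

```hcl
resource "aws_s3_bucket" "artifacts" {
  bucket = "bucket"

  # Encrypt every object at rest by default.
  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "aws:kms"
      }
    }
  }
}
```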


Code Structure

Adopt a microservice strategy: store the Terraform code for each component in separate folders or configuration files.

Instead of setting all dependencies for a resource into one configuration file, break it down into smaller components.

In the example below, the Lambda function resource takes its IAM role from another Terraform configuration file, iam.tf (the file responsible for creating all the roles for AWS resources), which in turn reads the role's trust policy from a .json file:

vcenan@devops:~$ cat lambda.tf

```hcl
resource "aws_lambda_function" "example" {
  function_name = "${var.environment}-${var.project}"
  s3_bucket     = "${var.s3bucket}"
  s3_key        = "v${var.app_version}/${var.s3key}"
  handler       = "main.handler"
  runtime       = "python3.7"
  role          = "${aws_iam_role.lambda_exec.arn}"
}
```

vcenan@devops:~$ cat iam.tf

```hcl
# IAM role which dictates what other AWS services the Lambda function may access.
resource "aws_iam_role" "lambda_exec" {
  name               = "${var.environment}_${var.project}"
  assume_role_policy = "${file("iam_role_lambda.json")}"
}
```

vcenan@devops:~$ cat iam_role_lambda.json

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Action": "sts:AssumeRole",
    "Principal": { "Service": "lambda.amazonaws.com" },
    "Effect": "Allow",
    "Sid": ""
  }]
}
```


This helps with debugging and re-using components, and of course gives better visibility.

Passing Variables

Just like any tool or language, Terraform supports variables. All of the typical data types are supported which makes them really useful and flexible. An input variable can be used to populate configuration input for resources or even determine the value of other variables.
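For example, the input variables used throughout this post, such as environment and project, would be declared roughly as follows (the descriptions and default are illustrative):

```hcl
variable "environment" {
  description = "Target environment, e.g. dev or prd"
  default     = "dev"
}

variable "project" {
  description = "Project name used in resource naming"
}
```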

In the example below, we read the remote state (which holds the resources and metadata of the infrastructure created by the already-deployed flow) and map it into our current flow, which will deploy the triggers.

vcenan@devops:~$ cat outputs.tf

```hcl
# Get Remote State
data "terraform_remote_state" "distribution" {
  backend = "s3"

  config {
    bucket = "s3bucket-name"
    region = "eu-west-2"
    key    = "terraformState/dev/distribution.tfstate"
  }
}
```

vcenan@devops:~$ cat s3notification.tf

```hcl
# S3 Notification
resource "aws_s3_bucket_notification" "event_notification" {
  bucket = "${var.s3_store}"

  lambda_function {
    lambda_function_arn = "${data.terraform_remote_state.distribution.distribution-lambda-function-arn}"
    events              = ["s3:ObjectCreated:Put"]
    filter_prefix       = "${var.s3_event_distribution}"
  }
}
```

By defining an output for the target Lambda, we can reference the ARN (Amazon Resource Name) of the Lambda created in that module.

vcenan@devops:~$ cat outputs.tf

```hcl
output "distribution-lambda-function-arn" {
  value = "${aws_lambda_function.distribution-lambda-function.arn}"
}
```

If you want a module output or a resource attribute to be accessible via a remote state, you must thread the output through to a root output.

```hcl
module "app" {
  source = "..."
}

output "app_value" {
  value = "${module.app.value}"
}
```

Modules

When it comes to more complex architectures, such as multiple environments with the same resources, things can get unwieldy. To avoid copying files between environments, which leads to redundancy, inconsistency and inefficiency, use Terraform modules.

Terraform’s way of creating modules is very simple: create a directory that holds a selection of .tf files. That module can be called in each of the environment modules.

vcenan@devops:~$ cat elasticache/main.tf

```hcl
resource "aws_elasticache_replication_group" "elasticache-cluster" {
  availability_zones    = ["us-west-2a", "us-west-2b"]
  replication_group_id  = "tf-rep-group"
  node_type             = "${var.node_type}"
  number_cache_clusters = "${var.number_cache_clusters}"
  parameter_group_name  = "default.redis3.2"
  port                  = 6379
}
```

vcenan@devops:~$ cat environments/dev/main.tf

```hcl
module "dev-elasticache" {
  source                = "../../elasticache"
  number_cache_clusters = 1
  node_type             = "cache.m3.medium"
}
```

This way we can also make our modules configurable, in case we need different parameters in the production environment.

vcenan@devops:~$ cat environments/prd/main.tf

```hcl
module "prd-elasticache" {
  source                = "../../elasticache"
  number_cache_clusters = 3
  node_type             = "cache.m3.large"
}
```

API Gateway

For a cleaner setup, use Swagger to define your API Gateway:

  • it keeps your Terraform code more concise;
  • it gives a clear overview of the API definition in an online Swagger editor.

This can be done simply with the aws_api_gateway_rest_api Terraform resource, which references the Swagger definition in its body. The template_file data source lets you fill the Swagger body with values from other Terraform resources or outputs, rendered at apply time.
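A minimal sketch of this wiring, assuming a swagger.yaml template that exposes a lambda_invoke_arn placeholder (the file name, placeholder and resource labels are illustrative):

```hcl
# Render the Swagger definition, injecting values from other resources.
data "template_file" "api_definition" {
  template = "${file("swagger.yaml")}"

  vars {
    lambda_invoke_arn = "${aws_lambda_function.example.invoke_arn}"
  }
}

# Create the REST API from the rendered Swagger body.
resource "aws_api_gateway_rest_api" "api" {
  name = "${var.environment}-${var.project}"
  body = "${data.template_file.api_definition.rendered}"
}
```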

Unit Tests

A good way to test your infrastructure is the awspec Ruby gem, a plugin that tests your AWS resources.

In the example below, the unit tests pass for our deployed AWS DynamoDB table:

```
dynamodb_table 'metadata'
  should exist
  should be active
  should have key schema "ProcessName"

  should eq 5

  should eq 5

  should eq 29
```
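A spec producing output like the above would look roughly like this, a sketch based on awspec's DynamoDB resource type (the capacity and count expectations from the truncated output are not reproduced here):

```ruby
require 'awspec'

describe dynamodb_table('metadata') do
  it { should exist }
  it { should be_active }
  it { should have_key_schema('ProcessName') }
end
```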

A disadvantage is that not all AWS resources are covered, Athena and Step Functions for example, which makes tests for those resources time-consuming to develop.


Debugging

Terraform produces detailed logs, which can be enabled through the TF_LOG environment variable:

```shell
export TF_LOG=DEBUG
export TF_LOG_PATH=./terraform.log
```

Other log levels are available (TRACE, DEBUG, INFO, WARN and ERROR); in this example, Terraform saves the debug logs from the session to the terraform.log file in the current directory.

Terraform State

Terraform must store state about your managed infrastructure and configuration. Backends are responsible for storing the state and providing an API for state locking.

If you want to use Terraform as a team on a product, you will need to enable the state locking feature in the backend, which prevents concurrent Terraform runs against the same state. The example below uses Terragrunt's remote_state block, where path_relative_to_include() keys each component's state file by its path:

```hcl
remote_state {
  backend = "s3"

  config = {
    bucket         = "s3-bucket"
    key            = "${path_relative_to_include()}/terraform.tfstate"
    region         = "eu-west-1"
    encrypt        = true
    dynamodb_table = "locks"
  }
}
```
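For state locking to work, the locks table referenced above must exist as a DynamoDB table with a LockID string hash key, which is what the S3 backend requires; a sketch, with illustrative capacity values:

```hcl
resource "aws_dynamodb_table" "terraform_locks" {
  name           = "locks"
  hash_key       = "LockID"
  read_capacity  = 1
  write_capacity = 1

  # The S3 backend expects a string attribute named LockID.
  attribute {
    name = "LockID"
    type = "S"
  }
}
```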

Vlad Cenan

DevOps Engineer

Vlad is a DevOps engineer with close to a decade of experience across release and systems engineering. He loves Linux, open source, sharing his knowledge and using his troubleshooting superpowers to drive organisational development and system optimisation. And running, he loves running too.

