Keeping Up With The Norm In An Era Of Software Defined Everything

Architecture | Armen Kojekians | 09 April 2019

Cloud technology is here and it is here to stay, irrespective of ongoing debates about its usability, security and ease of adoption. As more businesses change their technology landscape to include public cloud services, it is fair to say that the shift to public cloud is becoming the norm. There is no doubt that data protection, compliance and security are top priorities on public cloud vendor agendas, which leaves us with the perception that we can finally shift our focus to the higher layers in the stack: application flow and business logic, leaving other aspects of security and infrastructure management to the cloud service provider.

That all looks very promising, but according to the shared responsibility model from one of the leading vendors, the customer still assumes a good part of the responsibility for managing the applications and services run “in” the cloud. No doubt we still have some work to do, and that is where most of us still need help understanding the boundaries of responsibility and the extra effort that needs to be invested to make applications and services secure.

So what has changed about our work in the cloud environment, and how does that differ from how we have worked in the past? A typical data centre is built using traditional infrastructure: compute, networking and storage assembled and set up using vendor-recommended best practices, and on top of that the owner of the data centre also has to provide physical security. In a shared hosting model, the hosting provider assumes all the costs associated with operating that environment, including the physical security costs. Adding a new client to a data centre entails upfront planning, design, capital expenses (bare-metal infrastructure provisioning) and operational costs. These factors impact both the timeframes and the setup costs, and at the current rate of change in technology, when time to market is key, that can be a decisive factor.

Moving forward to virtualisation and public cloud, some of that burden was taken on by the cloud service providers. Provisioning now happens mostly on virtualised platforms and you can do it at the click of a button. We are now living in the era of Software Defined Everything (SDE). Yes, this term is now used relatively frequently; and there I was, thinking I had coined a new phrase!

How does the new SDE concept help me, as a cloud adopter, to manage my infrastructure more efficiently and in a secure way?

For a start, you can plan and manage your infrastructure and services to react to external factors such as service utilisation; you can build a layer of abstraction between your application layer and storage; and you can define and enforce your own network topology, access control, routing paths and network segmentation.

Network segmentation is not new as a concept; traditionally it was achieved using physical network devices such as firewalls, switches and routers, which allowed administrators to define and route packets between devices in the same subnet or logical broadcast domain (VLAN). The cloud offers the flexibility and freedom to forget traditional VLAN constructs and build a logical definition of a network. Now every customer can sit on a single VLAN and still have an intelligent way of controlling the communication between different assets in that network. We have entered the age of Virtual eXtensible LANs (VXLANs), which bring new opportunities such as micro-segmentation.

For those who are new to the concept, micro-segmentation allows you to consolidate workloads with different security needs into separate groups of concern (as shown in Figure 1). As such, it enables two virtual machines on the same network or hypervisor host to operate under independent security contexts. In other words, MachineA can be a web server only allowing access on port 80, and MachineB can be a back-end database only allowing port 5432 from the web server. Each of those machines has a defined security context wrapped around it. This can be achieved using the 5-tuple principle, meaning you control access to those assets based on five attributes: source IP, source port, destination IP, destination port and protocol. This has to work at scale, and the security policies have to be distributable across physical networks. Sure, in the past we had big firewall and switching hardware guarding a cluster of machines, but making a change like the one above was considered high-risk and more complex than it needed to be. Now we are moving towards virtualised firewalls managed through APIs and UI-based managers.
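To make the 5-tuple idea concrete, here is a minimal sketch in Python of how a default-deny policy engine might match a flow against such rules. The Rule type, addresses and ports are hypothetical; real cloud providers express the same idea declaratively, for instance as security group rules.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass(frozen=True)
class Rule:
    """One 5-tuple rule: the five attributes used to match a network flow."""
    src_ip: str
    src_port: Optional[int]  # None acts as a wildcard (any ephemeral source port)
    dst_ip: str
    dst_port: int
    protocol: str  # "tcp" or "udp"

def is_allowed(rules: List[Rule], src_ip: str, src_port: int,
               dst_ip: str, dst_port: int, protocol: str) -> bool:
    """Default deny: a flow passes only if some rule matches all five attributes."""
    for r in rules:
        if (r.src_ip == src_ip
                and (r.src_port is None or r.src_port == src_port)
                and r.dst_ip == dst_ip
                and r.dst_port == dst_port
                and r.protocol == protocol):
            return True
    return False

# MachineB (the database) accepts 5432/tcp only from MachineA (the web server).
rules = [Rule(src_ip="10.0.1.10", src_port=None,
              dst_ip="10.0.2.20", dst_port=5432, protocol="tcp")]

print(is_allowed(rules, "10.0.1.10", 40001, "10.0.2.20", 5432, "tcp"))  # True
print(is_allowed(rules, "10.0.9.99", 40001, "10.0.2.20", 5432, "tcp"))  # False
```

The key design point is the default deny: any flow not explicitly matched by a rule is dropped, which is what wraps each machine in its own security context.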

 

Figure 1. Network segmentation before and after micro-segmentation

In the new world, you can route traffic from VM1 to VM2 at the hypervisor level, without needing to route it all the way up to a traditional router. That saves on resource utilisation, network hops and latency.

BEYOND MICRO-SEGMENTATION

The concept of micro-segmentation has been actively used in shared hosting and public clouds for some time. Virtualisation and automation concepts continue to advance, and we are now entering a time where containerisation is the next norm. With this, however, come new challenges; but before we move to those, let’s take a step back and talk about what a container is.

In simple terms, a container effectively packs an application, its configuration and its dependencies into a single object, making that application portable and allowing multiple instances and versions of it to be deployed and run on the same machine or distributed across a number of environments (a minimal sketch of this follows below). Put differently, taking the same example of MachineA from above, we can run, say, 20 containers (applications) inside this single virtual machine, each doing a similar task or something completely different. With this freedom, however, come a few security considerations, illustrated in the figures that follow.
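As a concrete, if simplified, illustration, the sketch below uses the Docker SDK for Python (pip install docker) to start several instances of the same packaged application on a single machine, each on its own host port. The image and container names are just examples, and it assumes a local Docker daemon is available.

```python
import docker

client = docker.from_env()  # connect to the local Docker daemon

containers = []
for i in range(3):  # scale this count up to 20 just as easily
    c = client.containers.run(
        "nginx:alpine",              # the packaged application plus its dependencies
        name=f"web-{i}",             # hypothetical names, for illustration only
        detach=True,
        ports={"80/tcp": 8080 + i},  # each instance gets its own host port
    )
    containers.append(c)

for c in containers:
    print(c.name, c.status)
```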

Figure 2. Communication between containers on the same host

As we can see in Figure 2, containers can connect to each other inside the same virtual host machine, making their communication totally invisible to traditional firewalls and networking tools. That makes it hard for operations and security teams to monitor and inspect the traffic in order to identify malicious use of resources or attempts at lateral movement across the network.

In the diagram below you can see a topology in which application containers communicate across virtual machines; from the outside it looks as if the virtual machines themselves are communicating, and it is challenging to identify which applications are in fact initiating that communication. This also limits the ability to understand and control traffic at a granular level.

 

Figure 3. Communication between containers across different hosts

Containers may also need to communicate with other machines, PaaS services or shared storage: for example persistent storage, databases or services accessed either internally or via the internet. That calls for careful consideration of access controls and good practices to limit the attack surface, and for a robust and efficient communication and routing strategy in line with security best practices, including secure-by-design principles.

The diagram below depicts an example of how complex that landscape can be; the concept of micro-segmentation can route the communication efficiently to allow for the information exchange needed for those purposes.

Figure 4. Communication into and within the container world

ENTER NANO-SEGMENTATION

Nano-segmentation steps in and reaches the places micro-segmentation cannot get to. Any communication happening within the virtual machine can be intercepted and analysed before it is considered a valid communication stream. You can now group the applications in your containers into logical groups and apply security policies to them. The complexity is that you now have to manage a vast number of policies and (virtual) firewalls. Remember, a container is essentially just a process inside a virtual machine trying to communicate with another container (another process)!
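A minimal sketch of that grouping idea, with hypothetical container labels and policies: containers are bucketed into logical groups by label, and a connection between two containers is valid only if a group-level policy allows it (default deny).

```python
from collections import defaultdict

# Hypothetical inventory: each container carries a label naming its logical group.
containers = [
    {"name": "web-1", "labels": {"tier": "frontend"}},
    {"name": "web-2", "labels": {"tier": "frontend"}},
    {"name": "db-1",  "labels": {"tier": "database"}},
]

# Policies target logical groups, not individual containers or IP addresses.
policies = {
    ("frontend", "database"): {"allow_ports": [5432]},  # web tier may reach the DB
}

def allowed(src: dict, dst: dict, port: int) -> bool:
    """A container-to-container flow is valid only if a group-level policy permits it."""
    key = (src["labels"]["tier"], dst["labels"]["tier"])
    policy = policies.get(key)
    return policy is not None and port in policy["allow_ports"]

# Derive the logical segmentation from the labels.
groups = defaultdict(list)
for c in containers:
    groups[c["labels"]["tier"]].append(c["name"])
print(dict(groups))

print(allowed(containers[0], containers[2], 5432))  # True: policy match
print(allowed(containers[2], containers[0], 80))    # False: no policy, default deny
```

The design point is that policies follow the labels, not the hosts, so the same small set of group rules covers however many container instances (processes) come and go.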

There is a need for an orchestration layer that lets you manage the security policies and the logical segmentation of your application estate, and that makes this consistent, relatively easy to use and automated.

One company which has an answer to this challenge is Twistlock. Not only have they managed to solve this problem at OSI layer 3, they have also implemented a layer 7 firewall: CNAF (Cloud Native Application Firewall).

I am not going to go into a great level of detail, as that could well be a topic for another article, but at a high level the Twistlock architecture has four main components:

■ Intelligence Stream: threat and vulnerability information related to operating system and image security, gathered from around 30 upstream commercial providers; examples of these providers are CIS and the National Vulnerability Database.
■ Console: used for management and also the API endpoint for integration. All policies are defined within the console and then pushed to the defenders.
■ Defenders: deployed on every host within the environment. For example, if a customer is running a Kubernetes cluster, a defender will be deployed on each worker node. It is this link between the console and the defenders through which all policies are pushed down and used to protect the customer’s workloads.
■ CI plugins: Twistlock has a native Jenkins plugin which gives rich visualisation. If your pipeline is built on another CI tool, Twistlock comes with a CLI, twistcli, which helps you integrate with those platforms.

Figure 5. Twistlock Architecture


Other awesome features include:

■ image scanning for vulnerabilities and compliance throughout the lifecycle of the container;
■ automatically learned runtime protection;
■ automatic learning and segmentation of the network, as well as its web application firewall.

In Figure 6 we show how the Twistlock CNAF fits into our cloud environment. Requests are received over the host network and pass through the container network and into Twistlock before reaching our application. This lets Twistlock pass valid traffic through to our application (as shown in the top part of the diagram) or block undesirable traffic (as shown in the bottom part of the diagram).
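For illustration only (this is emphatically not Twistlock’s implementation), a toy layer-7 check might inspect the request content itself, rather than just the 5-tuple, before forwarding it to the application; here a couple of crude SQL-injection signatures stand in for real detection logic.

```python
import re

# Crude illustrative signatures; a real layer-7 firewall uses far richer detection.
SQLI_PATTERNS = [re.compile(p, re.IGNORECASE) for p in (
    r"union\s+select",
    r"'\s*or\s+'1'\s*=\s*'1",
    r";\s*drop\s+table",
)]

def inspect_request(method: str, path: str, body: str) -> bool:
    """Return True to forward the request to the application, False to block it."""
    payload = f"{path} {body}"
    return not any(p.search(payload) for p in SQLI_PATTERNS)

print(inspect_request("GET", "/products?id=42", ""))                 # True: forwarded
print(inspect_request("GET", "/products?id=42 union select *", ""))  # False: blocked
```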

 

Figure 6. Twistlock CNAF in action in our cloud environment

To conclude, we have seen how the rapid migration of applications to public cloud platforms can change the nature of our infrastructure work. While some aspects of traditional infrastructure work disappear, other tasks change and some aspects of cloud deployment and operation are new and unfamiliar. We’ve explored some of the key technologies and ideas that we need to master in order to provide robust application environments in a public cloud and hopefully provided a few pointers to guide your journey in this direction.

Armen Kojekians

Senior Infrastructure Consultant

Armen enjoys working on all things Distributed and Cloud, helping clients embrace innovation while moving existing workloads or building new products. The only clouds Armen doesn't like are the kind that keep him and his two kids indoors. He also enjoys experimenting with cooking, but admits it doesn't always produce Master Chef style results.

 
