<img height="1" width="1" style="display:none;" alt="" src="https://px.ads.linkedin.com/collect/?pid=4958233&amp;fmt=gif">
 
RSS Feed

Architecture | Radu Vunvulea |
30 May 2019

All of us have heard about the success of Netflix, Uber or eBay. All these companies build highly resilient Internet-facing platforms using a fault-tolerant approach based on microservices and serverless computing. Nowadays, businesses are trying to replicate their success and build systems that take advantage of this new way of designing software solutions.

Systems built today are significantly more complex than those of 10 or 20 years ago. Not only is the complexity higher, but the NFRs and SLAs are much more difficult to achieve. Customers expect availability that is effectively 100%, delivered using less money and smaller teams. This can only be achieved with a new approach to software development that is likely to harness microservices and serverless computing.

We are experiencing a moment in time when architects and technical leads are starting to see the advantages that serverless and microservices bring, and they are trying to adopt these approaches as quickly as possible to replicate the success of other companies like Amazon or Netflix.

In many situations this leads to an argument between microservices fans and those advocating serverless computing. These two groups often advocate passionately for their own favourite technology and won’t consider how the two can be used together, instead insisting that solutions need to follow strict rules of one approach or the other.

Of course, the optimum solution is often somewhere between these two extremes. Improvements in serverless computing are allowing us to do things that were not possible before, and the boundary separating the two approaches has become blurred, sometimes creating confusion when we need to decide on the right approach. A serverless system can now run on any cloud provider and even on-premises. This, combined with the possibility of running a serverless function for hours, can make the two approaches look very similar and the choice between them difficult.

Figure 1 - A Hybrid Serverless and Microservices Approach

To achieve success with these new technologies, we need to develop new software development tools and practices, and this requires a thorough understanding of the advantages and disadvantages of each new paradigm. Only then can we design systems that take full advantage of them, and confidently define blueprints for how we migrate to use these technologies.

Serverless

Serverless is a new style of cloud computing that focuses the developer’s attention on their business functionality and away from infrastructure concerns. In this article we are specifically interested in the “Function as a Service” (FaaS) rather than the “Backend as a Service” (BaaS) variant of serverless computing. Using FaaS involves writing a function that implements a specific API and encapsulates your business logic, which is then deployed to a cloud service such as AWS Lambda, Google Cloud Functions, Azure Functions or IBM OpenWhisk. The function is event-triggered and, in general, the context for the function exists only during the execution: it is ephemeral and not retained between invocations of the function. The infrastructure required to execute the function is managed entirely by the cloud provider, and you only pay for the number of executions of the function and the compute power it consumes.
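To make this concrete, a FaaS function is typically just a handler with a platform-defined signature. The sketch below assumes the AWS Lambda Python runtime and a hypothetical order payload; it is an illustration, not a prescribed implementation:

```python
import json

# Minimal AWS Lambda-style handler. The (event, context) signature is
# defined by the platform; the order-processing logic is illustrative.
def handler(event, context):
    # 'event' carries the trigger payload (e.g. an HTTP request body or a
    # queue message); nothing here survives between invocations.
    order = json.loads(event.get("body", "{}"))
    total = sum(item["price"] * item["qty"] for item in order.get("items", []))
    return {
        "statusCode": 200,
        "body": json.dumps({"orderId": order.get("id"), "total": total}),
    }
```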

Microservices

Nowadays microservices are a well-known architectural pattern that structures an application as a collection of loosely coupled services that are independently deployable, maintainable and testable, each structured around a key concept in the business domain of the system. This approach enables us to manage complex and large systems using continuous deployment and delivery, where individual services can evolve and transform at their own pace.
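As a minimal sketch of this idea, the hypothetical service below owns a single business concept and can be built, deployed and scaled on its own (Flask is just one possible framework choice):

```python
from flask import Flask, jsonify

app = Flask(__name__)

# Illustrative in-memory data; a real service would own its own datastore.
ORDERS = {42: {"id": 42, "status": "shipped"}}

@app.route("/orders/<int:order_id>")
def get_order(order_id):
    # The service exposes a small, stable interface around one domain concept.
    order = ORDERS.get(order_id)
    if order is None:
        return jsonify(error="not found"), 404
    return jsonify(order)

if __name__ == "__main__":
    app.run(port=8080)  # deployed, versioned and scaled independently
```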

Pros and Cons of Serverless

Starting with the advantages of serverless computing, we have summarized the most important ones in Figure 2.

One of the significant advantages of the serverless approach is the execution cost. It tends to incur lower cloud computing fees than microservices, because we do not pay for all the cluster nodes continually; we pay for what we consume. However, when you do the calculations for long-running functions, you will find that the costs can be similar to, or even higher than, those of microservices. Cost forecasting is not an easy job when you do not have fixed costs, and your estimation process needs to consider its assumptions carefully to avoid the occasional unexpected explosion in cost.
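A back-of-the-envelope calculation illustrates the trade-off; the prices below are placeholder assumptions, not current rates for any provider:

```python
# Illustrative cost model (all prices are placeholder assumptions).
GB_SECOND_PRICE = 0.0000167   # per GB-second of function execution
REQUEST_PRICE = 0.20 / 1e6    # per function invocation
VM_MONTHLY_PRICE = 70.0       # small always-on node hosting a microservice

def faas_monthly_cost(invocations, avg_seconds, memory_gb):
    compute = invocations * avg_seconds * memory_gb * GB_SECOND_PRICE
    return compute + invocations * REQUEST_PRICE

# Occasional, short-lived work: serverless is far cheaper than a fixed node.
print(faas_monthly_cost(100_000, 0.5, 0.5))   # ~$0.44 vs $70.00

# Heavy, long-running work: the pay-per-use model overtakes the fixed price.
print(faas_monthly_cost(5_000_000, 30, 1.0))  # ~$2,506 vs $70.00
```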

Figure 2 - Serverless Advantages

Where serverless computing works very well is in situations with many short-lived functions invoked occasionally or unpredictably. In general, the recommendation is to have functions that run for only a few seconds. Many cloud providers have hard limits of around 300 seconds (although this limit can often be raised after a discussion with the cloud provider).

A more recent development that gives us some new options is to use “durable” functions that allow multiple function invocations to be chained together. This makes the implementation of logic with external dependencies much simpler and avoids us having to introduce message and event-based communication to link our functions together.
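As a sketch of how such chaining looks, the orchestrator below uses the Python programming model of Azure Durable Functions; the activity names are hypothetical placeholders:

```python
import azure.durable_functions as df

# Orchestrator sketch: chains three activity functions in sequence, with the
# framework persisting state between steps. Activity names are hypothetical.
def orchestrator_function(context: df.DurableOrchestrationContext):
    order = context.get_input()
    validated = yield context.call_activity("ValidateOrder", order)
    payment = yield context.call_activity("ChargePayment", validated)
    shipped = yield context.call_activity("ShipOrder", payment)
    return shipped

main = df.Orchestrator.create(orchestrator_function)
```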

From the perspective of the developer and administrator, setting up a serverless environment is also straightforward compared to many other approaches in cloud computing. In comparison with a microservice approach specifically, serverless runtime environments take care of many details that we would otherwise need to be aware of, including cluster size, system load and how the microservices communicate with each other and with the external world. Even so, it is important to manage a serverless environment carefully: because spinning up new instances of a serverless function is so easy, we can end up forgetting about them and losing control.

Another architectural advantage promised by serverless computing is dynamic scaling, which allows us to focus primarily on the business problem and our code and less on ensuring the scalability of our runtime environment. Today, the dynamic scaling aspects of serverless platforms work very well and are one of their strongest features, providing on-demand scalability for our customers’ needs without the need to design a complex solution. However, again, nothing is entirely cost free: when using dynamic scaling we still need to consider how errors or overload conditions in parts of our system could cause runaway scaling, and configure scalability boundaries in our system to avoid such problems.
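For example, on AWS one way to set such a boundary is to cap a function’s concurrency; below is a sketch using boto3, with a hypothetical function name:

```python
import boto3

# Cap how many instances of the function may run at once, so that an
# upstream error or traffic spike cannot scale (and bill) without bound.
# The function name is a hypothetical placeholder.
lambda_client = boto3.client("lambda")
lambda_client.put_function_concurrency(
    FunctionName="process-order",
    ReservedConcurrentExecutions=50,
)
```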

Turning to consider the possible “cons” of serverless computing, we have summarized the main ones using the diagram in Figure 3.

One of the most obvious cons is the potential complexity involved in combining the simple pieces of code in our serverless functions with the rest of the system. An example is how to expose our functions as API services. When we deploy a serverless function, it is only available as a private API. To make it available to other people via a public API, we need to use an API Gateway. Typically, we use the API gateway provided by the cloud provider, which allows us to expose our functions to the public internet over an HTTP-based protocol. This is relatively straightforward but is another area of configuration that we must understand, master and perform correctly.
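As an illustration of that configuration, the sketch below assumes the AWS CDK (v1) for Python; the construct names, runtime and asset path are illustrative:

```python
from aws_cdk import core
from aws_cdk import aws_apigateway as apigw
from aws_cdk import aws_lambda as _lambda

# Sketch: wire a function to an API gateway so it is reachable over
# public HTTP. All names and paths are illustrative.
class OrdersApiStack(core.Stack):
    def __init__(self, scope, construct_id, **kwargs):
        super().__init__(scope, construct_id, **kwargs)
        fn = _lambda.Function(
            self, "OrdersFn",
            runtime=_lambda.Runtime.PYTHON_3_8,
            handler="handler.handler",           # module.function inside src/
            code=_lambda.Code.from_asset("src"),
        )
        # Routes all incoming HTTP requests to the function.
        apigw.LambdaRestApi(self, "OrdersApi", handler=fn)
```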

The inherent simplicity of most serverless functions means that they tend to have a fairly high level of external dependency on a variety of libraries from the technical ecosystem of the language in use, and we may also need libraries from other technology ecosystems. Managing these third-party libraries in a serverless environment can add significant development and deployment complexity, and in extreme cases could lead us to use hybrid approaches involving both microservices and serverless. When the level of dependency is low and fairly straightforward, then serverless computing is straightforward too, but when we need to deal with legacy components and dependencies, a simple serverless approach might not be our best option.

Another limitation of most public-cloud serverless platforms, which may not be immediately obvious when we start out, is that most of them have a hard limit of around 300 seconds of execution duration. There are some business scenarios where this is not enough time to complete the required processing, which means that we need to use a microservice approach instead.

One possible solution is to use a serverless implementation within a dedicated on-premises or private-cloud environment, many of which allow us to configure much longer execution duration limits. At the same time, taking this quick solution can easily result in us designing a system which is actually unsuitable for a serverless implementation, ending up with a ‘monolithic’ solution on top of a serverless platform and negating many of the benefits that we are trying to achieve.

Figure 3 - Serverless Disadvantages

Pros and Cons of Microservices

Starting with the benefits of microservices, the most obvious one is the isolation of each microservice. This enables us to manage each service as an individual project, with its own artefacts, pipeline and configuration. As with serverless approaches, communication across services is crucial, and having a stable interface is vital for project success.

A related benefit, where we have containerized our microservices, is that, from the perspective of the microservice instances, they run in the same sandbox in all environments. This means that we can avoid many common issues caused by differences in environment or software configuration, as each service has the same software configuration inside the container in every environment.

It is worth noting, however, that it is easy to confuse the concepts of containers and microservices. Using containers does not mean that you are implementing microservices: inside a container, you can still have a classical application deployed. Similarly, you do not need to containerize microservices; many of their benefits can be realized using conventional deployment. However, as we have illustrated in the point above, the two technologies are complementary and often used together.

Another clear benefit from the isolation of the microservices is that scaling a microservice-based system can be achieved and controlled at the service level. This can help us to optimise how computational power and resources like memory are consumed. The machines that are part of the cluster can be managed independently, allowing us to change the nature of the cluster without affecting the services that are running on top of it.

This deployment independence also enables us to change the implementation of a service, or run two different implementations of a service in parallel, without affecting the end customer. This, combined with a load-balancer layer (and ideally a containerised execution environment), makes microservices a practical, flexible and robust deployment choice. In comparison with serverless, the microservices approach requires a few extra layers to be configured and managed, but in most cases other tools, such as microservice orchestration layers, can help to solve this problem.

Like any approach, microservices have their limitations, and the most obvious trade-off is the complexity introduced by splitting a monolithic application into independent parts. Just as with the serverless approach, having many microservices implementing a business use case can create a dependency tree that is difficult to understand and debug and can add performance issues. For example, we have seen situations where it takes 8-9 microservice invocations just to check user credentials and invoke a simple service; such designs are clearly suboptimal and will cause problems.

It is also important to remember that determining the middleware to use for your microservices is a decision you need to make early on, and one that is often forgotten in the rush to adopt the new pattern. In most cases you need to decide what kind of middleware you need for internal and external communication. There are many approaches available, from direct calls to events to message-based communication, so this can be a complex decision for teams to make. There are also proprietary public cloud services such as AWS API Gateway and Azure API Management that are cost effective, robust and scalable, but specific to one provider, which need to be considered as part of the mix.
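To make the middleware options concrete, the sketch below contrasts a direct synchronous call with message-based communication; the service URL, broker host and queue name are illustrative assumptions (RabbitMQ via the pika client is just one possible choice):

```python
import json

import pika      # RabbitMQ client: one possible message-based option
import requests

# Direct synchronous call: the simplest middleware choice, but it couples
# the caller to the callee's availability and latency. URL is illustrative.
order = requests.get("http://orders-service:8080/orders/42").json()

# Message-based communication: producer and consumer are decoupled in time.
# Broker host and queue name are illustrative.
connection = pika.BlockingConnection(pika.ConnectionParameters(host="rabbitmq"))
channel = connection.channel()
channel.queue_declare(queue="order-events", durable=True)
channel.basic_publish(
    exchange="",
    routing_key="order-events",
    body=json.dumps({"event": "OrderCreated", "orderId": order["id"]}),
)
connection.close()
```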

Finally, beyond the technology, another major challenge for many teams is shifting their mindset from a monolithic approach to using microservices confidently, which requires changing the way in which we design systems. For example, moving from a single database to multiple databases, where data may be duplicated, can be a huge mindset change that requires time to understand how services can be decoupled and data dependencies managed.

Summary Comparison

In summary, when comparing the serverless and microservices approaches, the most obvious difference is the additional development and operational overhead that microservices require, including:
Operating system installation and support
Maintenance and support (e.g. operating system updates, security patches)
Monitoring of the operating system
Deployment and configuration of the application
Infrastructure management

A serverless approach provides the immediate benefit of avoiding all of these complexities and allowing developers to focus on the problem being solved. Even so, a serverless approach brings some complexities and limitations of its own, such as:
The need to use external services to be able to deliver the same functionality as a microservice
The need to design FaaS services to overcome resource limitations on disk space, RAM and execution duration
The complexity of integrating legacy dependencies on different stacks or systems

Some of these difficulties can be overcome with recent developments in serverless systems like durable functions, state machines and increased execution time limits, but there will always be some complexities that need to be handled. Looking beyond today’s implementations, there are also emerging solutions such as Kubeless, which will further change the way we look at serverless computing.

Hybrid Solutions

As we have seen, both serverless and microservices have their strengths and weaknesses, so a hybrid approach that combines the two might be the key to success. It enables us to migrate legacy systems and to run all our external dependencies in a controlled environment, while using serverless and microservices architectures for the current and future needs of the system. Multiple technology stacks can be combined inside containers, and serverless can be used where it is necessary.

Figure 4 - Hybrid Approach

Kubernetes with Kubeless is just one example of how the two architectural styles are merging, enabling us to have a single physical infrastructure that supports both approaches. However, one downside to bear in mind is that combining two different architectural styles can create quite a lot of confusion if not carefully managed.

Conclusion

Serverless architecture is the new approach for building cloud-based systems, and it is an exciting and interesting topic for development and operations teams alike. However, like all technologies, new or old, it comes with some limitations as well as obvious potential benefits. In contrast, microservices have been around for a while, can fulfil most business and technology requirements, and are better understood. Achieving success with either involves understanding the business requirements and the stakeholder expectations for the system that we are designing. We have found that merging these two architectural styles and technologies can combine the strengths of both, but when doing so we need to be aware of the inherent benefits and trade-offs, which can be complex.

Radu Vunvulea

Group Head of Cloud Capability

Radu is a technology enthusiast. He has vast experience across different technologies and industries and spends most of his time working with the cloud, helping companies to innovate and find solutions to their business problems. He enjoys connecting people and helping them to grow. When he isn’t blogging or speaking at events, Radu likes to build his own IoT devices and enjoys an early morning run or a hike up a mountain. His feet seem to be his preferred method of transportation; he would do anything not to be stuck in traffic.
