All of us have heard about the success of Netflix, Uber or eBay. All these companies build highly resilient Internet-facing platforms using a fault-tolerant approach based on microservices and serverless computing. Nowadays, businesses are trying to replicate their success and build systems that take advantage of this new way of designing software solutions.
Systems built today are far more complex than those built 10 or 20 years ago. Not only is the complexity higher, but the non-functional requirements (NFRs) and service-level agreements (SLAs) are much more difficult to achieve. Customers expect availability that is effectively 100%, delivered with less money and smaller teams. This can only be achieved with a new approach to software development that is likely to harness microservices and serverless computing.
We are experiencing a moment in time when architects and technical leads are starting to see the advantages that serverless and microservices bring, and they are trying to adopt these approaches as quickly as possible to replicate the success of other companies like Amazon or Netflix.
In many situations this leads to an argument between microservices fans and those advocating serverless computing. These two groups often advocate passionately for their own favourite technology and won’t consider how the two can be used together, instead insisting that solutions need to follow strict rules of one approach or the other.
Of course, the optimum solution is often somewhere between these two extremes. Improvements in serverless computing are allowing us to do things that were not possible before, and the boundary separating the two approaches has become blurred, sometimes creating confusion when we need to decide which approach is right. A serverless system can now run on any cloud provider and even on-premises. This, combined with the ability to run a serverless function for hours, can make the two approaches look very similar and make it difficult to select between them.
To achieve success with these new technologies, we need to develop new software development tools and practices, and this requires a thorough understanding of the advantages and disadvantages of each new paradigm. Only then can we design systems that take full advantage of them, and confidently define blueprints for migrating to these technologies.
Serverless is a new style of cloud computing that focuses the developer’s attention on their business functionality and away from infrastructure concerns. In this article we are specifically interested in the “Function as a Service” (FaaS) variant of serverless computing rather than “Backend as a Service” (BaaS). Using FaaS involves writing a function that implements a specific API and encapsulates your business logic, which is then deployed to a cloud service such as AWS Lambda, Google Cloud Functions, Azure Functions or IBM OpenWhisk. The function is event-triggered and, in general, its context exists only during execution – it is ephemeral and not retained between invocations. The infrastructure required to execute the function is managed entirely by the cloud provider, and you pay only for the number of executions of the functions and the compute power they consume.
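As a minimal sketch of what this looks like in practice, the following follows the Python handler convention used by AWS Lambda; the event fields and function name are illustrative assumptions, not taken from any real system:

```python
# Minimal FaaS-style handler, modelled on AWS Lambda's Python handler
# convention. The event payload fields here are illustrative assumptions.
def handler(event, context):
    # All state the function needs must arrive in the event; nothing
    # persists between invocations - the execution context is ephemeral.
    name = event.get("name", "world")
    return {"greeting": f"Hello, {name}!"}
```

The platform invokes the handler in response to an event (an HTTP request, a queue message, a file upload) and tears the environment down afterwards, which is why the function must not rely on local state surviving between calls.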
Nowadays microservices are a well-known architectural pattern that structures an application as a collection of loosely-coupled services that are independently deployable, maintainable and testable, organised around the key concepts in the business domain of the system. This approach enables us to manage large and complex systems using continuous delivery and deployment, so that systems can evolve and transform at their own pace.
Pros and Cons of Serverless
Starting with the advantages of serverless computing, we have summarized the more important ones in Figure 2.
One of the significant advantages of the serverless approach is the execution cost: it tends to incur lower cloud-computing fees than microservices. We do not need to pay for all the cluster nodes continually; we pay only for what we consume. However, when you do the calculations for long-running functions, you will find that the costs can be close to, or even higher than, those of microservices. Cost forecasting is not easy when you have no fixed costs, and your estimation process needs to consider its assumptions carefully to avoid the occasional unexpected explosion in cost.
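A rough back-of-the-envelope calculation illustrates where the crossover happens. All the prices below are made-up assumptions for illustration, not real cloud tariffs:

```python
# Illustrative cost comparison: pay-per-use FaaS vs an always-on node.
# All prices here are hypothetical assumptions, not real tariffs.
PRICE_PER_GB_SECOND = 0.0000167   # hypothetical FaaS compute price
PRICE_PER_REQUEST = 0.0000002     # hypothetical per-invocation fee
NODE_COST_PER_MONTH = 70.0        # hypothetical always-on VM

def faas_monthly_cost(invocations, duration_s, memory_gb):
    compute = invocations * duration_s * memory_gb * PRICE_PER_GB_SECOND
    requests = invocations * PRICE_PER_REQUEST
    return compute + requests

# Occasional short-lived calls: FaaS costs a tiny fraction of the VM.
light = faas_monthly_cost(100_000, 0.2, 0.128)
# Long-running functions invoked constantly: FaaS far exceeds the VM.
heavy = faas_monthly_cost(5_000_000, 30, 1.0)
```

The point is not the specific numbers but the shape of the curve: per-execution pricing is very cheap at low, bursty volumes and can overtake a fixed-cost node once functions run long and often.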
Where serverless computing works very well is in situations where many short-lived functions are invoked occasionally or unpredictably. In general, the recommendation is to have functions that run only for a few seconds. Many cloud providers have hard limits that are around 300 seconds (although this limit can often be raised after a discussion with the cloud provider).
A more recent development that gives us some new options is to use “durable” functions that allow multiple function invocations to be chained together. This makes the implementation of logic with external dependencies much simpler and avoids us having to introduce message and event-based communication to link our functions together.
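The chaining idea can be sketched in plain Python. This is a toy orchestrator, not any vendor’s actual durable-functions API: each step the workflow yields is executed to completion, and its result is fed back in as the input to the next step.

```python
# Toy sketch of the function-chaining idea behind "durable" functions.
# Not a real vendor API: the orchestrator yields (activity, payload)
# pairs, and each activity's result feeds the next step.
def run_orchestration(orchestrator, activities):
    gen = orchestrator()
    result = None
    try:
        while True:
            name, payload = gen.send(result)    # workflow requests a step
            result = activities[name](payload)  # run it, feed result back
    except StopIteration as done:
        return done.value

def order_workflow():
    order = yield ("validate", {"id": 42})
    charged = yield ("charge", order)
    return (yield ("ship", charged))

activities = {
    "validate": lambda p: {**p, "valid": True},
    "charge": lambda p: {**p, "paid": True},
    "ship": lambda p: {**p, "shipped": True},
}
```

Real durable-function runtimes add persistence and replay on top of this idea, so the chain survives process restarts, but the programming model is the same: sequential-looking code instead of explicit messages and events between functions.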
From the perspective of the developer and administrator, setting up a serverless environment is also straightforward compared to many other approaches in cloud computing. In comparison with a microservices approach specifically, serverless runtime environments take care of many details that we would otherwise need to manage, including cluster size, system load, and how the microservices communicate with each other and with the external world. Even so, it is important to manage a serverless environment carefully: because spinning up new instances of a serverless function is so easy, we can end up forgetting about them and losing control.
Another architectural advantage promised by serverless computing is dynamic scaling, which allows us to focus primarily on the business problem and our code rather than on ensuring the scalability of our runtime environment. Today, the dynamic scaling aspects of serverless platforms work very well and are one of their strongest features, providing on-demand scalability for our customers’ needs without the need to design a complex solution. However, nothing is entirely cost free: when using dynamic scaling we still need to be aware of how errors or overload conditions in parts of our system could trigger scalability problems, and configure scalability boundaries in our system to avoid them.
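One way to think about such a boundary is as a hard cap on concurrent executions, analogous to the reserved-concurrency limits that serverless platforms let us configure. The sketch below is a plain-Python illustration of the idea, not any platform’s API; all the names are assumptions:

```python
import threading

# Illustrative sketch of a scalability boundary: a hard cap on
# concurrent executions, analogous to the per-function concurrency
# limits serverless platforms offer. Names here are assumptions.
class ConcurrencyBoundary:
    def __init__(self, limit):
        self._slots = threading.Semaphore(limit)

    def try_run(self, fn, *args):
        # Reject rather than queue once the boundary is hit, so a
        # runaway caller cannot scale us into surprise costs.
        if not self._slots.acquire(blocking=False):
            return None  # throttled
        try:
            return fn(*args)
        finally:
            self._slots.release()
```

In a real platform this cap is a configuration setting rather than code, but the trade-off is the same: the boundary protects downstream systems and our bill, at the price of rejecting work during spikes.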
Turning to consider the possible “cons” of serverless computing, we have summarized the main ones using the diagram in Figure 3.
One of the most obvious cons is the potential complexity involved in combining the simple pieces of code in our serverless functions with the rest of the system. An example is how to expose our functions as API services. When we deploy a serverless function, it is only available as a private API. To make it available to others via a public API, we need to use an API gateway. Typically, we use the API gateway provided by the cloud provider, which allows us to expose our functions to the public internet over an HTTP-based protocol. This is relatively straightforward, but it is another area of configuration that we must understand, master and perform correctly.
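Part of that configuration is shaping the function itself to speak HTTP. The sketch below follows the statusCode/body response convention that AWS API Gateway’s Lambda proxy integration expects; the request field names are illustrative assumptions:

```python
import json

# Sketch of a function written to sit behind an HTTP API gateway.
# The statusCode/body response shape follows the AWS API Gateway
# Lambda proxy integration convention; payload fields are assumptions.
def http_handler(event, context):
    try:
        body = json.loads(event.get("body") or "{}")
    except json.JSONDecodeError:
        return {"statusCode": 400, "body": json.dumps({"error": "bad JSON"})}
    item = body.get("item", "unknown")
    return {"statusCode": 200, "body": json.dumps({"received": item})}
```

The gateway handles TLS, routing and throttling, but the function must still parse the raw request body and serialise a well-formed response, which is exactly the kind of glue code that has to be understood and tested.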
The inherent simplicity of most serverless functions means that they tend to depend heavily on external libraries from the technical ecosystem of the language in use, and we may also need libraries from other technology ecosystems. Managing these third-party libraries in a serverless environment can add significant development and deployment complexity, and in extreme cases could lead us to hybrid approaches involving both microservices and serverless. When the dependencies are few and straightforward, serverless computing is straightforward too; but when we need to deal with legacy components and dependencies, a simple serverless approach might not be our best option.
Another limitation of most public-cloud serverless platforms, which may not be immediately obvious when we start out, is that most of them have a hard limit of around 300 seconds of execution duration. There are some business scenarios where this is not enough time to complete the required processing, which means that we need to use a microservice approach instead.
One possible solution is to use a serverless implementation within a dedicated on-premises or private-cloud environment, many of which allow us to configure much longer execution duration limits. At the same time, taking this quick solution can easily result in us designing a system that is actually totally unsuitable for a serverless implementation, ending up with a ‘monolithic’ solution on top of a serverless platform and negating many of the benefits that we are trying to achieve.
Pros and Cons of Microservices
Starting with the benefits of microservices, the most obvious one is the isolation of each microservice. This enables us to manage each service as an individual project, with its own artefacts, pipeline and configuration. As with serverless approaches, communication across services is crucial, and having a stable interface is vital for project success.
A related benefit, where we have containerized our microservices, is that from the perspective of the microservice instances, they are running in the same sandbox in all environments. This means we can avoid many common issues caused by differing environments or software configurations, as each service carries the same software configuration inside its container in every environment.
It is worth noting, however, that it is easy to confuse the concepts of containers and microservices. Using containers does not mean that you are implementing microservices: inside a container you can still have a classical application deployed. Similarly, you do not need to containerize microservices; many of their benefits can be realized using conventional deployment. However, as illustrated above, the two technologies are complementary and often used together.
Another clear benefit from the isolation of the microservices is that scaling a microservice-based system can be achieved and controlled at the service level. This can help us to optimise how computational power and resources like memory are consumed and used. The machines that are part of the cluster can be managed independently, allowing us to change the nature of the cluster without having to affect the services that are running on top of it.
This deployment independence also enables us to change the implementation of a service, or to run two different implementations of a service in parallel, without affecting the end customer. Combined with a load-balancer layer (and ideally a containerised execution environment), this makes microservices a practical, flexible and robust deployment choice. In comparison with serverless, the microservices approach requires a few extra layers to be configured and managed, but in most cases other tools, such as microservice orchestration layers, can help to solve this problem.
Like any approach, microservices have their limitations, and the most obvious trade-off is the complexity introduced by splitting a monolithic application into independent parts. Just as with the serverless approach, having many microservices implementing a business use case can create a dependency tree that is difficult to understand and debug, and can add performance issues. For example, we have seen situations where it takes 8-9 microservice invocations just to check user credentials and invoke a simple service; such designs are clearly suboptimal and will cause problems.
It is also important to remember that determining the middleware to use for your microservices is a decision you need to make early on, and one that is often forgotten in the rush to adopt the new pattern. In most cases you need to decide what kind of middleware you need for internal and external communication. There are many approaches available, from direct calls to events to message-based communication, so this can be a complex decision for teams to make. There are also proprietary public cloud services such as AWS API Gateway and Azure API Management that are cost effective, robust and scalable, but specific to one provider, which need to be considered as part of the mix.
Finally, beyond the technology, another major challenge for many teams is changing their mindset from a monolithic approach to using microservices confidently, which requires changing the way in which we design systems. For example, moving from a single database approach to multiple databases where data may be duplicated can be a huge mindset change that requires time to understand how services can be decoupled and data dependencies managed.
In summary, when comparing the serverless and microservices approaches, the most obvious difference is the additional development and operational overhead that microservices require, including:
■ Operating system installation and support
■ Maintenance and support (e.g. operating system updates, security patches)
■ Monitoring of the operating system
■ Deployment and configuration of the application
■ Infrastructure management
A serverless approach provides the immediate benefit of avoiding all of these complexities and allowing the developers to focus on the problem being solved. Even so, a serverless approach brings some complexities and limitations of its own, like:
■ The need to use external services to be able to deliver the same functionality as a microservice
■ The need to design FaaS services to overcome resource limitation on disk space, RAM and execution duration
■ The complexity of integrating legacy dependencies on different stacks or systems
Some of these difficulties can be overcome with recent developments in serverless systems like durable functions, state machines, and increased execution time limits, but there will always be some complexities that need to be handled. Looking beyond today’s implementations there are also emerging solutions like Kubeless, that will further change the way we look at serverless computing.
As we have seen, both serverless and microservices have their strengths and weaknesses, so a hybrid approach that combines the two might be the key to success. It enables us to migrate legacy systems and run all our external dependencies in a controlled environment, while using serverless and microservices architectures for the current and future needs of the system. Multiple technology stacks can be combined inside containers, and serverless can be used where it is appropriate.
Kubernetes with Kubeless is just one example of how the two architectural styles are merging, enabling us to have a single physical infrastructure that supports both approaches. However, one downside to bear in mind is that combining two different architectural styles can create quite a lot of confusion if not carefully managed.
Serverless architecture is the new thing for building cloud-based systems and it is an exciting and interesting topic for development teams and operations teams alike. However, like all technologies new or old, it comes with some limitations as well as obvious potential benefits. In contrast, microservices have been around for a while, can fulfil most business and technology requirements, and are better understood. Achieving success with either involves understanding the business requirements and the stakeholder expectations for the system that we are designing. We have found that merging these two different architectural styles and technologies can combine the strengths of both, but when doing so we need to be aware of the benefits and trade-offs inherent in doing so, which can be complex.
Radu, Group Head of Cloud Capability, is a technology enthusiast. He has vast experience across different technologies and industries and spends most of his time working with the cloud, helping companies to innovate and find solutions to their business problems. He enjoys connecting people and helping them grow. When he isn’t blogging or speaking at events, Radu likes to build his own IoT devices, and he enjoys an early-morning run or a hike up a mountain. His feet seem to be his preferred method of transportation; he would do anything not to be stuck in traffic.