Architecture
Gareth Badenhorst
14 May 2019
SUMMARY
This article examines edge services: services close to the client, such as content delivery networks, bot managers and web application firewalls, that distribute data and processing closer to the consumer and thus improve scalability and performance. I focus specifically on the Akamai suite of products. This is an article in my series about Internet Scale Architectures; the first in the series can be found here.
INTRODUCTION
Let us assume that you have the challenge of architecting a system that will be used by thousands of users worldwide. You’re probably using a mixture of Single Page Applications and native mobile applications that consume APIs implemented in the microservice architectural style. Your users will expect a rich and performant user experience. Your systems will attract attention from more scum and villainy than Mos Eisley spaceport [1]: botnets and DDoS attacks, for example.
There are no real standards in this area, just competing offerings from vendors. I’m going to cover some of the Akamai suite of technologies in this article, since I have worked directly with these solutions and they are a big player in this area, with 15–30% of internet traffic being delivered through around 240,000 Akamai servers [2], [3]. The products are grouped broadly as follows:
■ Web performance – such as caching and Global Traffic Management
■ Cloud security – Web Application Firewalls, DDoS protection and Bot Management
■ Other – phased release cloudlets and content delivery
I have not worked for Akamai, don’t have shares in the company, and don’t even have a T-Shirt of theirs that I got as swag at a conference. Other vendors or solutions you might want to look at include Cloudflare, Amazon Cloudfront and Azure CDN.
FOUNDATIONS
Figure 1 below shows a content origin, which potentially has many consumers connecting to it. The consumers connect directly to the origin; in reality, of course, this may involve quite a number of intervening hops.
Figure 2 shows the Akamai network in operation. Clients don’t connect to the origin server, but to an Akamai edge server that proxies requests to the origin.
Let’s take the example of a company, Acme Corporation, that exposes its website on the hostname www.acme.com via the Akamai network. A dig on www.acme.com would reveal something very similar to the result below:
ANSWER SECTION
www.acme.com IN CNAME www.acme.com.akadns.net
www.acme.com.akadns.net IN CNAME www.acme.com.edgekey.net
www.acme.com.edgekey.net IN CNAME e1234.a.akamaiedge.net
e1234.a.akamaiedge.net IN A 12.34.56.78
What is going on here? Well, Acme Corporation has instructed the domain name registrar for its acme.com domain to alias the domain name www.acme.com to the canonical name www.acme.com.akadns.net. When an Acme Corporation client accesses www.acme.com, the DNS resolver first resolves www.acme.com via the authoritative servers of Acme Corporation’s registrar; further resolution is handled by the Akamai authoritative servers. Akamai resolves the request to the “best” edge server to serve it. To do this, Akamai keeps a dynamic map of the health of the internet using real-time pings, traceroutes and BGP data; network latency, loss and connectivity are continually monitored. Using this information, requests are mapped to the cluster of Akamai servers best suited to serve the request (see [4]). Akamai peers with ISPs, so the chosen cluster is often collocated with the ISP.
Resolution to a server within the cluster is essentially done by taking a consistent hash of the hostname and mapping it to an individual server within the cluster (see [4]). This means that all requests to www.acme.com from a particular region will end up on the same server, which enables content to be cached on that server and reused. Because the hashing is consistent, edge servers failing or new ones being added has only a minimal effect on which servers content maps to. Consistent hashing is a fundamental algorithmic technique underlying modern distributed architectures. Amongst the co-authors of the original paper on consistent hashing were Tom Leighton and Daniel Lewin, the co-founders of Akamai (see [4]). It is remarkable that they very quickly built on this foundation of work at the intersection of mathematics and theoretical computer science to create a network that lies at the heart of the modern internet. This reminds me, I really need to write an article that properly explores consistent hashing!
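To make the idea concrete, here is a minimal consistent hash ring in Python. It is not Akamai’s implementation, and the server names are invented, but it shows the key property: looking up the same hostname always returns the same server, and adding or removing a server remaps only a small fraction of keys.

import hashlib
from bisect import bisect_right

def _hash(key: str) -> int:
    # Map a string to a point on the hash ring.
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class ConsistentHashRing:
    # Each server owns the arc of the ring that precedes its points, so adding
    # or removing a server only remaps the keys on the neighbouring arcs.
    def __init__(self, servers, replicas=100):
        self.replicas = replicas      # virtual nodes per server, smooths the distribution
        self.ring = []                # sorted list of (point, server)
        for server in servers:
            self.add(server)

    def add(self, server):
        for i in range(self.replicas):
            self.ring.append((_hash(f"{server}#{i}"), server))
        self.ring.sort()

    def remove(self, server):
        self.ring = [(p, s) for p, s in self.ring if s != server]

    def lookup(self, key):
        point = _hash(key)
        points = [p for p, _ in self.ring]
        idx = bisect_right(points, point) % len(self.ring)
        return self.ring[idx][1]

# Requests for the same hostname consistently land on the same edge server.
ring = ConsistentHashRing(["edge-1", "edge-2", "edge-3"])
print(ring.lookup("www.acme.com"))
ring.add("edge-4")                    # only roughly a quarter of keys move to the new server
print(ring.lookup("www.acme.com"))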
In summary, Akamai uses a real-time picture of how the internet as a whole is performing to route requests to the best possible edge server. Akamai ensures that requests are served from the optimal edge server cluster and, within the cluster, that requests for the same URL are served by the same server, allowing content to be cached in a scalable way.
THE AKAMAI PLATFORM
The edge server network described in the previous section is the platform upon which a number of capabilities are delivered. Let’s examine some categories in turn.
WEB PERFORMANCE
The Akamai edge servers function as proxy servers, similar to Apache or Nginx. Akamai places these proxy server capabilities under the heading of web performance, but there is actually a whole lot more you can do. The edge servers are configured through a web interface; a REST API exists but, in my experience, most of the edge configuration is not automated. The edge server configuration defines a set of rules that are evaluated in order, with later rules taking precedence over earlier ones, and rules can be nested. Each rule consists of a boolean match condition, which can be compound, followed by a number of behaviours, which are actions to be performed. An example showing a match and a fragment of the behaviours is shown in Figure 3.
A configuration always needs an origin server behaviour somewhere. The origin server is the server to which requests will be forwarded. The example in Figure 3 shows the use of variables for the origin hostname and also how the forwarded host header can be modified. There are a large number of available match conditions for rules, for example matches on:
■ Client IP
■ File Name
■ Hostname
■ Request Protocol
■ Response Header
■ Response Status Code
There is also a wide range of behaviours that can be applied. The most important for web performance is the caching behaviour, which specifies policies on how specific data should be cached on the edge. There are around fifty different behaviours available, for example adding or modifying headers, rewriting request paths or setting response cookies. A minimal sketch of the rule model is shown below.
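To picture the rule model (ordered evaluation, compound matches, nested rules, behaviours as actions), here is an illustrative sketch in Python. It is not the Akamai configuration format, and the origin, paths and TTLs are invented.

from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Rule:
    # A rule: a match predicate, behaviours applied when it matches,
    # and child rules evaluated only if the parent matched.
    match: Callable[[dict], bool]
    behaviours: List[Callable[[dict], None]]
    children: List["Rule"] = field(default_factory=list)

def apply_rules(rules: List[Rule], request: dict) -> None:
    # Rules are evaluated in order; later behaviours take precedence
    # simply because they are applied after earlier ones.
    for rule in rules:
        if rule.match(request):
            for behaviour in rule.behaviours:
                behaviour(request)
            apply_rules(rule.children, request)

# Hypothetical configuration: a default origin, then a longer cache TTL for images.
config = [
    Rule(match=lambda req: True,
         behaviours=[lambda req: req.update(origin="origin.acme.com", cache_ttl=0)]),
    Rule(match=lambda req: req["path"].endswith((".png", ".jpg")),
         behaviours=[lambda req: req.update(cache_ttl=86400)]),
]

request = {"path": "/img/logo.png"}
apply_rules(config, request)
print(request)   # {'path': '/img/logo.png', 'origin': 'origin.acme.com', 'cache_ttl': 86400}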
The edge servers can make a real difference in performance when using Transport Layer Security (TLS). This is because the user’s TLS negotiation is carried out with a nearby edge server, while the edge server itself keeps a long-running TLS connection to the origin. I’ve seen this substantially reduce the latency of HTTPS connections.
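A back-of-envelope illustration of why this helps, assuming a TLS 1.2 handshake that costs two round trips on top of TCP’s one, a 10 ms round trip to a nearby edge server and 100 ms to a distant origin (all numbers invented):

# Illustrative latencies in milliseconds, not measurements.
rtt_edge, rtt_origin = 10, 100
handshake_rtts = 1 + 2       # TCP handshake (1 RTT) + TLS 1.2 handshake (2 RTTs)

direct_to_origin = handshake_rtts * rtt_origin        # 300 ms before the request is even sent
via_edge = handshake_rtts * rtt_edge + rtt_origin     # 130 ms: negotiate nearby, then one hop to the
                                                      # origin over the edge's long-lived connection
print(direct_to_origin, via_edge)

With TLS 1.3 the handshake drops to a single round trip, but the principle is the same: the expensive negotiation happens close to the user.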
GLOBAL TRAFFIC MANAGEMENT
Global Traffic Management helps both with performance and with the resilience of your internet scale architecture. Figure 4 shows an example where the origin compute is distributed over three geographic regions. Akamai Global Traffic Management (GTM) helps us load balance traffic over the three regions, with traffic in general going to the region nearest the consumer.
GTM is usually used in combination with Akamai edge servers: a URL such as www.acme.com is exposed on the Akamai edge with a property that has www.acme.com.akadns.net as its origin. The GTM DNS name www.acme.com.akadns.net is configured via a web interface. The configuration for the GTM DNS includes the following (a simplified sketch of the resulting selection logic follows the list):
■ A list of hosts to balance requests over, for example amer.acme.com, together with a load-balancing weight for each host
■ A health endpoint, which is a URL path, for example /health, such that if <host>/<path> returns an HTTP 200, then traffic can be routed to <host>
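The sketch below shows, in Python, the selection logic such a configuration implies: a weighted choice across the hosts that currently pass the health check. The host names, weights and use of HTTPS are assumptions for illustration, not the GTM implementation.

import random
import urllib.request

# Hypothetical GTM-style configuration: hosts, load-balance weights and a health path.
HOSTS = {"amer.acme.com": 50, "emea.acme.com": 30, "apac.acme.com": 20}
HEALTH_PATH = "/health"

def is_healthy(host: str) -> bool:
    # A host is only eligible if GET https://<host>/health returns HTTP 200.
    try:
        with urllib.request.urlopen(f"https://{host}{HEALTH_PATH}", timeout=2) as resp:
            return resp.status == 200
    except OSError:
        return False

def pick_host() -> str:
    # Weighted random choice across the currently healthy hosts.
    healthy = {host: weight for host, weight in HOSTS.items() if is_healthy(host)}
    if not healthy:
        raise RuntimeError("no healthy hosts to route to")
    return random.choices(list(healthy), weights=list(healthy.values()), k=1)[0]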
WEB APPLICATION FIREWALL
The Akamai Web Application Firewall (WAF) detects and prevents a wide range of potential attacks before they reach your datacentres. The WAF covers the following categories of defence:
■ Network layer controls that allow administrators to block traffic from specific IPs, subnets, CIDR block ranges or geography
■ Application layer controls that inspect traffic for SQL Injection attacks, Cross Site Scripting attacks, Remote File Inclusion attacks and other injection attacks
■ Rate Controls that limit the number of requests from individual IPs or subnets to a configurable number over a period (see the sketch after this list)
■ Slow POST protections that trigger alerts or abort actions for requests whose data arrives below a threshold rate
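To make the rate control idea concrete, here is a minimal fixed-window counter per client IP. The limit and window are invented, and this is a conceptual sketch rather than how the Akamai WAF implements rate controls.

import time
from collections import defaultdict

# Hypothetical thresholds: at most 100 requests per client IP per 60-second window.
LIMIT, WINDOW = 100, 60

_counters = defaultdict(lambda: [0.0, 0])   # ip -> [window_start, request_count]

def allow(ip: str) -> bool:
    # Fixed-window rate check: deny once an IP exceeds LIMIT within the current window.
    now = time.monotonic()
    window_start, count = _counters[ip]
    if now - window_start >= WINDOW:
        _counters[ip] = [now, 1]     # start a fresh window for this IP
        return True
    if count >= LIMIT:
        return False                 # over the limit: block or alert for the rest of the window
    _counters[ip][1] = count + 1
    return True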
BOT MANAGER
Bot traffic makes up about 30% of all traffic on the web. There are good bots and bad bots. Good (or goodish) bots include search engine bots, for example those of Google. Bad bots include those that scan websites looking for vulnerabilities, spam bots that look to post spam advertising, DDoS botnets, or bots acting for your company’s competitors that probe your website to determine the prices of your products or to obtain quotes for services, so that they can offer more competitive prices.
Because bots can often be beneficial rather than harmful, Akamai manages bots rather than simply blocking them. The process starts with detecting and identifying the bots. Akamai detects bots based on behaviour and identifies many of them based on signatures. Alerts are raised for those that Akamai does not identify, giving the customer a chance to identify them. This is useful, for example, if you have chosen to use synthetic monitoring to test your applications and the monitoring traffic has been detected as a bot by Bot Manager. Once a bot is identified, configurable rules are triggered to manage its behaviour. Responses can include:
■ Monitoring – simply monitor the bot’s behavior
■ Block – deny access
■ Tarpit – serve no response
■ Serve alternate content – serve a different page or response from the one ordinary traffic receives
The last option is particularly useful when dealing with competitors who scrape your site for competitive advantage, for example trying to retrieve the latest prices for goods. In this case you serve different prices from the ones your customers see.
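A simple way to picture these rule-driven responses is a dispatch table from bot category to action. The categories and helper functions below are hypothetical, purely to illustrate the four response types.

# Hypothetical bot categories mapped to the responses described above.
POLICIES = {
    "search_engine": "monitor",
    "vulnerability_scanner": "block",
    "spam_bot": "tarpit",
    "price_scraper": "serve_alternate",
}

def serve_normally(path: str) -> str:
    return f"real content for {path}"

def serve_decoy(path: str) -> str:
    return f"decoy prices for {path}"        # scrapers see different numbers from real customers

def handle_bot(path: str, category: str):
    # Dispatch a detected bot to its configured response.
    action = POLICIES.get(category, "monitor")   # unidentified bots default to monitoring
    if action == "block":
        return 403, ""
    if action == "tarpit":
        return None                              # hold the connection open, send nothing
    if action == "serve_alternate":
        return 200, serve_decoy(path)
    return 200, serve_normally(path)             # "monitor": log the bot but serve as usual

print(handle_bot("/prices", "price_scraper"))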
CONCLUSION
Edge services are critical components in an internet scale architecture because they offload a lot of processing from your transactional systems to services close to the consumer. The edge is also an ideal place to put security in place against certain types of attack and to manage the many bots that make up a sizeable portion of internet traffic.
REFERENCES
[1] https://www.akamai.com/uk/en/about/facts-figures.jsp
[2] https://www.akamai.com/uk/en/solutions/intelligent-platform/visualizing-akamai/real-time-web-metrics.jsp
[3] The Akamai Network: A platform for High Performance Internet Applications (retrieved from here)
[4] Consistent Hashing and Random Trees: Distributed Caching protocols for Relieving Hot Spots on the World Wide Web (retrieved from here)
Gareth Badenhorst
Senior Architect
Gareth Badenhorst is a Lead Solution Architect and Design Authority in Endava with more than two decades’ experience in IT. He has a background as an engineer and still enjoys getting his hands dirty whenever possible. He has travelled the journey from traditional service oriented architectures to microservices and has had a recent focus on end-to-end architectural concerns. Gareth likes music, movies and microservices. He also rocks the best mustache this side of Movember.