Mike Spille
16 June 2020
DEALING WITH A CHANGING WORLD
With remote work having now become a standard aspect of our professional lives, the collective desire for digital transformation has evolved into a new imperative that we are calling digital necessity. As a result of large portions of the world converting to ‘stay-at-home’ mode, companies are seeing unprecedented increases in demand across their digital platforms. This is revealing broken processes, unforeseen bottlenecks, and incomplete automation at all levels of a company’s business. And for organisations who are still early on in their digital journey (or have possibly not even embarked upon one yet), the struggle to keep the lights on is even more pronounced.
The good news is that automation technologies have never been more accessible and are now readily adaptable to a wide range of industries. Because of these advances in the underlying technologies, companies no longer need to make massive investments to reap substantial benefits from automation. But how do you get started?
APPROACHING AUTOMATION
Automation is a broad topic that can be approached by organisations at many different levels. Despite its depth as both a subject and a technology, those who take the time to learn its facets and to understand its uses and benefits comprehensively will have a remarkable advantage in today’s digital landscape.
Since its inception, automation has been a pillar of digital transformations, aiding companies in making their operations and processes more efficient and frictionless, and allowing staff to scale far more effectively. Its themes can be separated into two major categories: business automation technologies and technology automation approaches. The former directly automate business processes, while the latter seek to automate how software and systems are built and delivered. Underpinning all of this are the APIs, data storage, and overall architecture of the system and enterprise as a whole.
Business automation includes strategic digitisation of manual businesses, innovative uses of machine learning (ML), AI and data science (termed together as ‘intelligent automation’), tactical robotic process automation (RPA), and natural language processing (NLP) interfaces. It’s worth noting that intelligent automation has a number of interesting specialties such as computer vision, which will be touched on later.
Technology automation approaches include DevOps, cloud ‘X-as-a-Service’ models, and test automation. These seek to automate how software, systems, and environments are built and delivered so they can scale easily and rapidly, ensure that results are reproducible, minimise and isolate risk, and enable non-functional requirements such as performance, security, and accessibility to be baked into the core architecture and continually tested before software is allowed to escape into the wild.
Once they understand the elements within this toolbox, companies can then use (and combine!) them to create new digital processes where none existed before, or improve their existing offerings to meet new challenges. From here on out, I intend to explain what the automation process generally entails, to look more closely at how certain technologies in our automation toolbox work, and to show how we can use these tools to create surprising amounts of value for our clients without drastically exhausting resources or ourselves.
THE PROCESS
The process for automating nearly any system is pretty straightforward, and usually follows a predictable path. See the figure below for an outline of a typical automation engagement.
Typical automation engagement cycle
These steps may be compressed together or expanded depending on how small or large a given task or process is. For simple tasks, these steps can usually be completed in two weeks or less. For larger, more complex programs of work, the process can potentially span months (or even years) and cover numerous iterations. The key is to start with a reasonably sized task or process, automate that, and then continue to iterate and build upon that success. On that note, continual iterations and feedback loops are critical to make sure that solutions are adding value, and a ‘fail fast’ attitude can go a long way toward converging on a solution quickly while still allowing for innovation.
Along the way, implementation teams need to choose which part or parts of the automation toolbox make the most sense for the problem at hand. Depending on the process, some may prove to be more beneficial than others, as each tool has its own drawbacks and advantages.
THE TOOLS
RPA
Robotic process automation is a tactical approach to automation that can help organisations get a handle on high-volume, painful manual processes (i.e. the ‘low-hanging fruit’ of automation). RPA tools grew out of the test automation space, and as is the case with test automation tools, they typically work by programmatically driving the user interface (UI) of your existing applications with little or no human interaction. This includes activities such as scraping web sites, implementing rudimentary decision trees, etc. The tools generally feature a ‘low code’ interface, meaning less technical staff can be empowered to create automation solutions.
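To make that concrete, here is a minimal sketch of the kind of UI driving an RPA ‘bot’ performs under the hood. It uses Selenium in Python purely for illustration (a commercial RPA tool would typically generate something equivalent through its low-code interface), and the URL and form field names are assumptions:

```python
# Minimal sketch: programmatically driving a web UI, the core mechanic behind
# most RPA tools. Selenium is used for illustration only; the URL and form
# field names below are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://hr.example.com/new-joiner")  # hypothetical onboarding form
    driver.find_element(By.NAME, "employee_name").send_keys("Jane Smith")
    driver.find_element(By.NAME, "start_date").send_keys("2020-07-01")
    driver.find_element(By.CSS_SELECTOR, "button[type='submit']").click()
finally:
    driver.quit()
```

Note the use of stable element names rather than position-based selectors; small choices like this are what keep a bot working when the underlying UI inevitably changes.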
The key to dealing with RPA tools is knowing and understanding their limitations. When used successfully, these tools are highly effective and can make your workforce far more efficient and able to scale. However, as the complexity of your tasks or processes increases, the utility of these tools declines very rapidly. Smart organisations recognise when they are approaching the limits of RPA, take a more strategic look at their needs, and pivot to invest in bespoke solutions built with traditional application development.
Teams also have to work to ensure their RPA solutions are resilient. Current tools make it all too easy to fall into the trap of interpreting an application UI too literally, which can lead to extremely fragile solutions that break often over time.
That said, RPA can have an impressive impact in areas such as HR, insurance, customer support, and media.
For example, in many companies’ HR departments, new joiner processes have traditionally been extremely manual, and RPA can easily automate much of the onboarding process. Large insurance companies have a plethora of paperwork and process labyrinths to navigate, and RPA can help them not only be generally more efficient, but also scale quickly in response to major events that trigger surges in insurance claims.
Automation Fit: Best used in intensive data-entry applications and simple workflows such as HR onboarding, insurance claim processing, etc.
COMPUTER VISION
Computer vision is an area that may appear fairly impenetrable to some IT practitioners. When a typical organisation dips their toes in the waters of computer vision, they are often greeted with enormous amounts of dense artificial intelligence and image processing jargon. This can be incredibly daunting to those who are not experts in the field. There is also the perception in some circles that you need to have reached the scale (and have the budget) of Google or Facebook to realise the benefits of this technology. But in reality, it’s not just about self-driving cars and implementing PhD-level algorithms on your own. Many computer vision projects are dead simple to code with today’s open source solutions.
Computer vision software has matured tremendously in the past decade, and impressive results can be realised with fairly modest investments by non-specialist development teams. Open source libraries such as OpenCV, Google Tesseract, and others have brought computer vision applications in reach to the masses, and with impressive results. So, while computer vision partially falls under the AI/ML umbrella, it’s somewhat unique in having a very accessible API for typical developers.
Additionally, the range of applications enabled by this technology is very wide, spanning everything from the mundane to true game changers.
Here is a typical example on the mundane side of the scale. Some organisations are confronted with PDFs and images that contain massive data tables that are manually keyed into systems or spreadsheets. This is a tedious, error-prone process that is difficult to scale, and traditional solutions often can’t help you here.
Sample Python-based computer vision code: a pipeline that combines image processing, OCR, and simple algorithms. Yes, OCR can be this easy in some contexts.
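As a rough illustration of what such a pipeline can look like, the sketch below uses OpenCV and Tesseract (via pytesseract); the file name and preprocessing steps are assumptions, and real documents usually need additional clean-up:

```python
# Minimal sketch of an image-processing + OCR pipeline. The input file name
# and the simple Otsu thresholding step are illustrative assumptions.
import cv2
import pytesseract

image = cv2.imread("scanned_table.png")         # hypothetical scanned document
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)  # drop colour information

# Binarise the scan to help the OCR engine separate text from background
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Hand the cleaned-up image to Tesseract and print the extracted text
print(pytesseract.image_to_string(binary))
```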
The end result can reduce hours of data entry into mere seconds. And the effort to develop an application like this is measured in days and weeks, not months or years. Further, you can layer AI and algorithmic techniques on top of such a simple solution to semantically interpret the document and its tables as well.
On a larger scale, another example involves news organisations and photo anonymisation. News outlets sometimes need to anonymise individuals in their published photos. Traditionally this involved extensive hand-work in an image processing program like Photoshop, which could be tedious with shots of crowds. But today, open source computer vision software allows faces to be detected and masked automatically. In this new flow, humans need only confirm that the results are adequate. This is a common theme in automation work – humans go from doing the bulk of the work to only processing exception flows and confirming results.
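As a sketch of how little code this can take, the example below uses OpenCV’s bundled Haar cascade face detector to blur every face it finds; the file names are placeholders, and a production system would likely swap in a more robust deep-learning detector:

```python
# Minimal sketch: detect faces with OpenCV's bundled Haar cascade and blur
# each detected region. File names are placeholders.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

image = cv2.imread("crowd_photo.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Blur every detected face region in place
for (x, y, w, h) in detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
    image[y:y + h, x:x + w] = cv2.GaussianBlur(image[y:y + h, x:x + w], (51, 51), 0)

cv2.imwrite("crowd_photo_anonymised.jpg", image)
```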
Automation Fit: Great for auto-processing images emailed or faxed to a company, facial detection for further image processing, warehouse auto-monitoring and alerting, and similar applications.
MACHINE LEARNING & DATA SCIENCE
The field of artificial intelligence was stagnant for many years – literally decades, in fact – undergoing what was termed the ‘AI Winter’. That all changed around 2012, with an abrupt surge in research (and budgets) focused on the AI sub-fields of machine learning, deep learning, and more.
Machine learning and deep learning took off in 2012
However, unlike the sub-field of computer vision, machine learning is an extremely technical field that requires extensive theoretical knowledge to exploit fully. It also differs fundamentally from traditional software development: instead of an engineer explicitly writing code to solve a specific set of problems, machine learning involves building models and systems that learn how to solve problems on their own.
Machine learning can solve several different kinds of problems, including:
- Classification. Given an input, the system classifies or categorises it. Does this image contain a duck? Does this medical data indicate some health risk? Is this transaction fraudulent?
- Translation. Translate an entity in one format to a radically different one. This includes voice recognition, realistic voice synthesis, and optical character recognition.
- Semantic Understanding. Think natural language processing and advanced image recognition in video streams.
- Complex AI Pipelines. To solve multi-faceted, complex real-world problems, it is common for AI engineers to combine multiple machine learning, algorithmic, and data science techniques into an AI pipeline.
A basic ML pipeline for processing PDF files
On that last point, AI pipelines commonly combine a number of technologies to achieve their goals. Depending on the input and output targets, in addition to the machine learning models, AI pipelines can layer together image processing capabilities, pure algorithmic solutions, signal processing libraries, and data science statistical models.
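As a rough sketch of what a small pipeline of this kind can look like in code, the example below feeds an image-processing and OCR stage into a simple text classifier. The libraries, file names, and labels are illustrative assumptions rather than a prescribed stack:

```python
# Minimal sketch of an AI pipeline: image processing + OCR feeding a simple
# text classifier. Library choices, file names, and labels are illustrative.
import cv2
import pytesseract
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

def image_to_text(path):
    """Image-processing + OCR stage: clean up the scan, then extract text."""
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return pytesseract.image_to_string(binary)

# ML stage: classify the extracted text into document types
classifier = Pipeline([
    ("tfidf", TfidfVectorizer()),
    ("model", LogisticRegression(max_iter=1000)),
])

# Hypothetical labelled training scans; a real system would use many more
train_paths = ["claim_001.png", "invoice_001.png", "claim_002.png", "invoice_002.png"]
train_labels = ["claim", "invoice", "claim", "invoice"]
classifier.fit([image_to_text(p) for p in train_paths], train_labels)

print(classifier.predict([image_to_text("incoming_document.png")]))
```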
These techniques can be used to automate surprisingly complex manual workflows. Chatbots can automatically be deployed in customer service scenarios to handle commonly asked questions, freeing up call centre personnel to service more critical questions and issues. PDF documents and fax images can be straight-through processed from raw form directly into insurance company systems, with humans reserved for handling exceptions where the AI pipeline fails due to unusual data or unusually poor-quality input. With recent hardware advances, it is even possible to implement these capabilities in phones and tablets. We have built applications which allow external agents to snap pictures of documents at remote sites and fully process the images as documents entirely on-tablet in geographies that have little or no cell phone or wireless service. In our current at-home environment, similar apps could be implemented to allow consumers to snap pictures of documents and automatically obscure sensitive data on-device before sending it to a 3rd party company (with user confirmation, of course).
When combined with data science techniques, ML can also excel at guiding efforts and predicting outcomes around sales leads. Machine learning driven feedback loops can direct sales associates to the most promising leads, and direct them toward actions that can make the lead even more attractive.
Automation Fit: Classifying best-fit sales pipelines for sales organisations, fraud detection, chatbots, a wide range of classification problems and many, many other domains.
DEVOPS
DevOps automation is a little different from the other topics we’ve covered here, which have fallen under the scope of business automation. DevOps automation, by way of contrast, focuses on how solutions are delivered to end users, and falls under the umbrella of technology automation.
DevOps focuses on what happens after a developer commits their code to a source code repository: how does it get deployed to a testing environment? To production? How are those environments themselves built, upgraded, and evolved?
At its simplest level, DevOps can take existing manual builds and deployments and automate them. With software like Jenkins, DevOps engineers hook into the source control systems and run builds and deployments automatically without user intervention. These builds can include software quality checks, automatic running of various test suites, security scans, etc.
In a similar vein, deployments can be run automatically or, for higher-level environments, triggered on request at the push of a button. This is the bottom layer of DevOps maturity.
With the right kind of architecture, far more ambitious DevOps capabilities can be utilised. Today it is common for cloud native systems to be able to deploy to production with zero downtime for end users. This involves bringing up new infrastructure in parallel to the production system, testing the new infrastructure and application code live, and then switching load balancers from the ‘old’ production infrastructure to the latest version.
Blue-green deployments offer tremendous flexibility and safety to deployments
Such ‘blue-green’ deployments allow teams to test directly in the production environment before releasing to end users. This overcomes a common source of deployment bugs. In a blue-green setup, production is just another environment, and your staff doesn’t have to cross their fingers and toes that the production configuration is correct when you flip the switch on a release.
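As a concrete illustration, the ‘switch’ at the end of a blue-green deployment can be as small as repointing a load balancer. The sketch below assumes an AWS Application Load Balancer managed through boto3, with placeholder ARNs:

```python
# Minimal sketch of the final step of a blue-green deployment on AWS:
# repoint the load balancer listener from the 'blue' target group to the
# freshly tested 'green' one. The ARNs are placeholders.
import boto3

elb = boto3.client("elbv2")

LISTENER_ARN = "arn:aws:elasticloadbalancing:region:account:listener/app/prod/placeholder"
GREEN_TARGET_GROUP_ARN = "arn:aws:elasticloadbalancing:region:account:targetgroup/green/placeholder"

# Once the green stack has passed its live smoke tests, flip production traffic
elb.modify_listener(
    ListenerArn=LISTENER_ARN,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": GREEN_TARGET_GROUP_ARN}],
)
```

Because the blue stack is still running, rolling back is simply the same call pointed back at the old target group.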
For even more ambitious microservices- and serverless-based architectures, we can take this to another level and create the infrastructure itself programmatically. Using Infrastructure-as-Code and Infrastructure-as-Configuration techniques, combined with containerisation technology like Docker, and orchestration layers such as Kubernetes, organisations can automate stand-up of entire environments, and make it easy to change their scaling model in response to external events.
Combined with plug-and-play API-based architectures, agile techniques, bi-directional auto-scaling, and highly available systems, using DevOps technologies and adopting a DevOps culture can not only enable your systems to deploy and scale much more rapidly than ever before, but also enable your developers to be far more productive.
Put simply, the sum of the panoply of DevOps techniques creates safety rails for your teams, which gives them confidence to innovate while preserving your organisation’s standards around performance, security, and uptime. The end result is highly collaborative, high performance teams that are encouraged to innovate in their areas of expertise.
The sum of DevOps practices enables high-performing teams to innovate safely in complex domains
CONCLUSION
In the age of digital necessity, it is imperative for technology leaders to understand what current techniques are available to them, and how even modest investments in automation can have huge impacts on staff efficiency, corporate agility, and the usability of their products and services. This combination of techniques can not only drive efficiencies, but also unlock new revenue streams and deepen customer engagement.
This article is not meant to be comprehensive but is intended to whet your appetite for the possibilities enabled by today’s modern automation toolbox. As I mentioned previously, automation is a broad topic, so I hope I’ve managed to increase your hunger for even more knowledge on the subject.
Mike Spille
VP Enterprise Platform & Enablers
Mike has been delivering software in different roles over the span of a 30+ year career. He has worked in a number of industries including finance, biotech, media, environmental, and pure technology-driven companies. At Endava, he provides technology strategy and architectural insights for clients and partners. When he’s not working or otherwise engaged with his wife and menagerie of children, dogs, and cats, he serves as the Chairman of his Township’s Environmental Commission.