6 min read
Matt Cloke

Few technologies have burst into the public consciousness as dramatically as generative AI.


The idea of machines that can think – and create – like us has loomed large in humanity’s collective imagination since the days of Ancient Greece. Yet the launch of ChatGPT in late 2022 marked a sea change in AI’s long transition from science fiction to reality. Suddenly, a world of once fantastical possibilities has begun to open up.


The public’s fascination with ChatGPT has spilt over into a broader interest in artificial intelligence (AI). A quick look at five years of Google searches for “AI” speaks for itself.




The world of business is paying particularly close attention.


In Q2 2023 earnings calls, AI was referenced 3.7 times on average per call – over twice the previous quarter’s equivalent average of 1.8.


Yet even generative AI is still in the very early stages of enterprise adoption. An August 2023 McKinsey survey showed that, for most business functions, the percentage of regular users remains in single digits, with the highest adoption at 14% in marketing and sales.


This measured pace makes sense. AI is often challenging for data scientists and machine learning engineers to adopt for specific projects, let alone universally.


Before making investments, leaders need to look past the hype cycle and consider the most important question of all: what do you need your technology to do?


To help you out, we’ve broken down some of the most common subtypes of artificial intelligence, alongside their main applications.


Deep learning


Today’s AI tools, generative or non-generative, are generally the product of machine learning (ML) technology: computer models that use algorithms to notice patterns within large datasets. What they do with these insights is the important bit, and varies from tool to tool.


Deep learning (DL) is a type of ML inspired by the workings of the human brain. At its core, deep learning employs ‘artificial neural networks’ to process information. Imagine these networks as a series of interconnected checkpoints. As information travels through each checkpoint, it gets refined, leading to better decision-making. Notably, when there’s an abundance of data available, deep learning often outperforms traditional machine learning algorithms.
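To make the 'checkpoints' metaphor concrete, here's a minimal sketch of a feed-forward neural network in plain Python. The weights are invented for illustration; in a real network they would be learned from data during training.

```python
import math

def sigmoid(x):
    # Squashes any number into the range 0..1.
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    # Each neuron (checkpoint) combines all inputs, then applies
    # a non-linearity that refines the signal it passes on.
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# A tiny two-layer network: 2 inputs -> 3 hidden neurons -> 1 output.
# All weights and biases below are made-up illustrative values.
hidden = layer([0.5, -1.2],
               weights=[[0.1, 0.4], [-0.3, 0.8], [0.5, -0.2]],
               biases=[0.0, 0.1, -0.1])
output = layer(hidden, weights=[[0.7, -0.6, 0.2]], biases=[0.05])
print(round(output[0], 3))  # a probability-like score between 0 and 1
```

Training adjusts those weights and biases so the final score becomes a useful decision, which is where the 'better decision-making' in the analogy comes from.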


DL is extremely broad in its applications, and most varieties of AI we examine in this article are based on DL technology. Its ability to automatically extract complex features from data makes it particularly well suited for working with language, speech, images or videos.


In the world of research and development, DL is having a particularly interesting impact on the life sciences. As we explain in our piece on microscopy, for example, automated pattern recognition is now saving researchers from stretches of tedious preparatory legwork. This both speeds up and scales up labs' capacities, potentially increasing the frequency of major breakthroughs.


At Endava, we’ve helped clients from all sectors apply deep learning to functions as varied as customer service, product personalisation and parsing documents.


Natural language processing


As the parent category of large language models (LLMs) like GPT-4, Bard and LLaMA that have taken the world by storm, natural language processing (NLP) is AI’s most talked-about manifestation. Less a single technology than a multi-disciplinary field of inquiry, it’s all about how computers can understand and replicate the mechanics of human language.


LLMs are just one aspect of natural language processing. If you’ve ever used speech-to-text voice recognition software or spoken to a virtual chatbot, then you’ve seen NLP in action.


NLP offers practical value to organisations of all stripes. If a marketing department wants to dive into the weeds of their customer feedback, an NLP app could parse thousands of reviews and social posts to find the patterns hidden within them. This instant sentiment analysis could save countless hours trawling the internet for anything brand-relevant.
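A heavily simplified sketch of that sentiment-analysis idea is below. Production NLP systems use trained language models rather than hand-written word lists, but the core notion, scoring text against learned associations, is similar. The word lists and reviews here are invented examples.

```python
# Toy lexicon-based sentiment scorer. The word sets are illustrative
# assumptions, not a real sentiment lexicon.
POSITIVE = {"love", "great", "excellent", "fast", "helpful"}
NEGATIVE = {"slow", "broken", "poor", "disappointing", "refund"}

def sentiment(review: str) -> str:
    words = review.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

reviews = [
    "Great product, fast delivery",
    "Slow shipping and a broken box",
]
print([sentiment(r) for r in reviews])  # ['positive', 'negative']
```

Run over thousands of reviews, even this crude approach surfaces aggregate patterns; modern language models do the same job with far more nuance.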


In pharma, NLP is revolutionising drug development. Researchers are using language-processing tools to analyse reams of trial data and electronic health records. The insights generated may provide a shortcut to finding groundbreaking new uses for existing drugs.


Predictive analytics


Like the human brains they're modelled on, AI tools can't see into the future. But they can construct detailed predictions based on granular analysis of what's come to pass.


Predictive analytics (PA) counts among the most widely used applications of machine learning. By combining statistical methods with machine learning, PA tools can detect meaningful patterns across large, historical datasets. Those trends are then extrapolated into the future to create scenarios that decision-makers can act on.
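The pattern-then-extrapolate loop can be sketched with the simplest possible statistical model: a least-squares trend line fitted to historical sales and projected one period ahead. Real PA pipelines use far richer models (seasonality, ML regressors, external signals); the monthly figures below are invented.

```python
# Invented historical data: units sold in each of the last six months.
months = [1, 2, 3, 4, 5, 6]
sales = [100, 108, 115, 125, 131, 140]

# Ordinary least-squares fit of: sales = slope * month + intercept.
n = len(months)
mean_x = sum(months) / n
mean_y = sum(sales) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(months, sales))
         / sum((x - mean_x) ** 2 for x in months))
intercept = mean_y - slope * mean_x

# Extrapolate the detected trend into the future: forecast month 7.
forecast = slope * 7 + intercept
print(round(forecast, 1))  # -> 147.7
```

The principle scales: swap the trend line for a trained ML model and the six data points for years of transaction history, and you have the backbone of a demand-forecasting system.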


Predictive analytics allows businesses to anticipate levels of demand and optimise their inventories in advance of sales spikes or dips. It also helps retailers and direct-to-consumer businesses predict buying behaviour, right down to the level of individual customers.


Machine learning is already playing a leading role in the green energy transition. At Endava, we’ve helped energy companies use predictive optimisation to serve energy more cheaply. We’ve also seen healthcare organisations auto-generate personalised well-being plans for patients.


For hardware-intensive businesses, PA can also safeguard against equipment issues by gauging the likelihood of machine breakdown or failure.


For any company dealing in uncertainty and risk, like underwriters, actuarial firms and banks, PA allows for a new level of differentiation and precision.


Predictive analytics is not a crystal ball, and the outputs are only as good as the inputs. But with a strong data hygiene regimen, PA can help you sweep away the snags and snares from your path to growth.


Computer vision


Computer vision (CV) is a variant of AI wherein deep learning models are taught to discern and differentiate information contained in images. The training consists of exposing a neural network to an enormous mass of images, plus human guidance on what's in the pictures. If you've ever had to select all squares containing bikes on a website login CAPTCHA, then you've helped train a CV model!


A machine that can distinguish between the features of a crowded street scene – and flag anomalies – could aid public safety efforts. Yet these capabilities raise concerns about potential misuse by government agencies. In the meantime, many uses of CV are emerging in the private sector, though these come with their own controversies.


Take autonomous vehicles: if you want to lie back and let your car’s software take the wheel, you need to be able to trust that it can tell a traffic light apart from a lamppost. CV’s level of sophistication and reliability will determine whether self-driving cars can ever become a mass-market reality.


The implications for quality control processes are pretty radical too. Think about a factory floor: a CV-enabled camera pointed at the production line can detect anomalies or defects at scale, much faster than the human eye.
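As a toy illustration of that quality-control idea: compare each product image (here, a tiny grid of grayscale pixel values) against a reference image of a known-good item and flag large deviations. Real CV systems use trained deep networks rather than raw pixel differences; the pixel grids below are invented.

```python
# Reference image of a known-good item (invented 4x3 grayscale grid).
GOOD = [
    [10, 10, 200, 10],
    [10, 200, 200, 10],
    [10, 10, 200, 10],
]

def deviation(image, reference):
    # Mean absolute per-pixel difference from the reference image.
    diffs = [abs(a - b) for row_i, row_r in zip(image, reference)
             for a, b in zip(row_i, row_r)]
    return sum(diffs) / len(diffs)

def is_defective(image, threshold=20):
    return deviation(image, GOOD) > threshold

scratched = [
    [10, 10, 200, 10],
    [10, 30, 30, 10],  # part of the expected pattern is missing
    [10, 10, 200, 10],
]
print(is_defective(GOOD), is_defective(scratched))  # False True
```

A production line camera does essentially this at scale, except the 'reference' is a learned model of what a good product looks like, which makes it robust to lighting, rotation and natural variation.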


We’ve put computer vision into practice in our work with ZEISS, helping them develop a series of image processing modules to be used in life sciences research.


Making the right choice


Despite grand prognostications of AI replacing human jobs, there's no evidence yet of a surge in private sector investment. Research from April 2023 shows AI spend remaining steady over the preceding year, with most organisations still in the experimental phase of AI adoption. But organisations that use these tools are already reaping the benefits: 59% report revenue increases as a direct consequence of AI adoption.


There are strong indications that companies’ AI investments will soon ramp up considerably. Almost 50% of top tech executives see AI as the number one item in their budget for the next year. Yet it is doubtful that this will translate into mass redundancies.


We think that this stands to reason. In our view, AI’s true promise never had anything to do with superseding people. Quite the contrary – intelligent systems have the potential to open untapped reserves of real, human creativity. That’s the core of our vision for human-centric AI that fits around your priorities.


To achieve this, it's important to choose among these AI applications according to your own objectives. A supply chain company may find a natural language processing tool less useful than a computer vision application. The reverse is probably true for an advertising agency.


If you want to know more about how to sort between cutting-edge technologies to find the most suitable tools, download our Emerging Tech Unpacked research paper.


Thanks for sticking around till the end! That’s all from us for now. But our army of in-house AI specialists is always on hand to help you choose a strategy that works for you. Let’s build something special.

