
Mapping the Future Applications of Artificial Intelligence

 
 

AI | Radu Orghidan | 11 February 2021

Fresh off recording the latest episodes of our Tech Reimagined podcast on Artificial Intelligence (AI) – listen to part 1 and part 2 – we sat down with our experts Radu Orghidan and Boris Cergol and continued the conversation to get further insights on the development and successful applications of AI as well as on the future importance of edge cognitive computing. 

AI requires vast quantities of data. How do you ensure that you will have access to enough high-quality data? 

Radu: The majority of currently used Machine Learning (ML) models need large amounts of data and cannot extrapolate to situations not covered by their training data. It is paramount to understand that ML models learn only from the data provided during training. To put it simply: AI is only as smart as the people teaching it. The resulting performance reflects either the context in which the training dataset was acquired or the biases of the people who gathered the data.

Let’s think about a classic object recognition task. We may develop a model that perfectly recognises the mechanical parts on a conveyor belt in a factory. With large quantities of data, the Machine Learning engineer can build a robust model that recognises the objects regardless of their orientation and even under a certain degree of overlap. However, suppose someone cuts down a tree that stood in front of the window next to the conveyor belt while the data was being collected. Sunlight may suddenly hit the scene and cast shadows that make the objects unrecognisable to a model trained under consistent lighting conditions, sharply reducing that model’s usefulness.

Boris: As Radu mentioned, using AI is closely tied to having enough high-quality data to train it. Naturally, organisations are making large investments to obtain proprietary data and build the necessary infrastructure to store and process it. Often, data is also seen as a protective moat against potential competitors. However, I think that we will start seeing a decoupling of data and AI, at least in certain domains, in the next few years. 

Language modelling is a good example of this trend. In the past, if you wanted to deploy a language model, you had to train it from scratch using your own data. Later on, it became common practice to start with a model that was pre-trained on a general dataset and fine-tune that on a much smaller sample of data in order to solve a specific problem. Today, the latest models, such as GPT-3 developed by OpenAI, are broad enough to be able to solve a wide variety of different tasks just by seeing a few examples – similarly to how we might train a new employee for a routine task by showing them some examples of what we need them to do. 
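The few-shot approach Boris describes can be made concrete with a small sketch. Rather than fine-tuning, the examples are placed directly in the prompt sent to a large language model; the sentiment task, the example reviews and the prompt layout below are illustrative assumptions, not a specific API's required format.

```python
# Few-shot prompting: instead of retraining, we show the model a handful of
# worked examples inside the prompt itself. The final "Sentiment:" line
# invites the model to continue the established pattern.

def build_few_shot_prompt(instruction, examples, query):
    """Assemble a prompt from an instruction, worked examples and a new input."""
    lines = [instruction, ""]
    for text, label in examples:
        lines.append(f"Review: {text}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    lines.append(f"Review: {query}")
    lines.append("Sentiment:")
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    "Classify the sentiment of each review as positive or negative.",
    [("Great value, works perfectly.", "positive"),
     ("Broke after two days.", "negative")],
    "Exceeded my expectations.",
)
```

The resulting string would be passed to whichever completion endpoint is in use; the model's continuation of the last line is the prediction.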

Radu: That is a good point. In contrast to machines, humans have the ability to learn new concepts or tasks using just a few examples. Thus, even a small child can understand what a car looks like and recognise one despite never having seen that particular car. 

Boris: I expect that this type of model will greatly expand the use of AI by mitigating the data cold-start problem and also reduce the need for large upfront investments in highly specialised talent or infrastructure. Such models could become a force for the democratisation of AI, but unfortunately, they come with a very important downside: their size. Today, they already contain hundreds of billions of parameters, with some just crossing the trillion-parameter milestone. Training these models is prohibitively expensive for all but the biggest organisations. However, there is compelling evidence that an additional increase in size will lead to even better performance, so we will likely see continued growth. It will be interesting to observe what kind of viable business model will emerge for the massive deployment of such models, and how it will change the balance between data, algorithms and computation. 

Radu: A faster-learning AI will also catalyse science and business, as we will be able to experiment with situations that are difficult or impossible to reproduce, such as finding the right combinations for certain drugs or building visual intelligence for recognising rare species of animals. 

It is, thus, important to bridge the gap between AI and humans. This objective is tackled by Zero-Shot and Few-Shot Learning (FSL) techniques, which allow us to solve ML problems with very limited training data by leveraging prior knowledge. Drawing on approaches such as weakly supervised learning, imbalanced learning, transfer learning and meta-learning, we can build successful FSL methods by engineering the data, the model or the algorithm.

In computer vision, for example, we can enrich data through augmentation, i.e. by applying transformations to already existing data. When data augmentation is not enough, we can produce data by generating synthetic datasets that cover, in a controlled way, the whole domain in which the model is expected to perform. FSL-focused models can adopt strategies such as embedding learning or generative modelling. Finally, the algorithm strategy focuses on techniques that constrain the hypothesis space using prior knowledge, or that improve the conditions for convergence using meta-learning methods.
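The augmentation idea can be sketched in a few lines. The specific transformations below (horizontal flips and brightness jitter, the latter simulating exactly the kind of lighting shift from the conveyor-belt example) are illustrative choices; real pipelines typically combine many more.

```python
import numpy as np

def augment(image, rng):
    """Return a randomly transformed copy of an H x W x C uint8 image.
    Flips simulate viewpoint changes; brightness jitter simulates
    lighting changes the training set never showed."""
    out = image.astype(np.float32)
    if rng.random() < 0.5:
        out = out[:, ::-1, :]            # horizontal flip
    gain = rng.uniform(0.7, 1.3)         # global brightness jitter
    out = np.clip(out * gain, 0, 255)
    return out.astype(np.uint8)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(4, 4, 3), dtype=np.uint8)
batch = [augment(img, rng) for _ in range(8)]  # eight variants of one image
```

Each source image thus yields many training samples, widening the conditions the model has seen without collecting new data.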

And how do you ensure the success of AI projects? 

Radu: A successful AI project is one that consumers have access to and use frequently. The road to that goal is usually long and tortuous. The first obstacles may appear at the very start of an AI initiative, since such endeavours typically replace well-established business processes with innovative ways of working. These changes are often met with reticence or even outright rejection by the people involved. Things get even more complicated when intrapreneurs face confirmation bias and overconfidence bias from experienced business stakeholders. Robert Nardelli, then CEO of Home Depot, famously stated that “there is a fine line between entrepreneurship and insubordination”.

Therefore, innovative initiatives need a certain degree of isolation from the organisational value chain as well as a clear link to the organisation’s purpose. Striking this balance is an art which ultimately determines the success of an AI project. 

The danger of failing to allocate resources for developing disruptive, innovative solutions is real – as illustrated by the frequently discussed collapse of well-established organisations like Blockbuster, who ignored the streaming model proposed by Netflix, or Polaroid and Kodak, who failed to understand the impact of digital cameras. 

Boris: What qualifies as success for an AI project, and the factors that influence it, depend a lot on how far along an organisation is on its path of AI adoption. Radu has highlighted factors that are especially relevant for organisations in the initial stage of their AI initiatives. For those a bit further along, the key questions are how efficiently they can scale their models and how well they can govern them once deployed.

For the scaling phase, the AI team’s engineering excellence is essential. By this I mean the systematic adoption of best practices that are common in software engineering but sometimes skipped by deadline-driven, innovation-focused AI teams: service-oriented architectures, continuous integration, various types of testing, model versioning and so on.
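One of those practices, continuous integration for models, can be illustrated with a minimal acceptance gate: before deployment, every candidate model must beat a pinned accuracy threshold on a frozen holdout set. The threshold, the parity-classifier stand-in and the integer "dataset" below are all illustrative assumptions.

```python
# A minimal CI-style gate: a candidate model is rejected if it regresses
# below the accuracy floor pinned when the production model last shipped.

ACCURACY_FLOOR = 0.90

def accuracy(model, holdout):
    """Fraction of holdout examples the model labels correctly."""
    correct = sum(1 for x, y in holdout if model(x) == y)
    return correct / len(holdout)

def gate_candidate(model, holdout):
    """Raise AssertionError (failing the build) on a regression."""
    acc = accuracy(model, holdout)
    assert acc >= ACCURACY_FLOOR, f"accuracy {acc:.3f} below {ACCURACY_FLOOR}"
    return acc

# Illustrative stand-ins: a parity classifier checked on labelled integers.
holdout = [(n, n % 2) for n in range(100)]
candidate = lambda n: n % 2
gate_candidate(candidate, holdout)   # passes; a regression would raise
```

Run automatically on every merge, a check like this turns "did the new model get worse?" from a manual review question into a build failure.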

Currently, immense resources are being dedicated to AI-related development. Among other things, this also brings a constant stream of new tools to support AI development and deployment. I think it is very advantageous for organisations to not close the door on this part of innovation but to try and design their technology stack and processes in a way that is open to the introduction of new tools, especially those that support a new level of automation. 

Radu: Additionally, we need to be aware of and recognise innovations and trends by maintaining a constant focus on our organisational culture. We need to watch the doctrine and leadership practices on both the customer’s and the solution provider’s side. The latter needs to develop and use tools and technologies aligned with market demand and trends, while the former needs to tackle adoption and change management issues.

Both parties should aim to convey the idea that AI is here to augment humans, not to replace them. This approach sets the premises to obtain user acceptance and attract interest in the technology. Ultimately, we will need to build trust and focus on the user experience, facilitate auditing and create transparency. Digital transformation cannot and should not be treated as a mere technology project. 

Boris: That is a good argument, but when it comes to automation and the idea of striving for AI that augments instead of replaces humans, my view differs from Radu’s. I believe that the extent to which AI can operate entirely without a human in the loop is simply a function of how well the models generalise and handle difficult edge cases. In recent years, we have often been surprised by breakthroughs in AI that significantly shortened the expected timelines, and I think we are in for many more such surprises in the near term. However, accepting that, as technology advances, certain parts of our work will be delivered more effectively and efficiently by AI will let organisations and individuals design work for humans at a higher level of complexity, where human intelligence is still required.

What is edge cognitive computing? 

Radu: Edge computing is a distributed architecture of computing devices that underpins Internet of Things (IoT) technologies. Data is processed on the device itself, i.e. near the source, rather than being transmitted to a data centre.

Cognitive computing describes the use of computational models to simulate the human thought process in complex situations where the answers may be ambiguous and uncertain. While AI’s basic use case is to implement the best algorithm to solve a problem, cognitive computing goes one step further and tries to mimic human intelligence and wisdom by analysing a series of factors. One of my favourite definitions is the one provided by IBM: they define cognitive computing as an “advanced system that learns at scale, reasons with purpose and interacts with humans in a natural form”.

Cognitive computing methods augment humans’ capabilities through description, prediction or prescription abilities. When cognitive computing methods are deployed on the edge, we get edge cognitive computing solutions. For example, we recently developed a knowledge management solution using natural language processing (NLP) and computer vision for document classification, both of which run on the edge. Combined with a cloud-operated QnA bot, this enables verbal conversations with users looking for specific information in digitised documents. This illustrates how edge and cloud computing can be combined to deliver the best results possible. Furthermore, since the solution has direct access to users, it can also perform sentiment analysis. 

And what are some future trends of edge computing in AI? 

Boris: I expect that edge computing will play a central role in the future development of AI because it enables some of the most critical use cases and addresses some of the major problems of current AI technology. 

For example, all AI systems that interact with the physical environment, such as autonomous vehicles or robots, have very high requirements regarding latency, security and reliability. These are practically impossible to reach – let alone guarantee – with cloud-based systems, regardless of the availability of networks such as 5G. We will also see similar requirements in many smart medical devices and in certain industrial or energy equipment. The low-latency requirement will also be introduced through emerging consumer applications using AI in smartphones or VR/AR headsets. 

For many other AI use cases, edge computing will become a way to lower operating costs. I think we will increasingly see large companies offering mass consumer applications offload computation to user devices. I wouldn’t be surprised if we were soon debating the ethics of that.

Closely related to the costs is the issue of energy expenditure and inefficiency. This is a growing concern in AI, which is not surprising given the trend towards models requiring ever larger amounts of computation. Edge computing can provide a counterbalance to that. Not only are edge computing devices themselves usually designed to be energy-efficient, but they also introduce upper limits to computational resources, which provide a strong motivation to both researchers and practitioners to develop and deploy highly optimised models that seek a better balance between performance and required resources. 

And lastly, I would like to mention the significance of edge computing for data privacy. We now have a variety of approaches for training AI models on data from local devices without sending that data to centralised storage. The one with the most traction is federated learning, in which small models are trained on local data and then aggregated into a bigger model. That way, the aggregate can learn from all local information without ever seeing any raw data, and individual users’ information stays with them.
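The aggregation step of federated learning can be sketched in miniature. Each device fits a model on its own private data; only the fitted parameters travel to the server, which combines them weighted by local dataset size (the "federated averaging" scheme). The linear models and synthetic data below are assumptions chosen to keep the sketch self-contained.

```python
import numpy as np

def local_fit(X, y):
    """Least-squares weights computed on one device's private data."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

def federated_average(local_weights, sizes):
    """Server-side aggregation: size-weighted mean of local parameters."""
    sizes = np.asarray(sizes, dtype=float)
    stacked = np.stack(local_weights)
    return (stacked * (sizes / sizes.sum())[:, None]).sum(axis=0)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
devices = []
for n in (50, 80, 120):                       # three devices, uneven data
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    devices.append((X, y))

# Only the fitted weights and dataset sizes leave each device.
global_w = federated_average([local_fit(X, y) for X, y in devices],
                             [len(y) for _, y in devices])
```

Real deployments iterate this round many times over neural-network parameters and add protections such as secure aggregation, but the privacy property is the same: the server never sees raw data.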

Radu Orghidan

VP Cognitive Computing

Radu is a technical business consultant with in-depth knowledge of innovation management. He leads cross-functional teams to achieve cutting-edge technical objectives for our clients. His projects use data acquired from different sensors such as depth-sensing cameras, mobile robots or microphones to create systems that enhance a user’s ability to understand and interact with the environment… or sometimes to create 3D scans of his two kids. In his free time, Radu loves running outdoors, skiing and being in the snow, or eating seafood while sharing a good bottle of wine.

 
