Mapping the Future Applications of Artificial Intelligence

 
 

AI | Radu Orghidan | 11 February 2021

Fresh off recording the latest episodes of our Tech Reimagined podcast on Artificial Intelligence (AI) – listen to part 1 and part 2 – we sat down with our experts Radu Orghidan and Boris Cergol and continued the conversation to get further insights on the development and successful applications of AI as well as on the future importance of edge cognitive computing. 

AI requires vast quantities of data. How do you ensure that you will have access to enough high-quality data? 

Radu: Most of the Machine Learning (ML) models in use today need large amounts of data and cannot extrapolate their performance to situations that were not covered by the available training data. It is paramount to understand that ML models learn only from the data provided during training. To put it simply: AI is only as smart as the people teaching it. The resulting performance reflects both the context in which the training dataset was acquired and the biases of the people who gathered the data. 

Let’s think about a classic object recognition task. We may develop a model that perfectly recognises the mechanical parts on a conveyor belt in a factory. With large quantities of data, the Machine Learning engineer can build a robust model that recognises the objects independently of their orientation and even up to a certain degree of overlap. However, suppose someone cuts down a tree that stood in front of the window next to the conveyor belt while the data was being collected. The sun may suddenly hit the scene and cast shadows that make the objects unrecognisable to a model trained under consistent lighting conditions, greatly reducing that model’s usefulness. 

Boris: As Radu mentioned, using AI is closely tied to having enough high-quality data to train it. Naturally, organisations are making large investments to obtain proprietary data and build the necessary infrastructure to store and process it. Often, data is also seen as a protective moat against potential competitors. However, I think that we will start seeing a decoupling of data and AI, at least in certain domains, in the next few years. 

Language modelling is a good example of this trend. In the past, if you wanted to deploy a language model, you had to train it from scratch using your own data. Later on, it became common practice to start with a model that was pre-trained on a general dataset and fine-tune that on a much smaller sample of data in order to solve a specific problem. Today, the latest models, such as GPT-3 developed by OpenAI, are broad enough to be able to solve a wide variety of different tasks just by seeing a few examples – similarly to how we might train a new employee for a routine task by showing them some examples of what we need them to do. 
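The few-shot pattern Boris describes can be illustrated with a short sketch: the task description, labels and examples below are invented for illustration, and the resulting string is what one might send to a large language model such as GPT-3.

```python
# Sketch: assembling a few-shot prompt for a large language model.
# The task, examples and labels here are illustrative placeholders.

def build_few_shot_prompt(task_description, examples, query):
    """Format a handful of labelled examples into a single prompt string."""
    lines = [task_description, ""]
    for text, label in examples:
        lines.append(f"Input: {text}")
        lines.append(f"Output: {label}")
        lines.append("")
    # The model is expected to continue the text after the final "Output:".
    lines.append(f"Input: {query}")
    lines.append("Output:")
    return "\n".join(lines)

examples = [
    ("The delivery was late and the box was damaged.", "negative"),
    ("Setup took two minutes and everything worked.", "positive"),
]
prompt = build_few_shot_prompt(
    "Classify the sentiment of each input as positive or negative.",
    examples,
    "The support team resolved my issue quickly.",
)
print(prompt)
```

The model never sees gradient updates for this task; the examples in the prompt are the entire "training set", which is exactly how showing a new employee a few worked examples differs from formal retraining.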

Radu: That is a good point. In contrast to machines, humans have the ability to learn new concepts or tasks using just a few examples. Thus, even a small child can understand what a car looks like and recognise one despite never having seen that particular car. 

Boris: I expect that this type of model will greatly expand the use of AI by mitigating the data cold-start problem and also reduce the need for large upfront investments in highly specialised talent or infrastructure. Such models could become a force for the democratisation of AI, but unfortunately, they come with a very important downside: their size. Today, they already contain hundreds of billions of parameters, with some just crossing the trillion-parameter milestone. Training these models is prohibitively expensive for all but the biggest organisations. However, there is compelling evidence that an additional increase in size will lead to even better performance, so we will likely see continued growth. It will be interesting to observe what kind of viable business model will emerge for the massive deployment of such models, and how it will change the balance between data, algorithms and computation. 

Radu: A faster-learning AI will also catalyse science and business, as we will be able to experiment with situations that are difficult or impossible to reproduce, such as finding the right combinations for certain drugs or building visual intelligence for recognising rare species of animals. 

It is, thus, important to bridge the gap between AI and humans. This objective is tackled by Zero-Shot and Few-Shot Learning (FSL) techniques, which allow us to solve ML problems using very limited training data by leveraging prior knowledge. Drawing on approaches such as weakly supervised learning, imbalanced learning, transfer learning and meta-learning, we can build successful FSL methods by engineering the data, the models and the algorithms. 
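One common way of leveraging prior knowledge is to reuse embeddings from a model pre-trained on abundant data and classify new examples by distance to class prototypes. The sketch below uses toy 2-D vectors standing in for encoder outputs, and the class names are invented for illustration.

```python
# Sketch of a prototype-based few-shot classifier: each class is represented
# by the mean of its few support embeddings, and a query is assigned to the
# nearest prototype. The 2-D "embeddings" are toy stand-ins for the output
# of a pre-trained encoder.

def mean_vector(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def squared_distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def classify(query, support):
    """support maps a class label to a short list of embedding vectors."""
    prototypes = {label: mean_vector(vecs) for label, vecs in support.items()}
    return min(prototypes, key=lambda label: squared_distance(query, prototypes[label]))

support = {
    "screw": [[0.9, 0.1], [1.1, 0.0]],    # only two examples per class
    "washer": [[0.0, 1.0], [0.2, 0.9]],
}
print(classify([0.1, 0.8], support))  # nearest prototype: "washer"
```

Because the heavy lifting is done by the pre-trained encoder, two examples per class can be enough, which is the spirit of the small-child-recognising-a-car analogy above.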

In computer vision, for example, we can enrich data through augmentation, i.e. by applying transformations to already existing data. When data augmentation is not enough, we can generate synthetic datasets that cover, in a controlled way, the whole domain in which the model is expected to perform. On the model side, FSL can adopt strategies such as embedding learning or generative modelling. Finally, the algorithm strategy focuses on techniques that constrain the hypothesis space using prior knowledge or that shape the conditions for convergence using meta-learning methods. 
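The augmentation idea can be sketched in a few lines. Images are represented here as nested lists of pixel values; a real pipeline would use a library such as torchvision or albumentations, but the transformations are the same idea.

```python
# Minimal sketch of data augmentation: derive new training samples by
# applying label-preserving transformations to existing images.

def horizontal_flip(image):
    """Mirror each row of pixels left-to-right."""
    return [list(reversed(row)) for row in image]

def rotate_90(image):
    """Rotate clockwise: the first column (bottom to top) becomes the first row."""
    return [list(row) for row in zip(*image[::-1])]

def augment(image):
    """Return the original image plus a few transformed variants."""
    return [image, horizontal_flip(image), rotate_90(image)]

image = [[1, 2],
         [3, 4]]
for variant in augment(image):
    print(variant)
```

Each variant keeps the object's identity while changing its appearance, which is exactly what helps a model become robust to orientation changes like the conveyor-belt example above.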

And how do you ensure the success of AI projects? 

Radu: A successful AI project is one that consumers have access to and use frequently. The road to that goal is usually long and tortuous. The first obstacles may appear at the very start of an AI initiative, as such endeavours are usually innovative ways of performing well-established business processes. These changes are often met with reticence or even outright rejection by the people involved. Things get even more complicated when intrapreneurs must face the confirmation bias and overconfidence of experienced business stakeholders. Robert Nardelli, former CEO of Home Depot, famously stated that “there is a fine line between entrepreneurship and insubordination”. 

Therefore, innovative initiatives need a certain degree of isolation from the organisational value chain as well as a clear link to the organisation’s purpose. Striking this balance is an art which ultimately determines the success of an AI project. 

The danger of failing to allocate resources for developing disruptive, innovative solutions is real – as illustrated by the frequently discussed collapse of well-established organisations like Blockbuster, which ignored the streaming model proposed by Netflix, or Polaroid and Kodak, which failed to understand the impact of digital cameras. 

Boris: What counts as success for an AI project, and the factors that influence it, depend a lot on how far along an organisation is on its path of AI adoption. Radu has highlighted some factors that are especially relevant for organisations in the initial stage of their AI initiatives. For organisations a bit further along, the key questions are how efficiently they can scale their models and how well they can govern them once deployed. 

For the scaling phase, the AI team’s engineering excellence is essential. By this I mean the systematic adoption of practices that are standard in software engineering but sometimes skipped by deadline-driven, innovation-focused AI teams: service-oriented architectures, continuous integration, various types of testing, model versioning, and so on. 

Currently, immense resources are being dedicated to AI-related development. Among other things, this also brings a constant stream of new tools to support AI development and deployment. I think it is very advantageous for organisations to not close the door on this part of innovation but to try and design their technology stack and processes in a way that is open to the introduction of new tools, especially those that support a new level of automation. 

Radu: Additionally, we need to recognise innovations and trends by keeping a constant focus on our organisational culture. We need to watch the doctrine and leadership practices on both the customer’s and the solution provider’s side. The latter needs to develop and use tools and technologies aligned with market demand and trends, while the former needs to tackle adoption and change management issues. 

Both parties should aim to convey the idea that AI is here to augment humans, not to replace them. This approach lays the groundwork for user acceptance and genuine interest in the technology. Ultimately, we will need to build trust, focus on the user experience, facilitate auditing and create transparency. Digital transformation cannot and should not be treated as a mere technology project. 

Boris: That is a good argument, but when it comes to automation and the idea of striving for AI that augments rather than replaces humans, my view differs from Radu’s. I believe that the extent to which AI can operate entirely without a human in the loop is simply a function of how well the models generalise and handle difficult edge cases. In recent years, we have often been surprised by breakthroughs in AI that significantly shortened expected timelines, and I think we are in for many more such surprises in the near term. However, accepting that, as the technology advances, certain functions of our work will be delivered more effectively and efficiently by AI will enable organisations and individuals to design human work at a higher level of complexity – work that still requires human intelligence. 

What is edge cognitive computing? 

Radu: Edge computing is built around a distributed architecture of computing devices and is a key enabler of Internet of Things (IoT) technologies. Here, data is processed on the device, i.e. near the source, rather than being transmitted to a data centre. 

Cognitive computing describes the use of computational models to simulate the human thought process in complex situations where the answers may be ambiguous and uncertain. While AI’s basic use case is to implement the best algorithm for solving a problem, cognitive computing goes one step further and tries to mimic human intelligence and wisdom by analysing a series of factors. One of my favourite definitions is the one provided by IBM, which describes cognitive computing as an “advanced system that learns at scale, reasons with purpose and interacts with humans in a natural form”. 

Cognitive computing methods augment human capabilities through description, prediction or prescription. When cognitive computing methods are deployed on the edge, we get edge cognitive computing solutions. For example, we recently developed a knowledge management solution using natural language processing (NLP) and computer vision for document classification, both of which run on the edge. Combined with a cloud-operated Q&A bot, this enables verbal conversations with users looking for specific information in digitised documents. It illustrates how edge and cloud computing can be combined to deliver the best possible results. Furthermore, since the solution has direct access to users, it can also perform sentiment analysis. 

And what are some future trends of edge computing in AI? 

Boris: I expect that edge computing will play a central role in the future development of AI because it enables some of the most critical use cases and addresses some of the major problems of current AI technology. 

For example, all AI systems that interact with the physical environment, such as autonomous vehicles or robots, have very high requirements regarding latency, security and reliability. These are practically impossible to reach – let alone guarantee – with cloud-based systems, regardless of the availability of networks such as 5G. We will also see similar requirements in many smart medical devices and in certain industrial or energy equipment. The low-latency requirement will also be introduced through emerging consumer applications using AI in smartphones or VR/AR headsets. 

For many other AI use cases, edge computing will become a way to lower operating costs. I think we will increasingly see larger companies that offer mass consumer applications offloading computation to user devices. I wouldn’t be surprised if we were soon debating the ethics of that. 

Closely related to the costs is the issue of energy expenditure and inefficiency. This is a growing concern in AI, which is not surprising given the trend towards models requiring ever larger amounts of computation. Edge computing can provide a counterbalance to that. Not only are edge computing devices themselves usually designed to be energy-efficient, but they also introduce upper limits to computational resources, which provide a strong motivation to both researchers and practitioners to develop and deploy highly optimised models that seek a better balance between performance and required resources. 

And lastly, I would like to mention the significance of edge computing for data privacy. We now have a variety of approaches for training AI models on data from local devices without sending that data to centralised storage. The one with the most traction is federated learning, in which small models are trained on local data and then aggregated into a bigger model. That way, the aggregate model can learn from all the local information without ever seeing any raw data – and the individual users’ information remains securely with them.
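The aggregation step of federated learning can be sketched in a few lines. This is a FedAvg-style weighted average of client weights; the weight vectors and sample counts below are invented for illustration, and a real system would add secure aggregation and many rounds of this exchange.

```python
# Sketch of the server-side aggregation step in federated learning:
# each client trains locally and only its model weights leave the device;
# the server averages them, weighted by each client's number of samples.

def federated_average(client_updates):
    """client_updates: list of (weights, num_samples) pairs from local devices."""
    total = sum(n for _, n in client_updates)
    dim = len(client_updates[0][0])
    return [
        sum(w[i] * n for w, n in client_updates) / total
        for i in range(dim)
    ]

updates = [
    ([0.2, 0.4], 100),  # device A: weights after local training, sample count
    ([0.6, 0.0], 300),  # device B
]
print(federated_average(updates))  # weighted mean of the two local models
```

Only the weight vectors cross the network; the raw data that produced them stays on each device, which is what delivers the privacy benefit described above.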

Radu Orghidan

VP Cognitive Computing

Radu is a technical business consultant with in-depth knowledge of innovation management. He leads cross-functional teams to achieve cutting-edge technical objectives for our clients. His projects use data acquired from different sensors, such as depth-sensing cameras, mobile robots or microphones, to create systems that enhance a user’s ability to understand and interact with the environment… or sometimes to create 3D scans of his two kids. In his free time, Radu loves running outdoors, skiing and being in the snow, or eating seafood while sharing a good bottle of wine.

 
