
Large Language Models Automating the Enterprise – Part 1

AI | Boris Cergol | 20 July 2023

The recent surge in interest surrounding generative AI is primarily due to the explosive adoption of large language models (LLMs), such as GPT-4. These models have become a hotly debated topic among experts; some view them as the next paradigm shift in artificial intelligence (AI), while others focus on their downsides and limitations.

While these models are an intriguing phenomenon when studied individually, the true value that enterprises stand to gain from them will only materialise once they are integrated into larger systems. With this in mind, we will explore various ways these models can support different levels of enterprise automation. We’ll begin with simpler methods and progress towards more advanced cases, explaining the main drivers behind each level of automation.

Connecting large language models to internal data

The first step towards automation with large language models is connecting them to internal data, usually achieved through an information retrieval approach based on semantic search. This process maps a user’s query into a semantic space where text segments with similar meanings are closer together. Once the query has been mapped and the search has been run, the most relevant segments are inserted into the language model’s prompt and incorporated into its context when generating answers. Although this method doesn’t completely solve hallucination issues, it significantly improves the reliability of the content generated by LLMs.

While this design pattern brings order to unstructured textual data within enterprises, much more data lives in structured databases or in other forms, such as knowledge graphs. Vast amounts of textual data also bring signal-to-noise problems that semantic search alone cannot overcome. It’s therefore crucial not to rely on this simple design pattern by itself, but to complement it with approaches such as keyword filtering or queries against structured databases to obtain the data to be included in the prompt.

This system, at its first level of automation, is principally driven by the proactive engagement of the user in their quest for information. In other words, the system relies heavily on the initiative of users, who typically seek out information through a user interface, such as a chatbot.

The benefits include easier access to accurate information while saving time and effort – sometimes even unlocking previously inaccessible sources of information – which can lead to better coordination across an enterprise and more rapid updates in decision-making processes.

A challenge commonly encountered by companies at this stage of automation is familiarising themselves with the technology and turning it into a product. Enterprises often struggle to understand how to effectively integrate these models into their systems and how to create a user experience that encourages usage across the organisation.

Ambient intelligence

The next frontier in enterprise automation is marked by the advent of ambient intelligence. This involves the creation of workspaces where technology doesn’t just assist but becomes an intrinsic part of the environment – responsive, adaptive and consistently working in harmony with human tasks and processes.

The key to this evolution is multimodal models: AI technologies capable of processing information from a multitude of sources, from spoken words and visual content to written documents. These models operate quietly in the background, processing and storing data from an array of inputs, thereby enriching the contextual understanding and responsiveness of the ambient intelligence environment.

The transformative potential of ambient intelligence is rooted in the dual role AI systems undertake as both archivists and curators. As archivists, AI models capture and digitally encode a myriad of information sources. Everything from meetings to brainstorming sessions and even personal notes is stored, ensuring that every piece of information is preserved, however trivial it may seem.

Conversely, as curators, these systems filter through the extensive data repository, eliminating the noise and highlighting pertinent information. Analysing user interactions and preferences, they offer context-aware recommendations and guide individuals toward useful resources. This intelligent curation revolutionises how we interact with our digital workspaces and enhances our capacity for informed decision-making.

The practical applications of ambient intelligence are as diverse as they are innovative. Consider the automation of taking meeting notes: the AI system transcribes speech in real time, ensuring all critical discussions and action items are accurately captured. This streamlines workflows and significantly boosts the efficiency of meetings. Similarly, applications may range from analysing and understanding screen captures, to converting handwritten notes to digital text, to creating personalised news digests.

Perhaps one of the most exciting aspects of ambient intelligence is the potential for creating personal knowledge bases. By continuously capturing and curating data, AI systems can ensure that valuable tacit knowledge – the unspoken and unwritten information individuals possess – is not lost. This results in a personalised and dynamic knowledge repository that evolves with the user, offering insights and information specifically pertinent to their needs and context.

However, the promising landscape of ambient intelligence also has its challenges, chief among them trust. As AI systems are entrusted with a plethora of sensitive data, it’s critical to have robust data security and privacy measures in place. Users need the assurance that their information is secure and won’t be misused. Only with this trust can ambient intelligence fully realise its transformative potential.

In part 2, we will continue our exploration of enterprise automation with a look at intelligent agents and embodied intelligence.

Boris Cergol

Regional Head of Data, Adriatic

Boris is a machine learning expert with over 15 years of experience across various industries. As Regional Head of Data, he is dedicated to growing our data capabilities in the Adriatic region. Throughout his career, Boris has co-founded and led a data science consultancy, established an AI department in an IT services company, and advised governments on AI strategy and standards. He is passionate about innovation and has a good track record of spotting key technology trends early on. Boris also enjoys sharing his insights through speaking engagements, advising start-ups, and mentoring. Recently, his main areas of interest include large language models and other generative AI technologies.

 
