
AI | Boris Cergol |
27 July 2023

In this blog series, we explore how large language models (LLMs) can support different levels of enterprise automation. In part 1, we began with the emerging methods of connecting LLMs to internal data and the notion of ambient intelligence. Now, we'll progress towards more advanced cases that we see as having a growing impact in the future, explaining the main drivers behind each level of automation.

Intelligent agents

As we delve deeper into the potential of large language models for enterprise automation, we arrive at the concept of intelligent agents. These are autonomous entities capable of observing their environment and directing their actions towards specific goals. The real question then becomes, can we transform large language models into such goal-oriented intelligent agents?

The transformation of LLMs into intelligent agents primarily hinges on two capabilities: self-reflection and code generation.

Self-reflection refers to the model’s capacity to evaluate its output, using it as context or input for subsequent responses. This iterative process allows the model to assess whether its output aligns with the set goal and adjust its responses accordingly. The LLM essentially becomes a self-regulating system, continuously refining its output based on self-analysis and goal alignment.
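This generate-critique-refine cycle can be sketched in a few lines. The snippet below is a minimal, illustrative loop, not a production framework: `call_llm` is a hypothetical stand-in for any chat-completion API, stubbed here so the example runs offline and simply pretends the answer improves after one round of critique.

```python
def call_llm(prompt: str) -> str:
    # Stubbed model for illustration: reports "PASS" once it sees a
    # revised draft, and otherwise returns a draft or a critique.
    if "Critique" in prompt:
        return "PASS" if "v2" in prompt else "FAIL: missing detail"
    return "draft v2" if "FAIL" in prompt else "draft v1"

def reflect_and_refine(task: str, max_rounds: int = 3) -> str:
    """Generate a draft, ask the model to critique it against the goal,
    and feed the critique back as context until it passes or rounds run out."""
    draft = call_llm(task)
    for _ in range(max_rounds):
        critique = call_llm(f"Critique this answer for the goal '{task}': {draft}")
        if critique.startswith("PASS"):
            break
        # The model's own critique becomes input for the next attempt.
        draft = call_llm(f"Task: {task}\nPrevious answer: {draft}\nFeedback: {critique}")
    return draft
```

The bound on rounds matters in practice: each iteration costs another model call, so real systems cap the loop rather than refining indefinitely.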

The capability to generate code, on the other hand, empowers the model to leverage a range of software tools effectively. By generating code that interfaces with various tools or APIs, the model can overcome potential limitations while interacting within digital environments like web browsers.
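One common pattern for this is to have the model emit a structured call that a harness then dispatches to real functions or APIs. The sketch below assumes that pattern; `fake_llm`, the tool registry, and the `get_stock_level` tool are all hypothetical, standing in for a model prompted to answer with JSON tool invocations.

```python
import json

# Hypothetical registry mapping tool names to callables the agent may use.
TOOLS = {
    "get_stock_level": lambda sku: {"WIDGET-42": 17}.get(sku, 0),
}

def fake_llm(prompt: str) -> str:
    # Stub: a real model would choose the tool and arguments itself
    # based on the user's request and the tool descriptions it was given.
    return json.dumps({"tool": "get_stock_level", "args": {"sku": "WIDGET-42"}})

def run_agent_step(user_request: str):
    """Ask the model which tool to call, then execute that call on its behalf."""
    call = json.loads(fake_llm(user_request))
    return TOOLS[call["tool"]](**call["args"])
```

The key design point is that the model never executes anything directly: it only proposes a call, and the surrounding code decides whether and how to run it, which is also where permission checks would live.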

With these capabilities, the AI system evolves from a passive assistant or ‘copilot’ to an intelligent agent. The system shifts from merely responding to queries towards proactively seeking information and performing tasks that align with its goals.

This shift has profound implications for enterprise automation. It represents a move towards artificial intelligence (AI) systems that can initiate interactions, undertake tasks and even delegate responsibilities. Such proactive intelligent agents can dramatically improve efficiency, streamline processes and significantly reduce the human workload.

In the enterprise context, intelligent agents can be applied in various ways. They can manage and optimise workflows, auto-generate reports and even conduct predictive analyses to aid strategic planning. They can handle routine administrative tasks, thus freeing up people’s time to do more complex tasks.

Moreover, intelligent agents can function as personal assistants, scheduling meetings, managing communications and even proactively providing information or reminders based on context and user preferences. They can also play pivotal roles in customer service, not only by responding to customer queries but also by anticipating and addressing potential customer issues proactively.

The increased capability and autonomy of intelligent agents inevitably come with associated costs, particularly those related to API calls and computations. A viable solution to manage these costs is implementing a hierarchy of agents with varying skill levels. In such a system, lower-skilled agents can handle more straightforward tasks, thus conserving resources. More complex tasks or issues can be escalated to higher-skilled agents. This arrangement optimises the cost-effectiveness of the system by ensuring that resources are only expended when necessary.
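A simple way to picture this hierarchy is a router that tries a low-cost agent first and escalates only when that agent is unsure. The sketch below is purely illustrative: both agents and their confidence scores are stubs, whereas in practice the score might come from the cheap model's self-assessment or a separate verifier.

```python
def cheap_agent(task: str):
    # Stub: handles routine tasks, reports low confidence on "complex" ones.
    confidence = 0.3 if "complex" in task else 0.9
    return f"cheap answer to: {task}", confidence

def expensive_agent(task: str):
    # Stub for a higher-skilled (and costlier) agent.
    return f"expensive answer to: {task}", 0.95

def route(task: str, threshold: float = 0.8) -> str:
    """Try the low-cost agent first; escalate only when it is unsure."""
    answer, confidence = cheap_agent(task)
    if confidence < threshold:
        answer, _ = expensive_agent(task)  # pay the higher cost only here
    return answer
```

The threshold is the cost lever: raising it escalates more tasks to the expensive agent, trading money for quality, while lowering it does the reverse.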

As AI systems become increasingly autonomous, a critical challenge emerges: alignment. We need to ensure that the goals of these intelligent agents align with our own. Misalignment can lead to undesirable outcomes, even when the agent is perfectly optimising according to its goal.

Embodied intelligence

So far in our discussion of how large language models can automate the enterprise, we've mainly focused on knowledge work and digital interactions. However, another domain, often underrepresented in these discussions, is the world of physical interaction and movement – the domain of robotics. Although robotics is considered a field of slower progress compared to AI, the potential influence of LLMs on it should not be underestimated.

The rise of intelligent agents extends beyond virtual environments into the real world, interfacing with APIs that allow them to impact our physical environment. However, there’s another step where the agent doesn’t merely interface with our environment but traverses and manipulates it as a fully embodied entity.

LLMs were previously confined to the realm of processing and generating text. However, advancements over the years have led to successful experiments in linking these models to robotic systems. Consequently, we are beginning to witness a convergence in the progress of large language models and robotics.

Many enterprises are already exploring the benefits of embodied intelligence through humanoid robots currently under development. These intelligent systems can perform physical tasks, from organising stock in a warehouse, to carrying out maintenance tasks in industrial environments or assisting employees in their daily operations. The impact of these systems within enterprise environments could be far more extensive than anticipated.

One of the most fascinating aspects of embodied intelligence is its potential to redefine how we perceive intelligent systems. As these AI models begin to move, interact and respond in the physical space, they become more tangible and relatable entities. This tangibility increases our capacity to trust and collaborate with these systems, thus strengthening the human-AI synergy within enterprises.

However, the advent of embodied intelligence also amplifies the importance of safety considerations. As these intelligent systems begin to interact with the physical world, the stakes for potential errors or mishaps significantly increase. For instance, a system that incorrectly interprets an instruction could cause damage to physical property or, worse, harm to human lives. Therefore, ensuring that these systems operate safely within their designated parameters becomes a paramount concern.

Boris Cergol

Regional Head of Data, Adriatic

Boris is a machine learning expert with over 15 years of experience across various industries. As Regional Head of Data, he is dedicated to growing our data capabilities in the Adriatic region. Throughout his career, Boris has co-founded and led a data science consultancy, established an AI department in an IT services company, and advised governments on AI strategy and standards. He is passionate about innovation and has a good track record of spotting key technology trends early on. Boris also enjoys sharing his insights through speaking engagements, advising start-ups, and mentoring. Recently, his main areas of interest include large language models and other generative AI technologies.
