<img height="1" width="1" style="display:none;" alt="" src="https://px.ads.linkedin.com/collect/?pid=4958233&amp;fmt=gif">
5 min read
Seth Clifford and Richard Pugh

For highly regulated industries, AI has been a challenging technology to fully embrace. We sat down with Richard Pugh (SVP, Head of Data and AI Strategy) and Seth Clifford (SVP, AI Product Strategy) to explore why transparency in data processes is not just a regulatory obligation but also a cornerstone for trust and innovation. From finding the right balance between technology and human influence to complex challenges and promising opportunities presented by large language models (LLMs), join some of our data and AI experts for a thoughtful Q&A.


Setting the stage for our conversation, let’s talk about a recent IDC report which revealed that 53% of business leaders deem retaining human influence on AI solutions very or extremely important. How do you see the interplay between humans and AI?


  • Richard: I believe the ‘human in the loop’ will always be an important aspect of the design of any AI system. However, the balance of human and machine is always going to depend on factors related to the specific use case, the maturity of an organisation and trust in the solution as capabilities continue to evolve. For example, recommender engines in retail provide a simple illustration where humans may not be involved in each decision. In such cases, solutions need to be highly scalable, making human involvement impractical. On the other hand, it is difficult to envision full automation of interactions with the public during times of crisis or in deciding on first-in-human dosing regimes for drug trials.


Discussing balance, could you share some use cases where the collaboration between AI and humans plays a particularly important role?


  • Richard: High-impact use cases are exactly those where AI and humans work together in harmony – we’ve already seen this in areas such as identifying tumours on patient image scans, where AI is used to spotlight areas of concern for humans to investigate. Similarly, we’re seeing the partial automation of complex processes such as insurance claims, where AI performs a lot of the repetitive tasks and presents humans with a suggested decision and a rationale.


It sounds like the key is carefully analysing how much trust we have in the solution. Is this like building trust with a new team member or supplier, where at first, you practice more control but release it with time if they deliver according to expectations?


  • Seth: Establishing trust in the AI systems we deploy for critical business applications is paramount. Human influence at the outset ensures that we understand how they function, enabling us to guide and train them to deliver consistent, predictable outcomes. As our trust in the solution strengthens over time, our focus on influencing its components will naturally shift. This not only boosts operational comfort but also allows humans to focus on higher-value activities.


Finding the right amount of control should be particularly important for businesses operating in highly regulated industries. How do you see the use of generative AI in these industries?


  • Seth: LLMs provide an amazing new way to interact with data. They process massive data sets and organise information in ways that are simply impractical for humans, both in the difficulty of the work and in the time it would take. The business processes and workflows within highly regulated industries are often explicitly outlined to ensure consistent results, especially where sensitive data is required. LLMs function exceptionally well with well-defined instruction sets that can be followed clearly, and their ability to manipulate complex data at speed makes them uniquely suited for these kinds of industries.
  • Richard: The missing piece is eliminating the mystery around how an AI solution arrives at its answer, a challenge we have addressed. Once implemented correctly, these systems will be able to accelerate our most dauntingly complex data problems in fields such as finance, energy, healthcare, life sciences and many more.


So, is the primary challenge gaining visibility into what happens with the data? Is there a way we can look inside the black box?


  • Seth: Highly regulated industries must maintain strict adherence to requirements around the data they generate and store. AI tools can produce outputs that appear correct, but the path taken to reach those answers is often not as clear as it needs to be for these industries to operate comfortably in a regulated space.
  • Richard: Another challenge arises from how data is managed throughout its life cycle. Regulated industries often work with highly sensitive data, with specific regulations around the way in which data is to be collected, managed, protected, used and reported on. Introducing models that are difficult to interpret and document can pose significant challenges in such environments.
  • Seth: The answer is to employ AI systems with a data-first philosophy and rigorous approach to data transparency at every step of operation. Only in doing so can the most stringent requirements be met with confidence.


As you pointed out, having trust is critical. How do you see the relationship between trust and transparency? How can an AI solution overcome this obstacle?


  • Richard: Transparency and trust are inextricably linked: the only way to truly trust an AI system is to know unequivocally how it functions at every step and be provided visibility into those operations.
  • Seth: To provide an example, our agentic AI industry accelerator, which we internally call Morpheus, is designed to deliver exactly this level of clarity and confidence. It provides full visibility into how the AI agents are ingesting and transforming data through a robust governance layer. Each time an agent acts upon data, the system captures both that transformation and rich metadata around the operation to create a clear line of sight and understanding around the decisions the agent is taking.
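The governance pattern Seth describes can be sketched as a simple audit trail: each time an agent transforms data, the operation and metadata about it are recorded alongside the result. This is a minimal illustration only; the class names, fields and the `claims-agent` example below are hypothetical and are not part of Morpheus itself.

```python
import hashlib
import json
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AuditRecord:
    """Metadata captured for one agent operation on data (hypothetical schema)."""
    agent: str
    operation: str
    input_digest: str
    output_digest: str
    timestamp: str

@dataclass
class GovernanceLayer:
    """Records every transformation an agent applies, giving auditors a
    line of sight from each output back to the data it was derived from."""
    records: list = field(default_factory=list)

    @staticmethod
    def _digest(data) -> str:
        # A content hash lets auditors verify exactly which data was touched.
        return hashlib.sha256(json.dumps(data, sort_keys=True).encode()).hexdigest()[:12]

    def apply(self, agent: str, operation: str, transform, data):
        result = transform(data)
        self.records.append(AuditRecord(
            agent=agent,
            operation=operation,
            input_digest=self._digest(data),
            output_digest=self._digest(result),
            timestamp=datetime.now(timezone.utc).isoformat(),
        ))
        return result

# Example: an agent normalises claim amounts, and the step is logged.
gov = GovernanceLayer()
claims = [{"id": 1, "amount": "1,200"}, {"id": 2, "amount": "950"}]
cleaned = gov.apply(
    agent="claims-agent",
    operation="normalise_amounts",
    transform=lambda rows: [dict(r, amount=float(r["amount"].replace(",", ""))) for r in rows],
    data=claims,
)
```

After the call, `gov.records` holds one entry tying the cleaned output back to the exact input it came from, which is the kind of traceability a regulated reviewer would inspect.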


Beyond trust and transparency, businesses should also adhere to regulations. How do you foresee the regulatory landscape around AI evolving in the coming years, and what implications might this have for data transparency?


  • Seth: It should be expected that regulations around AI systems, their usage and the data they create will continue to evolve as we explore the space further. The pace of change is staggering right now, and regulation typically moves more slowly than technology, so it will be incumbent on governments to appoint the appropriate representatives – people who have technology backgrounds and legitimately understand the concerns involved in data operations – to advise on this field.
  • Richard: With the EU AI Act we have seen Europe take a lead in this area, as it has consistently demonstrated a more forward-thinking approach to monitoring corporate action and ensuring data privacy and protection for its citizens. Requiring the operations of complex data systems to be clearly articulated will hopefully drive the market in a more positive direction, toward increased transparency.


Finally, in tackling questions around trust and impacts on industries, it’s essential to address a vital discussion point: the impact on humans. With so many AI opportunities on the horizon, how do you perceive its impact on people?


  • Seth: While AI can impact workforces through the automation of certain job types, which is certainly something to carefully observe and calibrate for, we believe that AI will be an unprecedented enabler of human ingenuity, invention, creativity and promise. The removal of so many time-consuming parts of our working day will create incalculable opportunities to pursue more meaningful challenges and develop deeper relationships with one another as we collaborate. For people working in every occupation, the introduction of AI in meaningful ways can provide better outcomes and allow them to truly do their best work.

