Article
5 min read
Scott Holman

Many organisations are adopting AI at speed, running co-pilots, automation agents and third-party AI features embedded in the tools their teams use every day. However, with security often treated as an afterthought, many leaders have found that deployments moved faster than the governance and security frameworks could be built around them.

 

Recent research found that 71% of UK employees have used unapproved consumer AI tools at work, and 51% continue to do so every week. This pattern points to a large and growing body of AI usage that sits outside approved tooling (so-called shadow IT or shadow AI), outside identity controls and, in some cases, inside approved platforms but beyond centrally applied controls. In practice, this is often the result of both policy and tooling gaps: employees move faster than approved AI services can be rolled out, while controls such as identity management, admin permissions or agent-to-agent access policies are not yet consistently applied across the estate.

 

This gap between AI adoption and AI security maturity is already beginning to show in day-to-day operations. The solution is not to slow adoption, but to make sure security and governance can keep up as AI moves from pilots to everyday operations.

 

The challenge of scale

 

Controls that work for a single, well-scoped AI pilot do not automatically scale to an enterprise AI estate. A manually reviewed prompt policy for one internal tool, for example, becomes unmanageable across fifty. An access review process designed for a handful of service accounts breaks down when the number of agents is in the hundreds and their tool access changes with each new use case.

 

The challenge is greater in agentic environments, where orchestration is often non-deterministic, and tool use, decision paths and system behaviour can vary from one execution to the next.

 

At scale, governance must be repeatable, auditable and demonstrable across the whole AI estate. And for organisations operating in or selling into the EU, it is no longer optional. AI security operations now carry a regulatory component, which includes showing that AI systems have been assessed, that controls are in place and that staff have appropriate AI literacy.

 

To achieve this, leaders must build the processes, tooling and organisational habits that govern AI at the speed and scale at which the business is deploying it. We’ve outlined three operational loops that provide a practical structure for approaching this.

 

Loop 1: Discover and design

 

You cannot govern what you cannot see. The starting point for AI security operations is building and maintaining an accurate, continuously updated inventory of AI services, models, agents, data flows and identities across the organisation.

 

That inventory must also account for embedded AI capabilities within existing tools, including systems that rely on small or large language models (SLMs or LLMs) behind the scenes, sometimes even to validate decisions made by those same models.

 

However, this can be more challenging than it sounds. AI surfaces sprawl quickly as teams integrate new APIs or add AI-enabled products to the stack. Without a discovery process that actively surfaces these additions, the inventory quickly goes stale.

The inventory must also define the blast radius of each system: not just recording that an AI system exists, but capturing its data access permissions, its operational identities and its integration with business-critical infrastructure. AI security posture management (AI-SPM) tools are beginning to address this gap by mapping AI assets to attack paths. Wiz AI-SPM, for example, provides visibility into AI pipelines, model configurations, data connections and identities, surfacing attack paths that cross from the AI layer into the underlying cloud environment. This can be complemented by data security posture management (DSPM), which helps identify and classify the sensitive data underpinning AI systems. Together, these provide the context needed to assess risk more accurately and reduce the likelihood of data leakage.
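
To make this concrete, an inventory entry might capture something like the sketch below. This is purely illustrative: the fields, names and structure are our own shorthand, not the schema of Wiz AI-SPM or any other tool.

```python
from dataclasses import dataclass, field

@dataclass
class AIAssetRecord:
    """Illustrative inventory entry for a single AI system or agent."""
    name: str                  # e.g. "invoice-triage-agent" (hypothetical)
    asset_type: str            # "model", "agent" or "embedded-feature"
    owner: str                 # accountable team or individual
    identities: list[str] = field(default_factory=list)         # service accounts / managed identities it runs as
    data_scopes: list[str] = field(default_factory=list)         # datasets or classifications it can read
    tool_permissions: list[str] = field(default_factory=list)    # APIs and tools it is allowed to call
    downstream_systems: list[str] = field(default_factory=list)  # business-critical systems it can reach

    def blast_radius(self) -> dict:
        """Summarise what a compromise of this asset could touch."""
        return {
            "identities": self.identities,
            "data": self.data_scopes,
            "tools": self.tool_permissions,
            "systems": self.downstream_systems,
        }
```

Even a lightweight record of this kind makes the blast-radius question answerable: remove any one of those lists and the risk assessment loses a dimension.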

 

Shadow AI deserves particular attention during discovery. Employees using personal accounts to access generative AI tools are rarely malicious; most are simply seeking a productivity boost or working around friction in approved tooling. The operational response is to reduce that friction by providing approved, governed alternatives and to surface ungoverned usage through monitoring so it can be brought into scope.

 

Loop 2: Fortify and enforce

 

The second loop translates the inventory into active controls: who can use which AI systems, under what conditions, with access to which data. These foundations rely on the same controls used to govern any cloud workload, such as identity and access management, service perimeter controls and organisational policies. For agentic systems specifically, enforcement needs to extend to tool permissions and inter-agent communication, not just the access credentials attached to the agent's identity.
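
As a simplified sketch of what that enforcement can look like inside an orchestrator, the example below checks an allowlist before each tool call or agent-to-agent handoff. The agent names, tools and policy structure are hypothetical; real deployments would typically push these decisions into IAM policies and the agent framework itself.

```python
# Hypothetical policy: which tools each agent may call and which peers it may hand off to.
AGENT_POLICY = {
    "invoice-triage-agent": {
        "allowed_tools": {"read_invoice", "lookup_supplier"},
        "allowed_peers": {"payments-approval-agent"},
    },
}

def is_tool_call_permitted(agent: str, tool: str) -> bool:
    """Deny by default: an unknown agent or an unlisted tool is blocked."""
    policy = AGENT_POLICY.get(agent)
    return policy is not None and tool in policy["allowed_tools"]

def is_handoff_permitted(source_agent: str, target_agent: str) -> bool:
    """Inter-agent communication is governed by the same allowlist principle."""
    policy = AGENT_POLICY.get(source_agent)
    return policy is not None and target_agent in policy["allowed_peers"]
```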

 

An additional, AI-specific layer of control sits between users and models: prompt and response screening, which filters inputs that might inject instructions and scans outputs for content that should not leave the system. For example, Google Cloud's Model Armor helps safeguard generative AI applications by screening and filtering prompts and responses for risks such as prompt injection, jailbreaking, sensitive data leakage and harmful content.
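
The shape of the pattern is easy to sketch, even though production-grade screening is far more sophisticated. The example below is a deliberately crude illustration; it is not Model Armor's API, and the patterns are placeholders rather than a usable detection set.

```python
import re

# Placeholder patterns only; real screening services use far richer detection than regexes.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all|any) previous instructions", re.IGNORECASE),
    re.compile(r"reveal your system prompt", re.IGNORECASE),
]
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{16}\b"),  # 16-digit card-like numbers
]

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt looks safe to forward to the model."""
    return not any(p.search(prompt) for p in INJECTION_PATTERNS)

def screen_response(response: str) -> str:
    """Redact response content that should not leave the system."""
    for p in SENSITIVE_PATTERNS:
        response = p.sub("[REDACTED]", response)
    return response
```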

 

Loop 3: Monitor and recover

 

The third loop brings AI into day-to-day security operations, ensuring that the SOC can see, correlate and act on AI-relevant signals.

 

AI security events have different signatures from traditional network or endpoint events. Relevant signals include prompt API call volumes and patterns, tool call sequences from agents, model access logs and cost anomalies indicating runaway agent loops. Platforms like Google Security Operations consolidate security information and event management (SIEM), security orchestration, automation and response (SOAR) and threat intelligence in a single environment, normalising diverse log sources, including cloud and AI telemetry, against a common data model.
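
To illustrate just one of those signals, the sketch below flags agents whose tool-call volume in a monitoring window far exceeds their baseline, which is one crude way a runaway loop might show up. The field names, baseline and threshold are assumptions for illustration, not Google Security Operations rule syntax.

```python
from collections import Counter

def flag_runaway_agents(tool_call_events: list[dict],
                        baseline: dict[str, int],
                        factor: float = 5.0) -> list[str]:
    """Return agents whose tool-call count this window exceeds `factor` times their baseline.

    tool_call_events: e.g. [{"agent": "invoice-triage-agent", "tool": "read_invoice"}, ...]
    baseline: expected call counts per agent for a window of the same length.
    """
    counts = Counter(event["agent"] for event in tool_call_events)
    return [
        agent for agent, observed in counts.items()
        if observed > factor * baseline.get(agent, 1)
    ]
```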

 

Response playbooks also need updating. An agent exhibiting anomalous tool call behaviour is not the same incident type as a compromised user credential, and the containment steps are different. Isolating an agent's tool access, revoking a managed identity or rolling back an automated workflow requires playbook steps that most organisations have not yet written.
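
As a starting point, a containment sequence for that scenario might look like the sketch below. Every function here is a placeholder for whatever your orchestration, identity and workflow platforms actually expose; the point is the order of operations, not the calls themselves.

```python
# Hypothetical containment steps for an agent showing anomalous tool-call behaviour.
def suspend_tool_access(agent_id: str) -> None:
    print(f"Suspending tool permissions for {agent_id}")      # stop further tool calls immediately

def revoke_agent_identity(agent_id: str) -> None:
    print(f"Disabling managed identity for {agent_id}")       # cut off credentials and service access

def rollback_automated_actions(agent_id: str) -> None:
    print(f"Rolling back workflow actions from {agent_id}")   # reverse automated steps where possible

def contain_compromised_agent(agent_id: str) -> None:
    suspend_tool_access(agent_id)
    revoke_agent_identity(agent_id)
    rollback_automated_actions(agent_id)
```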

 

Partnering for security excellence

 

Operationalising AI security is rarely a single-tool decision. It involves architecture choices, identity design for agents, centralised guardrails and SOC processes that can detect and respond to AI-specific signals.

 

As both a Google Cloud and Google Security partner, we also partner with Wiz, giving our customers access to a combined ecosystem of platform and security expertise. With that expertise, we help organisations apply the right controls consistently across their AI estate, from discovery and AI posture management through to prompt and response protection and SOC integration.

This approach enables teams to move quickly with AI, while keeping identity, data access and monitoring under continuous control.

 

For a deeper look at the operating model, compliance considerations and how to build AI security capabilities that scale, download our e-book.