Article
3 min read
Andy Rossiter
  • Global SVP of Google Cloud at Endava

The enterprise of tomorrow won't just be powered by human talent; it will be driven by an ‘unseen workforce' of AI agents. From automating complex workflows on Google Cloud to orchestrating multi-step business processes, these AI agents are poised to become indispensable. But as they gain autonomy and access sensitive data, a critical question emerges: How do we secure them? 

 

Traditional security training, access policies and incident response frameworks were designed for humans. We now need to reimagine them for our AI counterparts. 

 

Who is training our AI agents? The new security trainers 

 

Just as humans require onboarding and security awareness training, AI agents need ‘security literacy' embedded from their inception. This isn't about traditional classroom training; it's about: 

 

  • Secure prompt engineering: Those crafting the instructions for AI agents are the new security trainers. They must understand prompt injection risks, data sanitization and how to define boundaries that prevent malicious or unintended actions. 
  • Curated data sets: The data an AI agent learns from implicitly dictates its ‘behaviour’. Security teams must collaborate with data scientists to vet training data for bias, vulnerabilities and potential for model poisoning. 
  • Secure foundation models: Organisations will increasingly leverage specialized foundation models. Understanding the security posture of these underlying models and their training data provenance becomes paramount. 

 

Do AI agents understand security? Defining ‘trustworthy behaviour' 

 

AI agents don't 'understand' security in the human sense. Instead, we must define and enforce what constitutes ‘trustworthy behaviour' for an agent. This involves: 

 

  • Least privilege for agents: An AI agent performing a task should only have the exact permissions required, no more. If an agent's role is to analyse sales data, it shouldn't have delete access to production databases. 
  • Behavioural baselines & anomaly detection: We must establish what ‘normal' behaviour looks like for each agent. Any deviation – an agent attempting to access an unauthorized resource or initiating an unusual external connection – should trigger an immediate alert. 
  • Output validation & guardrails: Agents must operate within defined constraints. This includes validating outputs to prevent accidental data leakage or malicious code generation, especially when interacting with external systems. 

 

Proving security to regulators: AI governance in the cloud 

 

Regulators are keenly watching AI, particularly its ethical implications and data handling. Proving the security of your AI workforce will require auditable processes and robust tooling.

 

  • Data lineage & access logs: Knowing where an agent's data came from, who authorised its use and every action it took on that data is non-negotiable. 
  • Explainable AI (XAI) for security context: While not full explainability, being able to trace an agent's ‘reasoning' for a security-sensitive action will be vital for audits. 
  • Continuous compliance monitoring: AI agents are dynamic. Their configurations and permissions can change. Continuous monitoring ensures they remain compliant with evolving regulations. 

 

Google Cloud frameworks for the unseen workforce 

 

Securing this new workforce requires leveraging Google Cloud's robust security ecosystem: 

 

  • Identity and access management (IAM): Critical for establishing least privilege. Every AI agent (or the service account it runs under) needs a clearly defined identity and role. 
  • Vertex AI workbench & pipelines: For controlling the secure development and deployment of agents. Ensuring code and data are scanned and immutable. 
  • Google SecOps (Chronicle Security Operations & Mandiant): The ‘Sentinel' for AI agents. Ingesting agent activity logs at scale to detect anomalous behaviour, unauthorized actions or signs of compromise in real-time. Mandiant intelligence can alert to new agent-specific attack vectors. 
  • Wiz (Cloud Native Application Protection Platform - CNAPP): The ‘Architect' for AI agents. Provides deep, agentless visibility into the security posture of the underlying infrastructure that hosts these agents (e.g., GKE, Cloud Run, serverless functions). It identifies misconfigurations, vulnerabilities in container images and ensures the data agents access is protected at rest and in transit. 
  • Data loss prevention (DLP) API: For scanning and redacting sensitive information that AI agents might inadvertently process or generate, especially when interacting with external or public-facing interfaces. 
  • Assured workloads: For meeting specific compliance requirements by isolating workloads into a controlled environment with specific policy and data residency guarantees. 

 

Secure the future of work: building trust in the age of intellligent agents

 

The ‘unseen workforce' of AI agents offers unparalleled opportunities for efficiency and innovation. By proactively building security into their lifecycle – from training and trusted behaviour definition to continuous monitoring and robust compliance frameworks – we can unleash their full potential with confidence, not fear. The future of work is secure, and it's powered by intelligent agents.