Securing AI Agents Against Emerging Cyber Risks

The adoption of artificial intelligence agents is accelerating across organisations.

By 2028, more than 30% of enterprise applications are expected to integrate agentic AI capabilities.

Unlike large language models (LLMs) or traditional assistants, these agents perceive, plan and act autonomously, sometimes without direct human supervision.

This autonomy introduces unprecedented cyber risks: unintended actions, abuse of privileges, poorly governed non-human identities, lack of traceability, and absence of proof of execution. Yet most security strategies remain primarily focused on human users.

With AI agents, security is no longer just about filtering responses, but about governing intent, action and evidence.

Pierrick Conord

AI Cyber Expert, Sopra Steria

In response to this shift, CIOs and CISOs must adopt a structured approach to:

  • distinguish between assistants, LLMs and AI agents in order to adapt the cyber security posture,
  • secure agentic architectures (RAG, tools, APIs, triggers),
  • govern non-human identities and their privileges,
  • ensure traceability, observability and accountability of actions,
  • align with emerging frameworks (AI TRiSM, MAESTRO, AIVSS) and forthcoming regulatory requirements.
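To make the identity-governance and traceability points above concrete, here is a minimal, hypothetical sketch of a tool gateway that enforces a per-agent scope allowlist and writes an audit record for every attempted action. The agent identity, tool names and scopes are illustrative assumptions, not part of any framework cited in the white paper.

```python
import json
import time
import uuid

# Illustrative scopes for a non-human identity (an AI agent), following
# least privilege: this agent may read and flag invoices, nothing more.
AGENT_SCOPES = {
    "invoice-agent": {"read_invoice", "flag_invoice"},
}

def invoke_tool(agent_id: str, tool: str, args: dict, audit_log: list) -> dict:
    """Check the allowlist, then append an audit record for the action.

    Every attempt is logged, allowed or not, so the trail serves as
    proof of execution and supports accountability after the fact.
    """
    allowed = tool in AGENT_SCOPES.get(agent_id, set())
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "agent_id": agent_id,  # non-human identity, not a human user
        "tool": tool,
        "args": args,
        "decision": "allow" if allowed else "deny",
    }
    audit_log.append(json.dumps(record))
    if not allowed:
        raise PermissionError(f"{agent_id} lacks scope for {tool}")
    return {"status": "executed", "tool": tool}

log: list = []
invoke_tool("invoice-agent", "read_invoice", {"id": "INV-42"}, log)
try:
    invoke_tool("invoice-agent", "delete_invoice", {"id": "INV-42"}, log)
except PermissionError:
    pass  # the denied action is still recorded in the audit trail
```

The key design choice is that logging happens before the permission decision is enforced, so denied actions leave the same evidence as allowed ones; in production this record would go to an append-only, tamper-evident store rather than an in-memory list.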

Through this white paper, Sopra Steria analyses emerging threats, highlights the most secure architectures, and shares its convictions on how to integrate AI agents in a controlled, secure and responsible manner.

Download the white paper: