The adoption of artificial intelligence agents is accelerating across organisations.
By 2028, more than 30% of enterprise applications are expected to integrate agentic AI capabilities.
Unlike large language models (LLMs) or traditional assistants, these agents perceive, plan and act autonomously, sometimes without direct human supervision.
This autonomy introduces unprecedented cyber risks: unintended actions, abuse of privileges, poorly governed non-human identities, lack of traceability, and absence of proof of execution. Yet most security strategies remain primarily focused on human users.
"With AI agents, security is no longer just about filtering responses, but about governing intent, action and evidence."
— Pierrick Conord, AI Cyber Expert, Sopra Steria
In response to this shift, CIOs and CISOs must adopt a structured approach to governing and securing agentic AI.
Through this white paper, Sopra Steria analyses emerging threats, highlights the most secure architectures, and shares its convictions on how to integrate AI agents in a controlled, secure and responsible manner.