How can Generative AI help us in terms of IT Security?

by Marius Sandbu, Lead Cloud Architect at Sopra Steria Nordics and Microsoft MVP

Updated analysis from Marius Sandbu, Cloud Evangelist at Sopra Steria. This article reflects the current state of AI in cybersecurity as of 2025. Given the rapid pace of development in this field, organizations should expect continued evolution in both defensive capabilities and threat vectors.

The cybersecurity landscape has transformed dramatically. The rapid evolution of generative AI has introduced powerful new capabilities for both defenders and attackers, fundamentally reshaping how we approach IT security. From autonomous agents that can triage thousands of security alerts to sophisticated deepfakes that can bypass even the most vigilant human verification, the stakes have never been higher, nor the opportunities more promising.

The evolution of generative AI models

The ecosystem of generative AI models has expanded rapidly over the past eight months. Where models like ChatGPT could initially handle around 2,500 words, today's advanced models such as Google Gemini can process up to 5 million words in a single context. This represents a 2,000-fold increase in information processing capabilities, enabling unprecedented analysis of complex security scenarios.

The breakthrough has been in three key areas: scale, accuracy, and cost-effectiveness. Current models are not only handling vastly more information but doing so with significantly reduced hallucination rates. Perhaps most importantly, the cost of running these models has dropped substantially, making advanced AI security tools accessible to organizations of all sizes.

Most transformative has been the introduction of computer control capabilities. These models can now interact directly with systems, taking control of interfaces and executing tasks just as human operators would. This means AI can now move beyond advisory roles to become active participants in security operations.

The rise of autonomous security agents

The most significant development since October 2024 has been the emergence of autonomous security agents: AI systems that can act independently to protect digital infrastructure. This represents a fundamental shift from a reactive to a proactive security posture.

Microsoft's introduction of Security Copilot agents in March 2025 exemplifies this evolution. These agents can:

  • Automatically triage phishing reports: The Phishing Triage Agent processes over 95% of user-submitted phishing reports autonomously, filtering false positives and allowing security teams to focus on genuine threats (see the sketch after this list)
  • Optimize access controls: The Conditional Access Optimization Agent continuously monitors user access patterns and automatically adjusts security policies
  • Manage vulnerability remediation: Agents can prioritize and coordinate patching across enterprise environments

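Microsoft has not published the internals of these agents, but the general shape of an LLM-backed triage loop is easy to sketch. The following Python example is purely illustrative: the OpenAI-style client, model name, prompt, and verdict schema are all assumptions, not Security Copilot's actual design.

    import json
    from openai import OpenAI  # assumes the official openai Python package

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    SYSTEM_PROMPT = (
        "You are a phishing triage assistant. Classify the reported email. "
        'Respond with JSON only: {"verdict": "phishing" | "benign" | '
        '"needs_human", "reasons": ["..."]}'
    )

    def triage_report(sender: str, subject: str, body: str) -> dict:
        """Ask the model for a structured verdict on one user report."""
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative choice, not Microsoft's
            response_format={"type": "json_object"},
            messages=[
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user",
                 "content": f"Sender: {sender}\nSubject: {subject}\n\n{body}"},
            ],
        )
        return json.loads(resp.choices[0].message.content)

    def handle_report(report: dict) -> str:
        """Route on the verdict: close false positives, escalate threats."""
        verdict = triage_report(report["sender"], report["subject"],
                                report["body"])
        if verdict["verdict"] == "benign":
            return "auto-closed"        # filtered false positive
        if verdict["verdict"] == "phishing":
            return "quarantined"        # e.g. pull message, block sender
        return "escalated-to-analyst"   # ambiguous cases keep a human involved

The interesting design choice is the third verdict: an agent that can answer "needs_human" keeps analysts in the loop for ambiguous cases rather than forcing a binary call.
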
Early adopters report remarkable results. Organizations using Security Copilot have seen a 30% reduction in mean time to resolution, with junior security analysts becoming 26% faster and 35% more accurate in their threat response.

Enhanced threat detection and code analysis

The capability to analyze potentially malicious code has reached new levels of sophistication. Where security teams previously needed to set up isolated sandbox environments to understand suspicious scripts, AI can now provide detailed analysis in seconds.
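
A sandbox detonation remains the ground truth, but a model can give a useful first read in seconds. A minimal sketch, reusing the hypothetical OpenAI-style client from the triage example above; the encoded command is truncated and defanged:

    from openai import OpenAI

    client = OpenAI()

    # A (truncated, defanged) encoded PowerShell one-liner of the kind
    # commonly dropped by phishing attachments.
    suspicious = "powershell -nop -w hidden -enc SQBFAFgAIAAoAE4AZQB3AC0A..."

    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative
        messages=[{
            "role": "user",
            "content": "Decode and explain step by step what this command "
                       "does, and list any indicators of compromise:\n\n"
                       + suspicious,
        }],
    )
    print(resp.choices[0].message.content)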

This capability has become particularly crucial as 41% of all code is now AI-generated, with Google reporting that 25% of their internal codebase comes from AI systems. However, this efficiency comes with new risks: studies show that at least 48% of AI-generated code snippets contain vulnerabilities, highlighting the critical need for advanced analysis tools.

GitHub's latest security features demonstrate the potential for AI-powered code review. Their automated pull request system can identify common security issues like buffer overflows and SQL injection vulnerabilities, while advanced language models can detect embedded secrets that traditional regex-based tools miss.
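
To see why regex-based scanners have blind spots that a language model does not, consider the simplified sketch below; the patterns are illustrative toys, not GitHub's actual detection rules.

    import re

    # Simplified patterns of the kind traditional scanners use; real
    # tools ship far larger rule sets.
    SECRET_PATTERNS = [
        re.compile(r"AKIA[0-9A-Z]{16}"),             # AWS access key ID
        re.compile(r"ghp_[A-Za-z0-9]{36}"),          # GitHub personal token
        re.compile(r"(?i)api[_-]?key\s*=\s*['\"][^'\"]{16,}"),  # generic
    ]

    def regex_scan(source: str) -> list[str]:
        return [m.group(0)
                for p in SECRET_PATTERNS for m in p.finditer(source)]

    # Caught: a literal token on a single line.
    leaky = 'API_KEY = "AKIAIOSFODNN7EXAMPLE"'
    print(regex_scan(leaky))

    # Missed: the same credential assembled at runtime. A language model
    # reading the code can still infer that `token` is a hard-coded secret.
    evasive = '''
    prefix = "AKIA"
    suffix = "IOSFODNN7EXAMPLE"
    token = prefix + suffix
    '''
    print(regex_scan(evasive))  # -> []: no single line matches a pattern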

The deepfake threat landscape

The sophistication and accessibility of deepfake technology has grown alarmingly. What once required specialized knowledge and expensive equipment can now be accomplished with consumer-grade hardware and freely available software. Creating a convincing voice clone requires just two minutes of audio, while visual deepfakes can be produced in under 45 minutes using open-source tools.

The financial impact is staggering. The UK engineering firm Arup lost $25 million in early 2024 when an employee was convinced to transfer funds during a video call with participants who appeared to be senior management but were in fact AI-generated deepfakes. This wasn't a technical system breach but sophisticated social engineering that exploited human trust in familiar voices and faces.

The threat extends beyond financial fraud. Social engineering attacks leveraging AI have surged 442% in the second half of 2024, with attackers using AI to create personalized phishing campaigns that can adapt in real-time to victim responses.

The vulnerability explosion

The integration of AI into development workflows has created an unprecedented challenge: the volume of software vulnerabilities is exploding. CVE reports reached over 22,000 in 2024, a 30% increase from 2023. This coincides with the widespread adoption of AI coding assistants, whose usage has grown from 40% of developers in 2021 to near-universal today.

The correlation raises important questions about the security implications of AI-generated code. While AI dramatically improves developer productivity, it also inherits and amplifies the security mistakes present in its training data. As models learn from existing code (including code with vulnerabilities), they risk perpetuating these flaws at scale.
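
A concrete, hedged illustration: string-built SQL queries are abundant in public code, so assistants frequently reproduce the injectable pattern; the fix is the same parameterized query developers have always been taught.

    import sqlite3

    def find_user_unsafe(conn: sqlite3.Connection, username: str):
        # The pattern assistants often reproduce from training data: user
        # input interpolated into the query string, so an input such as
        # "x' OR '1'='1" returns every row.
        return conn.execute(
            f"SELECT * FROM users WHERE name = '{username}'"
        ).fetchall()

    def find_user_safe(conn: sqlite3.Connection, username: str):
        # Parameterized queries keep data out of the query structure.
        return conn.execute(
            "SELECT * FROM users WHERE name = ?", (username,)
        ).fetchall()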

New attack vectors and defensive strategies

The same AI capabilities that enhance security also create new attack vectors. Malicious actors are leveraging AI to:

  • Automate vulnerability discovery: AI can scan codebases faster than human researchers, potentially finding and exploiting zero-day vulnerabilities before defenders can patch them.
  • Generate sophisticated malware: Even attackers with minimal coding skills can now create complex malicious software using AI assistance.
  • Conduct targeted social engineering: AI can analyze social media profiles and public information to craft highly personalized phishing campaigns.

The financial services sector has been particularly impacted, with bank call centers overwhelmed by AI-generated voice cloning attempts to access customer accounts.

The EchoLeak warning

Perhaps most concerning is the emergence of vulnerabilities specific to AI agents themselves. The recently discovered EchoLeak vulnerability in Microsoft 365 Copilot represents the first known "zero-click" attack on an AI agent: an attacker can access sensitive information simply by sending an email, without any user interaction.

This vulnerability highlights a fundamental design challenge: AI agents, by their nature, need broad access to organizational data to be effective. This same access creates new attack surfaces that traditional security frameworks weren't designed to address.

Recommendations for organizations

Based on current trends and emerging threats, security leaders should:

Embrace the fundamentals: The rapid pace of AI development makes it tempting to seek complex solutions, but security still rests on solid data governance, robust information security measures, and comprehensive employee training.

Stay agile: The AI security landscape is evolving too rapidly for static strategies. Organizations need to maintain flexibility in their security architecture and be prepared to adapt quickly as new threats emerge.

Focus on human-AI collaboration: Rather than replacing human security analysts, the most effective approach is augmenting human capabilities with AI tools. This maintains the critical human oversight necessary for complex decision-making while leveraging AI's speed and scale advantages.

Implement zero-trust for AI: As AI agents gain more autonomous capabilities, apply zero-trust principles to limit their access and require verification for sensitive operations.
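
What zero-trust means for an agent in practice is still an open design question. One possible shape, sketched below with entirely hypothetical tool names, is a policy gate between the model and its tools, so that no sensitive action executes on the model's say-so alone.

    from dataclasses import dataclass

    # Hypothetical sensitivity tiers for the tools an agent may call.
    READ_ONLY = {"search_logs", "get_alert"}
    SENSITIVE = {"disable_account", "delete_mailbox", "change_policy"}

    @dataclass
    class ToolCall:
        name: str
        args: dict

    def dispatch(call: ToolCall):
        # Placeholder for real tool implementations; always audit-log here.
        print(f"AUDIT: executing {call.name} with {call.args}")

    def execute_with_policy(call: ToolCall, approved_by_human: bool = False):
        """Gate every tool call rather than trusting the agent's judgment."""
        if call.name in READ_ONLY:
            return dispatch(call)   # low risk: allow, but still log
        if call.name in SENSITIVE and approved_by_human:
            return dispatch(call)   # human verified out-of-band
        raise PermissionError(
            f"{call.name!r} denied: deny by default; sensitive actions "
            "require human approval")

The final branch is the point: any tool the policy does not recognize is refused outright, which is the zero-trust posture applied to the agent itself.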

Invest in detection and response: With attacks becoming more sophisticated and harder to prevent, robust detection and rapid response capabilities become critical. AI-powered security tools can help identify anomalies and respond to threats at machine speed.
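
As a toy illustration of machine-speed anomaly detection, the sketch below fits an isolation forest to made-up login features; a real deployment would use far richer telemetry and a streaming pipeline.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Toy features per login event: [hour_of_day, failed_attempts,
    # new_device (0/1), distance_km_from_last_login]
    baseline = np.array([
        [9, 0, 0, 0], [10, 1, 0, 5], [14, 0, 0, 0], [8, 0, 0, 2],
        [17, 0, 0, 0], [11, 0, 1, 10], [9, 1, 0, 0], [13, 0, 0, 3],
    ])

    model = IsolationForest(contamination=0.1, random_state=0).fit(baseline)

    # A 3 a.m. login from a new device 8,000 km away after six failures:
    suspicious = np.array([[3, 6, 1, 8000]])
    print(model.predict(suspicious))  # -1 marks an anomaly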

Looking forward

The next phase of AI in cybersecurity will likely see the emergence of multi-agent systems: networks of specialized AI agents working together to protect digital infrastructure. These systems could potentially match the sophistication of advanced persistent threats while operating at the speed necessary to counter AI-enabled attacks.

The challenge for security leaders is maintaining the delicate balance between leveraging AI's protective capabilities and managing the new risks it introduces. As AI becomes more autonomous and capable, the stakes of getting this balance right have never been higher.

The future of cybersecurity will be determined not by avoiding AI, but by mastering its application in defense while remaining vigilant against its malicious use. In this new landscape, the organizations that thrive will be those that can harness AI's power while maintaining the human judgment and oversight that remain irreplaceable in the face of complex, evolving threats.