Detecting the Undetectable: AI as pharmakon

by Stéphane Grousseau - Deputy Director of the Cyber Defence and Intelligence Agency at Sopra Steria

In cybersecurity research labs worldwide, a new kind of race is accelerating. On one side, algorithms generate increasingly convincing deepfakes. On the other, different algorithms train to detect them. This duality evokes the Greek concept of pharmakon: both poison and remedy. Today, artificial intelligence embodies this dual nature in information warfare: simultaneously the weapon creating the threat and the shield attempting to protect us from it.

 

Welcome to the machine

"AI is used for profiling and recommendation algorithms, reminiscent of the Cambridge Analytica era but amplifying control over opinion formation," explains Stéphane Grousseau, Deputy Director of the Cyber Defence and Intelligence Agency at Sopra Steria. This "captology", as he calls it, represents one of four transformative dimensions of AI in information warfare, alongside deepfakes, massification, and agentic AI.

Against these threats, the technological response seems obvious: use AI to detect AI. From Australian laboratories to European research institutes, solutions are multiplying. In Australia, the federal police are developing Silverer, a tool that poisons training data to make creating malicious content more difficult. The country's national research agency automatically stores audio deepfake samples to continually refine its detection capabilities. In 2025, some models achieve over 95% accuracy in detecting manipulated content.

But Grousseau tempers this technical optimism: "We must not mistake the target. What makes information harmful isn't necessarily its creation by AI, but its decontextualisation." An authentic photo used out of context can be as misleading as a sophisticated deepfake. Conversely, truthful information illustrated by AI-generated imagery does not necessarily constitute disinformation.

 

The algorithmic arms race

This nuance reveals an uncomfortable reality: technical detection is just one element of a much larger puzzle. Detection tools, however sophisticated, are part of a perpetual arms race. Each advance in detection drives improvement in generation. The Generative Adversarial Networks (GANs) that create deepfakes learn precisely by attempting to fool detection systems.

"The attack/defence battle will continue to advance," acknowledges Grousseau. The recent example of Sora 2's rapid watermark removal illustrates this dynamic. Once protection is implemented, it's circumvented. "But that's no reason to abandon the fight," he insists.

Defence strategy therefore cannot rely on a single technology. "The best defence is cross-checking technologies with human verification," he emphasises. This multi-layered approach combines several methods: using AI to detect AI, employing purely mathematical approaches that analyse the characteristic "statistical fluidity" of generated content, and maintaining critical human oversight throughout.
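The article does not specify which statistical measures are used, but the intuition behind "statistical fluidity" can be illustrated with a toy example: human prose tends to vary its word choices more unevenly than very repetitive or overly smooth text, and simple distribution statistics such as Shannon entropy can surface that difference. This is a deliberately simplistic sketch for illustration only, not a real deepfake or AI-text detector.

```python
import math
from collections import Counter

def word_entropy(text: str) -> float:
    """Shannon entropy (in bits) of the word distribution.

    A toy 'statistical fluidity' measure: highly repetitive text
    concentrates probability mass on few words and scores lower.
    Real detectors use far richer statistics; this is illustrative.
    """
    words = text.lower().split()
    counts = Counter(words)
    total = len(words)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

varied = "the cat sat on the mat while the dog slept by the door"
repetitive = "the the the the the the the the the the the the"

# Varied prose spreads probability over many words, so its entropy is higher.
print(word_entropy(varied) > word_entropy(repetitive))
```

In practice such single statistics are easily gamed, which is exactly why the article stresses cross-checking several methods and keeping a human in the loop.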

 

Trust certification on the way

"An interesting idea is the emergence of an authenticity or trust certificate, similar to watermarking, across an entire content distribution chain," explains Grousseau. This system would function like blockchain: if the certificate remains valid to the final destination, the entire chain would be deemed intact.

This approach requires manufacturer involvement from the point of capture. Camera, video, and smartphone manufacturers would need to integrate cryptographic mechanisms at initial recording. It's a major paradigm shift: rather than detecting the fake, we certify the authentic.
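The chained-certificate idea can be sketched as a hash chain in which each step of the distribution chain signs the content together with the previous step's tag, so that any tampering invalidates every subsequent link. The signing key, content bytes, and two-step chain below are hypothetical placeholders; a real scheme would use per-device asymmetric keys provisioned in hardware, not a shared secret.

```python
import hashlib
import hmac

SECRET = b"device-signing-key"  # hypothetical key; real systems would use hardware-backed keys

def sign(content: bytes, prev_tag: bytes) -> bytes:
    """Bind each step's tag to the previous tag and the current content,
    so modifying any link breaks verification of all later links."""
    return hmac.new(SECRET, prev_tag + content, hashlib.sha256).digest()

def verify(steps) -> bool:
    """Recompute the chain from the start; valid only if every link matches."""
    prev = b"genesis"
    for content, tag in steps:
        if not hmac.compare_digest(sign(content, prev), tag):
            return False
        prev = tag
    return True

# Capture: the camera signs the original frame at the point of recording.
frame = b"raw image bytes"
tag0 = sign(frame, b"genesis")

# Distribution: an editing step re-signs its output plus the previous tag.
resized = b"resized image bytes"
tag1 = sign(resized, tag0)

print(verify([(frame, tag0), (resized, tag1)]))       # intact chain
print(verify([(b"tampered", tag0), (resized, tag1)])) # broken chain
```

If the final recipient can verify the whole chain back to the capture device, the content is certified authentic; a single broken link flags it for scrutiny, which is the inversion the article describes: certifying the authentic rather than detecting the fake.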

Legislative initiatives are accompanying these technological developments. In the United States, the May 2025 TAKE IT DOWN Act mandates rapid removal of non-consensual deepfake content. The International Telecommunication Union is working on multimedia authentication standards. The European AI Act classifies remote biometric verification as "high-risk," requiring documentation and security testing.

 

Beyond technology: the cognitive dimension

Yet even this multi-dimensional approach is not sufficient. The real vulnerability doesn't lie in algorithms but in our cognition. "Security won't come from an 'informational iron dome' but from each person's capacity to exercise critical thinking," warns Grousseau. "We must adopt a Zero Trust approach: trust nothing and no one."

This reality poses a dizzying question: are we panicking unnecessarily? After all, the arrival of the Internet and Wikipedia didn't collapse our relationship with reality or knowledge. But the expert identifies a fundamental difference: "When the new generation, currently in secondary school, grows up with tools like ChatGPT, it risks setting aside critical thinking and reasoning capacity."

If forthcoming studies demonstrate that young users don't perceive AI-generated content as potentially biased or false, we must prepare for a major cognitive paradigm shift. And its consequences could prove devastating for social cohesion and democracy.

 

The democratised pharmakon

The irony of this technological race lies in its democratisation. Deepfake creation tools are becoming accessible to all, with "fraud kits" available on the dark web for a few hundred dollars. Simultaneously, detection tools are proliferating: DeepFake-o-meter evaluates authenticity across multiple formats whilst Alecto AI helps victims remove abusive content.

This democratisation of both poison and remedy transforms information warfare. It's no longer the preserve of nation-states or sophisticated organisations. "It's an inexpensive method, accessible with minimal resources," observes Grousseau. Anyone can now conduct complex information attacks or defend against them.

For Sopra Steria, this reality imposes a dual role. "We develop technological observatories, specific response capabilities, and tools," describes the expert. But beyond technology, the challenge is educational: "This work of explanation and popularisation is essential. We have organisations like Viginum to do it. It's continuous work. We are the Sisyphus of information warfare."

