Humans outside the loop? AI in critical defence operations


In military command centres worldwide, a fundamental question is reshaping doctrines and organisational structures: how far can we automate decisions that put lives at stake on the ground? Whilst artificial intelligence promises unparalleled reaction speed and superhuman precision, the debate over keeping humans in the decision loop reveals profound divisions between operational efficiency and ethical imperatives.

The tyranny of speed

"Experimentation has been taking place in the armed forces for some time, particularly in the US Air Force," notes Stéphane Grousseau, Deputy Director of Cyber Defence and Intelligence at Sopra Steria. "The American military is testing quadruped robots equipped with assault rifles, which raises a fundamental question: should we authorise fully autonomous fire or systematically maintain a human in the decision loop?"

This apparently technical question raises dizzying philosophical and operational dilemmas, because if we systematically maintain a human in the decision loop, "the adversary only has to do one thing: be faster than you," observes Grousseau. "And that's where it gets complicated."

In modern information warfare, actions are fortunately not directly lethal. Speed, however, is not simply a tactical advantage: it is often the decisive element. Disinformation campaigns spread at algorithmic speed, reaching millions of people within hours. Deepfakes can be generated and distributed faster than they can be verified. Toxic narratives contaminate the information space before defenders have even identified the threat.

"When you're subject to an information attack, you don't have months to react," explains Grousseau. "You have hours, sometimes days, and that's already a lot. Often, when you react after a few hours, it's already too late."

This reality creates inexorable pressure towards automation. If the adversary uses AI systems capable of detecting opportunities, generating adapted content and adjusting strategy in real time without human intervention, how can a defence that maintains humans at every decision stage hope to keep pace? "The advantage clearly lies with the attack," observes Grousseau, particularly as the attacker can operate at machine speed whilst the defender slows to the pace of human deliberation.

A continuum rather than a binary choice

Faced with this dilemma, military thinkers have developed "levels of automation", a conceptual framework recognising that the question is not binary but exists on a continuum. "There are several levels," says Grousseau. "Human in the loop, where nothing happens without human validation. Human on the loop, where humans supervise and can take back control, and human outside the loop, where the machine decides alone."

In the context of air defence, these levels translate concretely. The first model prioritises human control at the cost of reaction speed. The second allows automatic target engagement according to predefined parameters, whilst maintaining constant human supervision. The third offers maximum speed, but also maximum risk.
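The three levels can be sketched as a simple decision flow. This is an illustrative model only; the mode names, function and parameters below are hypothetical, not drawn from any real fire-control system.

```python
from enum import Enum

class ControlMode(Enum):
    HUMAN_IN_THE_LOOP = "in"    # nothing happens without human validation
    HUMAN_ON_THE_LOOP = "on"    # machine acts, human supervises and can veto
    HUMAN_OUT_OF_LOOP = "out"   # the machine decides alone

def decide_engagement(mode: ControlMode, machine_recommends: bool,
                      human_approved: bool, human_vetoed: bool) -> bool:
    """Return True if the system engages, under the given control mode."""
    if not machine_recommends:
        return False
    if mode is ControlMode.HUMAN_IN_THE_LOOP:
        # Engagement requires explicit prior human approval.
        return human_approved
    if mode is ControlMode.HUMAN_ON_THE_LOOP:
        # Engagement proceeds unless a supervising human intervenes in time.
        return not human_vetoed
    # Human outside the loop: maximum speed, maximum risk.
    return True
```

The sketch makes the trade-off visible: only the last two branches can act at machine speed, and only the first guarantees that a human has understood the situation before action is engaged.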

We now better understand the temptation of complete automation. But can we truly justify delegating life-or-death decisions to machines for efficiency reasons? Philosophers and legal scholars have long debated "meaningful human control", which assumes that an individual somewhere in the chain of command understands the implications, can be held accountable and has the authority to intervene.

So what does this control mean concretely when decisions must be made in milliseconds? When a human "on the loop" doesn't have time to understand the situation before action is engaged? Does human control remain "meaningful" when reduced to a simple emergency stop button? Or is it merely illusory, with human action reduced to the automatic validation of machine-proposed actions?

Information warfare as testing ground

Questions that are dizzying in the context of lethal weapons become somewhat more manageable in the information domain. "Having systems capable of detecting what's happening in real time and reacting in an automated manner on the information warfare front, why not?" asks Grousseau.

The distinction is crucial. An automated counter-campaign that targets incorrectly may cause diplomatic damage, but it does not kill directly. This fundamental difference opens an experimentation space for automation levels that would be ethically unacceptable in the kinetic domain. "Such campaigns can have consequences in the kinetic domain," acknowledges Grousseau, "but the effects, if they exist, are indirect."

Facing these dilemmas, Sopra Steria has developed a stratified automation approach. Detection and analysis operate largely in automatic mode, with too much data arriving too rapidly for constant human supervision. Systems then alert human analysts and provide action recommendations. For certain well-understood attack types, predefined countermeasures can be deployed automatically. But strategic decisions on narratives, audiences or political implications remain firmly under human control.
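The stratified approach described above can be sketched as a triage routine: automated detection feeds a classifier, well-understood attack types trigger predefined countermeasures, and anything with strategic implications is escalated to a human. The attack categories, confidence threshold and routing labels below are illustrative assumptions, not the actual Sopra Steria platform.

```python
from dataclasses import dataclass

# Attack types with predefined, pre-approved countermeasures (illustrative).
KNOWN_PLAYBOOKS = {"spoofed_domain", "bot_amplification"}

@dataclass
class Alert:
    attack_type: str
    confidence: float       # detector confidence, 0.0 to 1.0
    strategic_impact: bool  # touches narratives, audiences or politics

def triage(alert: Alert) -> str:
    """Route an automated detection to the appropriate response tier."""
    if alert.strategic_impact:
        # Strategic decisions remain firmly under human control.
        return "escalate_to_human"
    if alert.attack_type in KNOWN_PLAYBOOKS and alert.confidence >= 0.9:
        # Well-understood attack: deploy the predefined countermeasure.
        return "auto_countermeasure"
    # Everything else: alert analysts and attach a recommendation.
    return "recommend_to_analyst"
```

The point of the design is that automation handles volume and speed, while the branch conditions encode exactly where human judgement re-enters the loop.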

This responsible automation requires robust safeguards. Sopra Steria has developed a platform offering response patterns, configurable before operational deployment. "Combining technologies, approaches and verifications: this is the method that will make information reliable," adds Grousseau. Systems must also continuously learn from their successes and failures, whilst maintaining strategic human oversight.

Towards a doctrine of responsible automation

The future of information warfare will be neither entirely automated nor entirely manual. It will sit in a dynamic balance that will constantly evolve according to threats, technological capabilities and ethical norms.

"The combination of human expertise and artificial intelligence creates a new defence paradigm, if only because there isn't a single response pathway," concludes Grousseau. This paradigm recognises that different decision loops require different levels of human control. In some, complete automation may be appropriate. In others, human judgement remains indispensable.

The art of modern information defence consists of building systems that maintain humans exactly where their judgement provides the most value. In this vision, the artificial intelligence tool is neither the replacement for humans nor their simple instrument; it becomes a partner in a defence system capable of matching the pace of an adversary who does not wait.
