An object detection algorithm developed for an anti-drone system achieved performance gains of up to 30% by applying aeronautical certification and frugality requirements from the design stage, says Olivier Alquier, Technical Director of the Industry BU at Sopra Steria's CS Group, who believes that in critical systems, constraining artificial intelligence makes it more efficient.
Today, we face two very different views of artificial intelligence. On one side are the large generative models: generalist, probabilistic systems trained on billions of unqualified data points. On the other side is a radically different approach: ultra-specialised algorithms, designed from the outset to be mastered end-to-end, thereby opening the door to certification and trust.
"Artificial intelligence encompasses multiple approaches symbolic, machine learning, deep learning, generative and hybrid," explains Olivier Alquier, Technical Director of the Industry BU at Sopra Steria's CS Group. "ChatGPT illustrates generic conversational
generative AI, which is difficult to compare with our developments in embedded and specialised AI."
This difference is not just philosophical. It determines what can or cannot be embedded in a critical aeronautical system. "Certification and frugality are both needed," says Alquier. "The first requires mastering all possible behaviours, and the simpler a system is, the easier it is to certify. That is precisely the objective of frugality, which pushes us to strip out unnecessary complexity and get back to the exact need."
Determinism vs probability: the fundamental incompatibility
Aeronautical certification rests on two inviolable principles: determinism and reproducibility. A system must always produce the same output for a given input. "If you ask the question 'what colour is the sky', the expected answer must be defined in advance. It must answer blue. If it answers light blue, it is not compliant," says Alquier.
Large language models create several obstacles to this requirement, he says. "They are massive models, completely opaque, trained on immense volumes of heterogeneous data whose quality and origin are not controlled."
Their probabilistic nature is the first obstacle: the very functioning of these models rests on statistical prediction, not certainty. "Ask the same question twice and you will not get exactly the same answer," he says.
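To make that contrast concrete, here is a minimal Python sketch. The "model" is an invented toy next-token distribution, not any real system or vendor API: greedy decoding always returns the same answer for the same input, while the temperature-style sampling typical of large generative models does not.

```python
# Toy sketch: why sampled generation breaks reproducibility while
# greedy decoding preserves it. The "model" is just a hypothetical
# next-token distribution for the prompt "what colour is the sky".
import random

NEXT_TOKEN_PROBS = {"blue": 0.80, "light blue": 0.15, "grey": 0.05}

def greedy_answer() -> str:
    """Deterministic: always pick the most probable token."""
    return max(NEXT_TOKEN_PROBS, key=NEXT_TOKEN_PROBS.get)

def sampled_answer() -> str:
    """Probabilistic: draw a token from the distribution, so two calls
    with the identical input may return different outputs."""
    tokens, weights = zip(*NEXT_TOKEN_PROBS.items())
    return random.choices(tokens, weights=weights, k=1)[0]

print([greedy_answer() for _ in range(5)])
# always: ['blue', 'blue', 'blue', 'blue', 'blue']
print([sampled_answer() for _ in range(5)])
# e.g.:   ['blue', 'light blue', 'blue', 'blue', 'grey']
```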
This variability makes the exhaustive validation required by certification extremely difficult. But this is not the only obstacle. The traceability of training data poses an equally critical challenge. "We do not know with which data these models were
trained, which makes it impossible to explain their results," says Alquier. Without this traceability, no certification can be considered.
Ultra-specialised models: mastery through sober design
Sopra Steria has adopted a radically different strategy, based on eco-design and algorithmic sobriety. "Our algorithms are always designed for very specific use cases, with technologies that we fully master," explains Alquier.
They perform a precise function, such as detecting drones in camera images, predicting the wear of an aircraft oil filter or identifying a landing runway. "This approach translates into specific models rather than generic ones, trained to perform a precise
function, based on qualified and controlled data. It means they are robust models in which we can have confidence." This rigour also applies to the architecture used. The result is explainable, testable and therefore certifiable AI.
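As a rough illustration of what "testable and therefore certifiable" means in practice, consider the sketch below. The rule, thresholds and labels are invented for the example and are not CS Group's actual detector; the point is that a small, fully specified function over a bounded input domain lets every behaviour be enumerated and checked against an independently written requirement.

```python
# Hypothetical single-purpose classifier; thresholds are illustrative only.
def classify(size_m: float, speed_mps: float) -> str:
    """Label an airborne object: small and fast -> drone,
    small and slow -> bird, anything large -> other."""
    if size_m > 1.5:
        return "other"
    return "drone" if speed_mps >= 5.0 else "bird"

# The requirement, written independently of the code: one expected label
# per (size class, speed class) cell of the discretised input domain.
REQUIREMENT = {
    ("small", "slow"): "bird",
    ("small", "fast"): "drone",
    ("large", "slow"): "other",
    ("large", "fast"): "other",
}

# Representative input for each cell of the bounded domain.
REPRESENTATIVE_INPUTS = {
    ("small", "slow"): (0.5, 2.0),
    ("small", "fast"): (0.5, 12.0),
    ("large", "slow"): (3.0, 2.0),
    ("large", "fast"): (3.0, 12.0),
}

# Exhaustive check: every cell is exercised, so every specified
# behaviour of the function is observed and compared.
for cell, expected in REQUIREMENT.items():
    size_m, speed_mps = REPRESENTATIVE_INPUTS[cell]
    assert classify(size_m, speed_mps) == expected, cell

print("all specified behaviours verified")
```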
Frugality: a design principle in its own right
This frugality also delivers concrete environmental and operational benefits. According to the Shift Project, training a large language model can consume as much electricity as 130 French households over one year. Meanwhile, in use, a query on ChatGPT
consumes ten times more energy than a standard Google search. "These large models represent a considerable environmental impact," says Alquier. "Even for a simple text summary, the energy cost remains substantial."
This reality demands moving beyond technological blindness: AI should be developed on the basis of measured and transparent usage, sober and eco-designed systems, and reasoned deployment guided by a genuine carbon return on investment. "Frugality is not a limitation, it is a design principle in its own right. It forces us to focus on what is essential and to optimise every component. It is an approach that reconciles innovation and environmental transition in our development strategies."
Certification requires mastery, mastery requires simplicity and simplicity naturally leads to frugality. This virtuous chain transforms a regulatory constraint into a simultaneous lever for performance, sustainability and sovereignty.
30% performance: when certification requirements improve models
The effectiveness of this approach can be shown through a concrete example. In collaboration with ONERA, CS Group developed a machine learning model for an anti-drone system by rigorously following the recommendations of DO-178C, the reference standard
for critical aeronautical software. The work is part of the preparatory efforts for ARP6983, the new standard currently being finalised that will specifically regulate the use of AI in embedded systems.
"The algorithm was much more frugal and controlled than the initial one," says Alquier. "The direct consequence was significantly more efficient recognition, particularly in the detection of drones in complex situations, where we saw improvements of up
to 30%."
These results were accompanied by a clear improvement in latency times. "Indirectly, the rigour imposed by the regulatory framework also brings performance gains," he says. "We consume fewer resources, we focus solely on system requirements, and we eliminate
functionalities that are unnecessary for our use. We master what we do."
The work was presented at the international ERTS 2024 conference and co-signed by CS Group, IRT Saint-Exupéry and ONERA. However, this model is designed only to detect drones, Alquier notes: ultra-specialisation is the price to pay for quality and robustness, and for the guarantees required by certification.
This approach nevertheless requires sustained investment. That is why Sopra Steria is a founding member of Confiance.AI, while CS Group reinvests 10% of its revenue into research and development, particularly within the ANITI AI cluster, where the company
sits on the scientific council and is involved through a chair dedicated to embedded and certified AI. "The requirements of the aeronautical sector have pushed research toward certified embedded AI," explains Alquier. "Other sectors, such as defence,
benefit from these advances in certification matters."
Algorithmic sovereignty: beyond the technical dimension
Total control of the algorithmic chain goes beyond purely technical issues: it underpins European digital sovereignty, which rests on technological mastery, sustainability, openness and trust. Designing sober, controlled models reduces dependence on massive infrastructures and non-European technologies. Frugality therefore becomes a concrete lever for strategic independence.
To illustrate the risk, consider the CLOUD Act, which allows US authorities to access the data of any American company, wherever in the world it is stored, without any obligation to inform the data's owners. "The use of American technologies raises obvious issues of sovereignty, but also of protection of sensitive data and compliance with the European GDPR," says Alquier.
The result, he says, could be a worrying scenario. "Imagine that, attracted by ready-to-use technological expertise, Europe adopts AI developed by an American company for its defence systems. If a conflict of interest emerges, the United States could access, without
informing Europe, all the strategic data used by this AI. They could also impose usage restrictions or limit the functionalities of the critical system, which could then be constrained at the worst possible moment," warns Alquier.
He says that precedents already exist. "The Americans are already restricting China's access to advanced NVIDIA GPUs in order to prevent their use in the development of Chinese AI."
There are two opposing views on this essential issue. "Europe is working to dissipate the fog that AI casts over our societies, deploying this new technology only where visibility is sufficient. Other nations move forward in the fog," he says. "These two approaches are not incompatible and could be mutually beneficial. But this difference raises questions about the possible scenarios when the fog eventually lifts."
"Europe must face a complex challenge, remaining in the AI race while preserving its fundamental values and its sovereignty," he says. It's for this reason that Sopra Steria is involved in developing independent, reliable and robust solutions, reconciling
digital and environmental transitions within a coherent strategy. In this way, every regulatory constraint can be transformed into a lever for optimisation and enable progress with confidence — even in the fog.