Latency, bandwidth, autonomy: why industrial AI is moving to the edge

by Martin Stolberg - Edge AI specialist at Sopra Steria

Cloud AI struggles with industrial reality: when quality control decisions take milliseconds and defective parts move at production speed, centralised processing arrives too late. Martin Stolberg, Edge AI specialist at Sopra Steria, explains how German manufacturers are deploying on-site intelligence systems to eliminate latency, maintain operational autonomy, and secure their critical production data.

On a modern automotive production line, every component undergoes dozens of quality control checks before final assembly. Visual inspection systems scan thousands of data points per second, comparing measurements against exacting standards. Any deviation must trigger an immediate response. By the time the data travels to a distant cloud server and back, the defective part has already moved several metres down the line.

To combat this latency problem, manufacturers are deploying a new generation of on-site computing systems, known as edge AI, that process data directly at the point of production. These distributed systems eliminate the delay inherent to centralised cloud infrastructure, enabling the split-second decision-making that modern industrial operations demand.

Martin Stolberg, Edge AI specialist at Sopra Steria, works with Germany's flagship companies to deploy these local intelligence systems. His perspective offers practical insights into why edge computing is becoming essential for industrial operations, and what challenges companies face when implementing distributed AI.

Which industrial sectors show the strongest demand for edge AI?

Martin Stolberg: The demand aligns closely with Germany's industrial strengths: manufacturing, automotive, chemicals, and energy. Basically, wherever you have physical processes with edge interfaces that require immediate decision-making. We're seeing particularly strong interest from automotive manufacturers, where every production step concludes with quality control: visual, tactile, or electrical inspection.

But edge AI extends beyond manufacturing. The power outage in Berlin on January 3 highlighted another critical application: infrastructure management. Critical infrastructure must be protected and maintained through decentralised control systems, which perfectly demonstrate edge AI's benefits because they need to operate autonomously without depending on central connectivity.

Can you describe a specific scenario where centralised cloud AI simply couldn't work?

Martin Stolberg: Take automotive quality control. When you're processing either parts of a car or the complete car, what comes at the end of every step is an inspection: quality control of that particular step. Companies process enormous volumes of data during these inspections, but they don't want to transfer this data across multiple networks or have it processed in central data centres they don't control.

Beyond security concerns, there's the physics problem. Some inspections involve high-volume, high-frequency data capture, and from a speed perspective, cloud processing is simply not technically feasible because of latency and data transfer times. If the system takes too long to make a decision and the part has already moved one metre further down the line, it's simply too late. This is why edge AI, with simpler models, faster processing times, relatively simple hardware, and less data transfer, is so attractive for industry.
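The latency arithmetic behind this point is easy to sketch. The figures below (line speed, cloud round-trip, inference times) are illustrative assumptions for the sketch, not numbers from the interview:

```python
# Illustrative latency-budget calculation for inline quality control.
# All figures are assumptions, not measurements from any real deployment.

LINE_SPEED_M_PER_S = 0.5    # assumed conveyor speed
CLOUD_ROUND_TRIP_S = 0.150  # assumed network round trip to a remote data centre
CLOUD_INFERENCE_S = 0.050   # assumed model inference time in the cloud
EDGE_INFERENCE_S = 0.020    # assumed smaller model running on-site

def travel_before_decision(decision_latency_s: float) -> float:
    """Distance (in metres) a part moves before the verdict arrives."""
    return LINE_SPEED_M_PER_S * decision_latency_s

cloud_drift = travel_before_decision(CLOUD_ROUND_TRIP_S + CLOUD_INFERENCE_S)
edge_drift = travel_before_decision(EDGE_INFERENCE_S)

print(f"cloud decision: part has moved {cloud_drift * 100:.0f} cm")
print(f"edge decision:  part has moved {edge_drift * 100:.0f} cm")
```

With these assumed numbers, a cloud round trip lets the part drift ten times further than a local decision would, and that gap only widens as line speeds increase.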

What technical hurdles does your team face when deploying AI models on edge devices with limited resources?

Martin Stolberg: Our expertise really lies in applying fine-tuned models that specifically serve the purpose of a particular edge AI application. The key is knowing which levers to move to achieve good outcomes with reasonable effort. It's about model optimisation rather than trying to force large, complex models onto constrained hardware.
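Stolberg doesn't name the specific optimisation levers his team uses, but one widely used technique for fitting models onto constrained edge hardware is post-training weight quantisation. This is a minimal, purely illustrative sketch of the idea; real pipelines rely on toolchains such as TFLite or ONNX Runtime rather than hand-rolled code:

```python
# Minimal sketch of post-training 8-bit weight quantisation, one common
# lever for shrinking a model to fit edge hardware. Illustrative only.

def quantise(weights: list[float]) -> tuple[list[int], float]:
    """Map float weights to int8-range values plus a scale factor."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127 if max_abs else 1.0
    return [round(w / scale) for w in weights], scale

def dequantise(q: list[int], scale: float) -> list[float]:
    """Recover approximate float weights for inference-time use."""
    return [v * scale for v in q]

weights = [0.12, -0.98, 0.43, 0.07]
q, scale = quantise(weights)
restored = dequantise(q, scale)
# Storage drops from 32-bit floats to 8-bit ints, at a small accuracy cost.
```

The design trade-off mirrors the one Stolberg describes: a slightly less precise model in exchange for a footprint and inference speed that match what relatively simple hardware can deliver.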

The bigger challenge often isn't the initial deployment, but what comes after. If you take the overall effort as 100%, about 10% is building a demo or proof of concept (POC), 20% is the pure implementation setup, and the remaining 70% is typically maintenance, upgrades, and so on. This 10-20-70 rule has long been underestimated, and it holds true for edge AI applications. Companies need to think about the full lifetime of a particular application and the related hardware.

How do you manage model updates across distributed edge deployments with dozens or hundreds of devices?

Martin Stolberg: You definitely don't handle them one by one. It requires orchestration, similar to what we have in end-user computing: central orchestration and automated deployment. Some devices connect through the company network, while others use mobile connectivity (5G or 6G), which allows automated updates to AI applications.

The challenge is understanding the entire ecosystem and knowing how to roll things out — and critically, how to roll back if needed. It's really managing hundreds of compute devices with different connectivity profiles and operational requirements.
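The roll-out-with-rollback pattern Stolberg describes can be sketched as a staged (canary) deployment loop. Everything here is hypothetical: the device names, the health check, and the canary fraction are stand-ins, and a real fleet would use an orchestration platform rather than this loop:

```python
# Sketch of a staged rollout with rollback across a fleet of edge devices.
# Device names, the health check, and the canary fraction are hypothetical.
import random

def healthy_after_update(device: str) -> bool:
    """Stand-in health check; a real one would query device telemetry."""
    return random.random() > 0.02  # assume ~2% of updates regress

def rollout(devices: list[str], canary_fraction: float = 0.1) -> dict:
    deployed, rolled_back = [], []
    canary_count = max(1, int(len(devices) * canary_fraction))
    canaries, rest = devices[:canary_count], devices[canary_count:]

    # Stage 1: update a small canary group and verify health.
    for d in canaries:
        (deployed if healthy_after_update(d) else rolled_back).append(d)

    # Abort the fleet-wide stage if any canary regressed.
    if rolled_back:
        return {"deployed": deployed, "rolled_back": rolled_back,
                "aborted": True}

    # Stage 2: roll out to the remaining devices.
    for d in rest:
        (deployed if healthy_after_update(d) else rolled_back).append(d)
    return {"deployed": deployed, "rolled_back": rolled_back,
            "aborted": False}

fleet = [f"edge-{i:03d}" for i in range(100)]
result = rollout(fleet)
```

The key property is the one Stolberg highlights: knowing how to roll back is built into the procedure, so a bad update stops at the canary stage instead of reaching the whole fleet.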

From a cybersecurity perspective, what are edge AI's advantages, and does it create new vulnerabilities?

Martin Stolberg: There are two sides. On the plus side, you have less data transfer by design because everything happens on the edge, and typically only very small amounts of data ever leave the edge device. From a security perspective, this is excellent because it limits exposure by design.

On the other side, you still have updates, logging, and feedback loops. With hundreds of devices, an attacker has little chance of capturing any particular piece of data, but they can build a statistical overview of what is going on, such as how production-intensive certain processes are. So you create other opportunities for attackers to obtain data, albeit in different forms than with traditional centralised systems.

How do we prevent this? It's fundamentally cybersecurity with edge AI orchestration in mind from the very beginning: applying the same principles we deploy in other industries but building them into the architecture from day one.

How do digital twins fit into edge AI deployments?

Martin Stolberg: Digital twins are extremely valuable. In automotive, you might have a digital model of the manufactured product, a model of the production process, and a digital twin of the software stack and architecture itself. We're actually working with a prominent German car manufacturer on a live application using digital twin models to properly design systems.

The real value comes when you have an existing design and need to add updates and extensions over time. How do these extensions fit into existing models? Do they create any conflicts you might not anticipate when designing a particular extension? We call these "engineering copilots": they help properly integrate new capabilities into existing environments without breaking what already works.

Looking ahead two to three years, what emerging capabilities do you see from the convergence of edge AI with other Industry 4.0 technologies?

Martin Stolberg: Currently, we typically have single-point edge AI systems, sometimes orchestrated along certain manufacturing or process optimisation steps. What we don't have yet is the full universe of edge AI endpoints working together: across factory sites, across maybe not only that single company, but with a network of suppliers.

The future is really about creating an ecosystem of federated data across AI systems, invoking intelligence from one company, its suppliers, maybe external consultancies like Sopra Steria, to jointly work on this ecosystem of edge AI devices. You could move from optimising individual processes to making real-time decisions across entire production networks.

What practical advice would you give to a company planning its first edge AI deployment?

Martin Stolberg: Three critical points. First, keep in mind that this is a very fast-developing technology: what is state-of-the-art today may no longer be cutting-edge in six months' time. Start with a single process line within your manufacturing, but design with the understanding that you'll eventually want the full picture across all your edge devices.

Second, while everyone's keen on large language models, small machine learning applications and small language models will gain more momentum in the coming months and years. These can enable visual recognition of more complex patterns than what's applied today.

Third, and this advice is often repeated but nonetheless critical: start small and think big. Keep that evolving technology landscape in mind. And remember the 10-20-70 rule: the real work isn't in the proof of concept but in maintaining operations over the long run. Setting things up properly from an operations and maintenance perspective will prevent some disappointments.
