Harnessing the power of AI in IT operations

Artificial intelligence has moved from experiment to foundation in IT operations and managed services. For Digital Platform Services (DPS) at Sopra Steria, AI is now central to how infrastructures are modernised, how complex ecosystems are run at scale and how service quality is maintained in a tightening regulatory environment.

This position paper, Harnessing the power of AI, sets out DPS’s view of AI-powered operations: where the market is heading, which capabilities matter most, and how a European provider can deploy them responsibly while respecting sovereignty and sustainability constraints.

A fast‑growing market, still under‑exploited

The paper starts from a simple observation: AI’s potential contribution to the global economy is projected to reach 4.8 trillion dollars by 2033, yet only a fraction of organisations have captured this value in production. The worldwide AI market itself is expected to grow from 294.16 billion dollars in 2025 to 1,771.62 billion dollars by 2032, with a compound annual growth rate of just over 29 per cent.

Behind these headline figures lies a marked shift in how enterprises think about AI. By 2026, around 80 per cent of organisations are expected to invest in AI to obtain real‑time insights from data, and by 2027 half of business decisions could be augmented or automated by AI agents. At the same time, market analysts point to rapid growth in specific segments: a projected compound annual growth rate above 40 per cent for agentic AI, and around 24 per cent for AIOps between 2025 and 2030.

The DPS position paper situates its approach within this context: an AI market that is expanding quickly, but in which many enterprises still struggle to industrialise and govern what they have already built.

From experimentation to industrialised AI operations

The core of the document is a set of operational priorities that we believe are now essential for any organisation treating AI as part of its production infrastructure rather than as an isolated innovation stream. These priorities are:

  • Agentic AI for autonomous, policy‑controlled decision‑making in IT environments.
  • AIOps as the backbone of AI‑powered operations: observability, incident management and automation at scale.
  • MLOps and LLMOps to move beyond proofs of concept and manage AI models throughout their lifecycle.
  • AI inference at scale, to run models in real time across cloud, edge and hybrid architectures.
  • European sovereign AI, ensuring digital independence for regulated and public sector clients.
  • GreenOps and FinOps, aligning AI adoption with cost control and carbon footprint reduction.

Each of these pillars is treated as an operational capability, not as a technology experiment. The paper describes how DPS uses agentic AI, for instance, inside existing managed services such as IT service management with ServiceNow and observability with Dynatrace, where autonomous agents handle end‑to‑end incident detection and remediation with significant reductions in mean time to resolution.

AIOps as a structural requirement

On AIOps, the paper takes a clear position: manually managing modern IT estates is no longer viable. Increasingly hybrid and multi‑cloud environments, large volumes of telemetry data and tighter service level expectations all push traditional operations models towards their limits.

The DPS analysis highlights several functions where AIOps is already making a practical difference:

  • Predicting and detecting anomalies before they cause outages.
  • Automating root‑cause analysis and remediation.
  • Reducing alert noise by correlating logs, metrics, traces and events across on‑premises, cloud and edge systems.
  • Providing real‑time dashboards that link capacity planning, cost–performance trade‑offs and service level indicators.
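
The first three of these functions rest on the same statistical idea: compare each new telemetry reading against its recent baseline and surface only the genuine outliers. A minimal sketch of that idea in plain Python follows; the window size, threshold and latency figures are illustrative assumptions, not DPS defaults or product behaviour:

```python
from statistics import mean, stdev

def zscore_anomalies(series, window=20, threshold=3.0):
    """Flag indices where a metric deviates strongly from its recent baseline."""
    anomalies = []
    for i in range(window, len(series)):
        baseline = series[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        # A reading more than `threshold` standard deviations from the
        # rolling mean is treated as anomalous; flat baselines are skipped.
        if sigma > 0 and abs(series[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# Steady latency with mild variation, then a single spike at index 40:
# only the spike is surfaced, which is how alert noise gets reduced.
latency_ms = [100 + (i % 5) for i in range(40)] + [400] + [100, 101, 102]
print(zscore_anomalies(latency_ms))
```

Production AIOps platforms of course go far beyond a rolling z-score, correlating logs, traces and events across sources, but the noise-reduction principle is the same: baseline first, alert second.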

Within DPS, these capabilities are integrated into a broader service catalogue that covers IT service management, AIOps platforms, observability tools and predictive analytics, with the stated objective of moving towards self‑healing, insight‑driven operations.

Scaling AI with MLOps, LLMOps and inference at scale

The position paper then turns to the question of scale. MLOps is described as the operational backbone for traditional machine learning models, while LLMOps extends this logic to large language models used in generative applications.

DPS details how it integrates these practices into its existing DevOps factories, with continuous integration, automated testing, monitoring and retraining loops to keep models aligned with business needs.
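One way to picture such a retraining loop is as a drift gate in the CI pipeline: recent production scores are compared against the validation baseline recorded at training time, and retraining is scheduled when they diverge. The sketch below is deliberately simplified; the metric, the threshold and the function name are illustrative assumptions rather than a description of the DPS factories:

```python
from statistics import mean

def needs_retraining(baseline_scores, live_scores, max_drop=0.1):
    """Hypothetical drift gate: trigger retraining when live model quality
    falls more than max_drop below the baseline recorded at training time."""
    return mean(baseline_scores) - mean(live_scores) > max_drop

baseline = [0.82, 0.79, 0.85, 0.81]    # validation accuracy at training time
production = [0.70, 0.68, 0.72, 0.66]  # accuracy on recently labelled samples
if needs_retraining(baseline, production):
    print("drift detected: schedule retraining pipeline")
```

The value of wiring even a check this simple into CI is that retraining becomes an automated, auditable event rather than an ad hoc decision.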

Inference at scale is presented as the point where AI delivers tangible value to users. The document explains how techniques such as model quantisation, pruning and knowledge distillation can reduce latency and cost, and how hybrid architectures distribute workloads between cloud data centres and edge locations. It also underlines the need for continuous monitoring, explainability and access control, using established observability stacks and governance frameworks.
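To make the quantisation technique concrete, here is a toy per-tensor int8 quantiser in plain Python. Real inference stacks apply the same idea per layer with calibrated scales and hardware-specific kernels, so the helper names and numbers below are illustrative assumptions only:

```python
def quantize_int8(weights):
    """Toy symmetric quantisation: map floats onto int8 levels [-127, 127]
    using a single per-tensor scale factor."""
    scale = max(abs(w) for w in weights) / 127 or 1.0  # avoid a zero scale
    return [round(w / scale) for w in weights], scale

def dequantize(levels, scale):
    return [q * scale for q in levels]

weights = [0.12, -0.5, 0.33, 0.01]
levels, scale = quantize_int8(weights)
restored = dequantize(levels, scale)
# Each restored weight sits within one quantisation step of the original,
# while storage per value shrinks from a 32-bit float to an 8-bit integer.
assert all(abs(w - r) <= scale for w, r in zip(weights, restored))
```

The 4x reduction in weight storage is where the latency and cost savings mentioned above come from; pruning and distillation attack the same budget by removing parameters or transferring knowledge to a smaller model.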

For clients, the argument is straightforward: without industrialised MLOps, LLMOps and inference pipelines, AI remains stuck in proofs of concept that are expensive to run and hard to trust.

European sovereign AI, GreenOps and FinOps

One of the distinctive aspects of this paper is its emphasis on European sovereignty. It places the EU AI Act and the GDPR at the centre of the regulatory landscape, noting that high‑risk AI systems will be subject to strict obligations and that non‑compliance can carry significant financial penalties. In this context, DPS highlights our role as a European managed services provider, with sovereign cloud capabilities in several countries and a portfolio tailored to public sector and highly regulated industries.

GreenOps and FinOps are treated as complementary disciplines. The document points out that AI workloads can generate volatile cloud spending patterns and substantial energy consumption, particularly when training or running large models. By combining cost‑optimisation practices with energy‑efficient architectures and renewable‑powered data centres, DPS positions our approach as a way to scale AI while keeping both budgets and emissions under control.
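The coupling between cost and carbon can be made tangible with the kind of back-of-envelope calculation a FinOps/GreenOps dashboard automates. All figures below (energy per request, electricity price, grid carbon intensity) are illustrative assumptions, not DPS measurements:

```python
def inference_footprint(requests_per_day, joules_per_request,
                        price_per_kwh=0.20, g_co2_per_kwh=50):
    """Estimate daily energy, cost and emissions for an inference workload."""
    kwh = requests_per_day * joules_per_request / 3.6e6  # 1 kWh = 3.6 MJ
    return {
        "kwh_per_day": kwh,
        "cost_per_day": kwh * price_per_kwh,
        "kg_co2_per_day": kwh * g_co2_per_kwh / 1000,
    }

# One million requests at 36 J each come to 10 kWh a day; halving the
# joules per request (for instance through quantisation) halves both
# the bill and the emissions at once.
print(inference_footprint(1_000_000, 36))
```

The point of the exercise is that energy per request is the shared lever: optimising it serves the FinOps and the GreenOps objective simultaneously.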

A client journey anchored in advisory, build and run

The position paper concludes with a client journey that runs from technical advisory through design and build to transformation and operations at scale. DPS sets out typical pain points – unclear AI strategy, difficulties integrating AI into existing systems, skills gaps, high project abandonment rates after proof of concept – and explains how our teams address each through road‑mapping, secure architectures, integration factories, training and change management.

Throughout, the emphasis remains on AI as an operational layer on top of cloud and infrastructure services, supported by a partner ecosystem that includes hyperscalers, sovereign cloud providers, software vendors and AI specialists.

The full paper offers a detailed, technical view of this approach, and is aimed at organisations that now need to industrialise AI within their IT operations rather than simply experiment with it.