Application replatforming: the Cloud migration booster

by Benjamin Chossat - Sopra Steria France - Head of Cloud Design

Simple set-up, low cost and access to the horizontal elasticity of the Cloud: replatforming is often considered the best solution for porting a business application to the Cloud. All that remains is to decide how best to organise your migration to reduce the application’s dependency on the platform and to maximise the benefits of the process.

 

Separating code from its execution environment has not always been the norm in development, which can cause problems when changing operating systems or migrating to another infrastructure.

One of the ambitions of the Cloud is to reduce the dependency between the application and the host architecture. It’s this separation that makes it possible to design flexible, scalable infrastructures to which resources can be added, components upgraded and security updates applied without altering the code or interrupting service.

Among the different Cloud modernisation strategies, replatforming brings together the set of techniques used to reinstall the application in a new environment while isolating it as much as possible from the concerns of configuration and of execution-stack resilience. The aim is therefore two-fold: to cut the ties with the platform, but also with the instance. Here are the main adjustments to consider.

Process automation

Moving to the Cloud means being able to deploy quickly, often and securely. Two strategies can be implemented to achieve this, either separately or together: automation of the installation and configuration process, and containerisation.

The aim of robotisation, or automation, is to replace manual deployment steps with a set of pre-defined and pre-approved processes. The most suitable tool or tools (e.g. Ansible, Chef, Terraform) are selected according to the constraints of the production chain and the target environment.

Automation eliminates the risk of human error and speeds up deployment. It strengthens the production process, which can then move to continuous delivery more easily and benefit from the resources freed up by lighter validation stages. Implementing an end-to-end process also improves quality control and time to market: in an automated chain, tests can be performed earlier and more regularly.
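
As a minimal illustration of the principle, independent of any particular tool, a deployment can be reduced to a scripted sequence of pre-approved steps that aborts on the first failure; the script names and arguments below are hypothetical.

    # Minimal sketch of an automated deployment: every action is pre-defined,
    # ordered and logged, so the same run can be replayed identically in each
    # environment. The scripts and their arguments are hypothetical.
    import subprocess
    import sys

    DEPLOY_STEPS = [
        ["./build_artifact.sh", "--env", "staging"],   # hypothetical build step
        ["./push_config.sh", "--env", "staging"],      # hypothetical configuration push
        ["./restart_service.sh", "app-backend"],       # hypothetical service restart
    ]

    def deploy() -> None:
        for step in DEPLOY_STEPS:
            print("running:", " ".join(step))
            # check=True aborts the whole deployment on the first failing step,
            # replacing the manual "did that work?" verification
            subprocess.run(step, check=True)

    if __name__ == "__main__":
        try:
            deploy()
        except subprocess.CalledProcessError as exc:
            sys.exit(f"deployment aborted at step: {exc.cmd}")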

Containerisation makes it possible to build an image that bundles both the application and its execution context. The container is extremely portable, which means it can be deployed on demand and in different environments. The available resources can then be fully optimised if the container approach relies on an end-to-end automated pipeline. In practice, the exact link between containerisation and robotisation depends on the project and the application structure.
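
The container image itself is typically described in a Dockerfile, but the build-and-publish step can be plugged into the same automated chain. As a sketch (the image name and registry are hypothetical):

    # Sketch of a container build step wired into the automated chain: the
    # image bundles the application with its execution context, then is pushed
    # to a registry so any environment can pull and run it on demand.
    # The image name and registry are hypothetical.
    import subprocess

    IMAGE = "registry.example.com/team/app-backend:1.4.2"

    def build_and_publish(context_dir: str = ".") -> None:
        # "docker build" packages the code together with its runtime dependencies
        subprocess.run(["docker", "build", "-t", IMAGE, context_dir], check=True)
        # "docker push" makes the exact same image available to every environment
        subprocess.run(["docker", "push", IMAGE], check=True)

    if __name__ == "__main__":
        build_and_publish()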

Guaranteeing architecture resilience

Similarly, moving to a Cloud architecture reduces dependence on individual instances: rather than trying to guarantee the smooth running of a component at any cost, its replacement should be planned for in the event of a failure. It is no longer a matter of striving for an ideal of reliability or robustness, but of guaranteeing the architecture’s resilience. The application is run several times in parallel across several instances, each of which can process all incoming flows. This high availability is expensive to implement and maintain on bespoke architectures, but it turns out to be much easier to achieve in the Cloud, including for existing applications.

This approach involves handling synchronous processing in a way that lets the application benefit from a multi-instantiated, elastic infrastructure able to restart tasks itself in the event of a failure. In this context, load balancers manage task distribution and scalability, using context data stored outside the instances, in a dedicated NoSQL database for example.
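
A minimal sketch of that idea, assuming a Redis key-value store as the external context store (the host name and key layout are illustrative): because no state lives in the instance itself, any replica behind the load balancer, including a freshly restarted one, can serve any request.

    # Sketch of a stateless request handler: all session context lives in an
    # external store (Redis here, as one possible NoSQL option), so any instance
    # can serve any request and a failed instance can simply be replaced.
    import json
    import redis

    store = redis.Redis(host="session-store.internal", port=6379)

    def handle_request(session_id: str, payload: dict) -> dict:
        # load the context from the shared store rather than local memory
        raw = store.get(f"session:{session_id}")
        context = json.loads(raw) if raw else {"step": 0}

        # business processing would happen here; we just advance a counter
        context["step"] += 1
        context["last_payload"] = payload

        # write the updated context back so the next request can land on any
        # other instance and still find the same state
        store.set(f"session:{session_id}", json.dumps(context), ex=3600)
        return {"session": session_id, "step": context["step"]}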

Scheduling and peripheral services

From an architectural standpoint, this decoupling means rethinking how identity and privacy-related data are managed, along with the user identity attached to the work session previously handled by the database. Carrying this information in JSON Web Tokens leads to a redesign of the application logic so that interim results are stored and processed independently of the back-end. In a modern set-up, the page-by-page navigation of a workflow form is no longer managed by the application server: it runs in the browser, which promotes and simplifies horizontal scalability.
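
As a sketch of carrying identity data with the request itself rather than in a server-side session, here is what token issuance and verification could look like with the PyJWT library; the secret, claims and lifetime are placeholders.

    # Sketch of carrying identity data in a JSON Web Token: the signed token
    # travels with each request, so the back-end no longer needs a server-side
    # session lookup. Secret, claims and lifetime are placeholders.
    import datetime
    import jwt  # PyJWT

    SECRET = "replace-with-a-real-key"

    def issue_token(username: str) -> str:
        claims = {
            "sub": username,
            "exp": datetime.datetime.now(datetime.timezone.utc)
                   + datetime.timedelta(hours=1),
        }
        return jwt.encode(claims, SECRET, algorithm="HS256")

    def read_token(token: str) -> dict:
        # signature and expiry are checked here; no database round trip is needed
        return jwt.decode(token, SECRET, algorithms=["HS256"])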

Modernising also means considering how the architecture will manage asynchronous processes, which allows queues to be created and operated more easily over time. Most complex business applications already integrate scheduling through a database shared between several instances. This model is preserved, and even optimised, in a Cloud environment, since the number of instances dedicated to processing can be adjusted to match the size of the queue: capacity is adapted as needed.
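
That sizing rule can be sketched very simply: measure the backlog and translate it into a target number of worker instances. The thresholds are illustrative, and scale_workers() stands in for whatever scaling API the chosen Cloud platform actually exposes.

    # Sketch of adapting processing capacity to the size of the queue: the
    # current backlog is measured and converted into a target number of worker
    # instances. Thresholds are illustrative; scale_workers() is a stand-in for
    # the platform's real scaling API.
    import math

    TASKS_PER_WORKER = 50   # illustrative throughput assumption
    MAX_WORKERS = 20        # illustrative cost ceiling

    def desired_workers(queue_length: int) -> int:
        if queue_length == 0:
            return 1        # keep a single instance warm
        return min(MAX_WORKERS, math.ceil(queue_length / TASKS_PER_WORKER))

    def scale_workers(count: int) -> None:
        # placeholder: in practice this would call the platform's scaling API
        print(f"requesting {count} worker instance(s)")

    if __name__ == "__main__":
        backlog = 230       # e.g. read from the shared scheduling database
        scale_workers(desired_workers(backlog))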

Adjustments are also necessary for batch executions that have to process a large volume of similar data within a defined time window. Here, the aim is to capitalise on the Cloud’s flexibility by mobilising the necessary infrastructure only for the strict duration of processing in order to reduce costs, which means rethinking how these jobs are scheduled and adding an offline storage service.
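
The pattern can be sketched as three phases, provision, process and release, with results written to an offline storage service so that nothing keeps running (or being billed) between runs; the three helper functions are hypothetical placeholders for the chosen platform’s APIs.

    # Sketch of the "pay only for the duration of processing" batch pattern:
    # temporary infrastructure is requested just before the run, results go to
    # an offline storage service, and everything is released immediately after.
    # The three helpers are hypothetical placeholders for real platform APIs.
    import datetime

    def provision_batch_cluster(size: int) -> str:
        print(f"provisioning {size} temporary node(s)")
        return "batch-cluster-001"           # hypothetical cluster handle

    def store_offline(path: str, data: bytes) -> None:
        print(f"archiving {len(data)} bytes to {path}")  # e.g. object storage

    def release(cluster_id: str) -> None:
        print(f"releasing {cluster_id}")     # nothing left running, nothing billed

    def run_batch(records: list) -> None:
        cluster = provision_batch_cluster(size=len(records) // 1000 + 1)
        try:
            result = b"".join(records)       # stand-in for the real processing
            stamp = datetime.date.today().isoformat()
            store_offline(f"archives/batch-{stamp}.bin", result)
        finally:
            release(cluster)                 # released even if processing fails

    if __name__ == "__main__":
        run_batch([b"record-1", b"record-2"])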

This logic can be applied to all the peripheral components that are vital to the application’s operation but do not directly contribute to creating business value. These components should be replaced as soon as possible by managed services in which routine tasks, updates and backups are automated. Eliminating these expensive and time-consuming tasks frees up capacity for the business-specific parts of the application while improving service quality and the overall robustness of the architecture.
