In the previous edition of this series, we saw how AI transforms the software engineering lifecycle, specifically the Management, Requirements, Design and Development phases. In this edition we look at how AI affects the subsequent Testing, Deployment and Operations activities.
Achieving continuous delivery requires matching the high velocity of development with an equally high velocity of testing, and AI helps achieve that. The main areas where AI currently has wide applicability are automated test case creation and maintenance, vulnerability detection, ensuring test coverage, visual anomaly detection and test case prioritization.
Before delving into these use cases in detail, it is worth going through the Facebook example, as Facebook has developed an ecosystem of AI-enabled testing tools that implements the full cycle of automatically designing test cases, running them, finding errors and fixing them. Its Infer tool points at buggy code through deep static analysis, while the Sapienz tool automatically generates test sequences and finds the exact points of failure, in effect establishing both cause and effect. The Getafix tool suggests fixes for bugs found by Infer and Sapienz, and SapFix can provide patches for them. To be fair, only about 75% of the bugs reported by Sapienz are deemed worth fixing, and only a small fraction of those are fixed automatically by SapFix, mostly null pointer dereferences, but about half of SapFix's patches are accepted directly once a developer has reviewed them. Facebook has been successfully employing these tools in developing and enhancing its platform. It has open-sourced Infer, Getafix and Sapienz, and plans to open-source SapFix soon.
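To make the bug class concrete, the sketch below shows in Python the null-dereference pattern that Infer-style analysis flags and that SapFix most often patches. Infer itself targets Java and C-family code, so this is only an analogy for the bug pattern and the typical guard-style auto-fix; the function and data names are invented.

```python
# A lookup helper that can return None -- a common source of null dereferences.
def find_user(users, name):
    for u in users:
        if u["name"] == name:
            return u
    return None  # callers may forget to handle this case


def greeting_buggy(users, name):
    # Dereferences the result unconditionally: crashes when the user is missing.
    return "Hello " + find_user(users, name)["name"]


def greeting_fixed(users, name):
    # The kind of defensive guard an automated fixer inserts for this bug class.
    user = find_user(users, name)
    if user is None:
        return "Hello stranger"
    return "Hello " + user["name"]


users = [{"name": "ada"}]
print(greeting_fixed(users, "ada"))  # Hello ada
print(greeting_fixed(users, "bob"))  # Hello stranger
```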
A vibrant start-up ecosystem has emerged that leverages AI for automated test case writing. These start-ups adopt different approaches, e.g. generating tests from test plans written in natural language or from changes in the visual aspects of an application, and they usually offer a wider range of AI-enabled testing capabilities than automated test case writing alone. A description of some of these start-ups and their offerings is given below:
- Functionize is a cloud-based automated testing technology used for functional, performance and load testing. It uses ML to speed up test creation and maintenance. Its Adaptive Language Processing (ALP™) engine is based on the reinforcement learning paradigm and allows users to upload test plans written in natural language; its intelligent test agent then converts them into test scripts. The engine asks questions to confirm its understanding, and each time a user answers, it learns more about the specific UI being tested or about how tests are described in general.
- Application visual management is a recent trend where AI is applied to automatically detect changes in the visual aspect of a web application (also known as “visual anomaly detection”) and then take actions such as automatically creating and maintaining tests for new or modified parts of the UI. Applitools is a leading tool for application visual management.
- Mabl is a Software-as-a-Service (SaaS) provider and a unified DevTestOps platform for ML-based test automation. The key features of this solution include auto-healing tests, ML-driven regression testing based on application performance metrics, visual anomaly detection, secure testing, data-driven functional testing, cross-browser testing, test output, integration with popular tools, and much more.
- Testim leverages machine learning to speed up the authoring, execution and, most importantly, the maintenance of automated tests.
- Test.ai offers an AI-enabled automated testing platform for mobile apps. Test.ai has trained its bots on tens of thousands of apps so that they understand what an app looks like and how it interfaces with external services. It leverages this learning to produce a list of test scenarios and uses bots to execute them automatically.
- TestCraft is another AI-powered test automation platform, built on top of Selenium. Testers can visually create automated Selenium-based tests using a drag-and-drop interface, with no coding skills required.
- ReTest generates and maintains test cases from a project-specific semantic representation that lists the aspects which need to be tested.
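The core idea behind the visual anomaly detection mentioned above can be sketched very simply: compare a baseline screenshot with a new capture and flag regions whose difference exceeds a threshold. Tools like Applitools use far more robust, ML-based perceptual comparison; the toy version below treats images as plain grayscale grids (lists of lists of 0-255 values), and all thresholds are invented.

```python
def diff_regions(baseline, current, threshold=30):
    """Return (row, col) coordinates where two grayscale images differ noticeably."""
    anomalies = []
    for r, (brow, crow) in enumerate(zip(baseline, current)):
        for c, (b, cur) in enumerate(zip(brow, crow)):
            if abs(b - cur) > threshold:
                anomalies.append((r, c))
    return anomalies


def is_visual_anomaly(baseline, current, threshold=30, max_changed_ratio=0.01):
    """Flag the page as visually anomalous if too large a share of pixels changed."""
    total = len(baseline) * len(baseline[0])
    changed = len(diff_regions(baseline, current, threshold))
    return changed / total > max_changed_ratio


baseline = [[200] * 4 for _ in range(4)]   # a uniform light-grey "screenshot"
current = [row[:] for row in baseline]
current[1][2] = 20                          # one element unexpectedly went dark

print(diff_regions(baseline, current))      # [(1, 2)]
print(is_visual_anomaly(baseline, current)) # True
```

A real tool would additionally cluster changed pixels into UI regions and suppress expected dynamic content (timestamps, ads) before raising an alert.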
Besides test case writing, intelligently determining test coverage is another area where AI is being applied. Sealights analyzes both the code and the tests that run against it, giving insight into which parts of the code the tests cover and which they do not. This is done not only for unit tests but also for functional, manual and performance tests. Its quality dashboard helps teams understand the test coverage for each build and whether it is improving, decreasing or has quality holes.
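As a minimal sketch of what "coverage" means mechanically, the snippet below uses Python's standard `sys.settrace` hook to record which lines of a function a given test actually executes, exposing an untested branch. Products like Sealights do far more (aggregating across test types and builds); this only illustrates the raw measurement, and the function under test is invented.

```python
import sys


def covered_lines(func_under_test, test_call):
    """Return line offsets (from the function's first line) executed by test_call."""
    executed = set()
    base = func_under_test.__code__.co_firstlineno

    def tracer(frame, event, arg):
        # Record only lines belonging to the function we are measuring.
        if event == "line" and frame.f_code is func_under_test.__code__:
            executed.add(frame.f_lineno - base)
        return tracer

    sys.settrace(tracer)
    try:
        test_call()
    finally:
        sys.settrace(None)
    return executed


def classify(n):               # offset 0: function under test
    if n < 0:                  # offset 1
        return "negative"      # offset 2 -- never reached by the test below
    return "non-negative"      # offset 3


hits = covered_lines(classify, lambda: classify(5))
print(2 not in hits)  # True: the "negative" branch is a coverage hole
```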
Different approaches to defect prioritization are possible; e.g. Validata prioritizes defects based on the risk they pose and proposes an order in which they should be addressed. Defect prioritization, like requirement prioritization, is an active research area and should see more start-ups in future.
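Risk-based prioritization can be illustrated with a simple weighted scoring function: each defect gets a risk score from its severity, blast radius and age, and defects are addressed in descending score order. This is only loosely inspired by the approach described above; the weights, field names and sample defects are all invented.

```python
def risk_score(defect):
    # Severity dominates; affected users and age act as tie-breakers.
    severity_weight = {"critical": 10, "major": 5, "minor": 1}
    return (severity_weight[defect["severity"]] * 100
            + defect["affected_users"]
            + defect["days_open"])


defects = [
    {"id": "D-1", "severity": "minor",    "affected_users": 5,   "days_open": 30},
    {"id": "D-2", "severity": "critical", "affected_users": 120, "days_open": 2},
    {"id": "D-3", "severity": "major",    "affected_users": 40,  "days_open": 10},
]

ordered = sorted(defects, key=risk_score, reverse=True)
print([d["id"] for d in ordered])  # ['D-2', 'D-3', 'D-1']
```

A production system would learn such weights from historical triage decisions rather than hard-coding them, which is where the ML comes in.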
In August 2012, the Knight Capital Group lost over $400 million in just 45 minutes after a single failed deployment and was pushed to the brink of bankruptcy. While such incidents are catastrophic, waiting disproportionately long to prepare a perfect deployment is not in the best interest of the business either. AI-enabled deployments can help organizations ensure reliable and fast deployments.
The AI-enabled deployment approach adopted by Sweagle involves learning from the status of each prior deployment and letting AI correlate what bad deployments have in common. The AI also looks at user feedback data from incident management systems to enrich its understanding. This learning can then be used to automatically correct wrong configuration data in future deployments.
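At its simplest, "correlating what bad deployments have in common" can be a frequency analysis: count how often each configuration setting co-occurs with a failed deployment and flag the suspicious ones. Sweagle's actual models are not public, so the sketch below is purely an illustration of the idea, with invented settings and thresholds.

```python
from collections import Counter


def suspicious_settings(history, min_fail_ratio=0.8):
    """history: list of (config_settings, succeeded) pairs from past deployments."""
    fail_counts, total_counts = Counter(), Counter()
    for settings, succeeded in history:
        for s in settings:
            total_counts[s] += 1
            if not succeeded:
                fail_counts[s] += 1
    # Flag settings present almost exclusively in failed deployments.
    return {s for s in total_counts
            if fail_counts[s] / total_counts[s] >= min_fail_ratio}


history = [
    ({"pool_size=10", "timeout=30"}, True),
    ({"pool_size=10", "timeout=5"},  False),
    ({"pool_size=50", "timeout=5"},  False),
    ({"pool_size=50", "timeout=30"}, True),
]
print(suspicious_settings(history))  # {'timeout=5'}
```

A flagged setting could then be blocked or auto-corrected before the next deployment proceeds.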
A San Francisco based start-up, Harness.io, leverages AI for continuous verification of deployments by analyzing business, performance and quality metrics and deciding on automated rollbacks.
Modern cloud systems have a vast number of components that continuously undergo updates, and identifying bad rollouts among them is challenging. Cloud vendors therefore use analytics services for safe deployment in large-scale infrastructure. Azure's Gandalf enables rapid impact assessment of software rollouts to catch bad rollouts before they cause widespread outages. It monitors and analyzes various fault signals, correlates each signal to determine which rollout may have caused it, and decides whether a rollout is safe to proceed or should be stopped.
This is the phase where an application is available to the business, needs to meet its expectations and evolves with it. It is also the phase with the most cost-cutting opportunities. The AI-enabled tools that help an organization perform application management activities are known as AI-Ops tools. These tools cover varied aspects such as application performance monitoring, log analytics, business activity monitoring, automation platforms, incident prediction and resolution, root cause analysis, virtual chatbots, etc.
AI-Ops tools typically take a consolidated approach to monitoring, covering the application, infrastructure and network levels. They collect application, infrastructure and network log data, correlate them, and combine all three into a single view of a problem. There are many mature AI-Ops vendors in the market, e.g. Dynatrace, Splunk Enterprise, AppDynamics, Instana, Moogsoft, Micro Focus Operations Bridge, Digitate and so on.
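The "single view" idea can be sketched as a merge of events from the three sources into one timeline, grouping events that occur close together into a candidate incident. Real AI-Ops platforms use much richer correlation (topology, causality, learned patterns); the event formats and the 5-second grouping window below are assumptions for illustration only.

```python
def correlate(streams, window=5):
    """streams: {source_name: [(timestamp, message), ...]} -> list of incident groups."""
    # Merge all sources into one time-ordered list of (time, source, message).
    merged = sorted((ts, source, msg)
                    for source, events in streams.items()
                    for ts, msg in events)
    incidents, current = [], []
    for event in merged:
        # Start a new incident group when the gap to the previous event is large.
        if current and event[0] - current[-1][0] > window:
            incidents.append(current)
            current = []
        current.append(event)
    if current:
        incidents.append(current)
    return incidents


streams = {
    "network": [(100, "packet loss on eth0")],
    "infra":   [(102, "node cpu spike")],
    "app":     [(103, "checkout latency high"), (300, "cache warmup")],
}
incidents = correlate(streams)
print(len(incidents))     # 2 incident groups
print(len(incidents[0]))  # 3 events from all three layers form one problem view
```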
Sopra Steria, through its IPs (Alive Intelligent Platform, Lagoon Datalake, Digital Enablement Platform, etc.) and its partners, has a very strong presence in this area. Axway, a sister company of Sopra Steria with very strong ties, has the market-leading offering in this space, called Axway Decision Insights.
AI-led software engineering is a very dynamic area with a lot of start-up activity, research and intense effort from established players. As we have seen, AI impacts not just the Operations phase (AI-Ops) but every other step in the application life cycle as well. Although it is still a new domain, we believe it will inevitably lead to big transformations in the way we practice software engineering.
As this domain is evolving at a very rapid pace, any recommendations run the risk of becoming obsolete very quickly and will need to be revised constantly. Based on the study we have conducted, we have identified use cases that we should prepare to deploy at scale or start experimenting with.
In the next edition we will look at some of the products, IPs and accelerators we have built within Sopra Steria or as part of the wider ecosystem.