Machine Learning Is Changing Software Testing
One of the first ways we’ve seen machine learning (ML) being used in testing is to make current automated tests more resilient and less brittle. One of the Achilles’ heels of software testing, particularly when you are testing entire applications and user interfaces rather than discrete modules (called unit testing), is maintenance. Software applications are constantly changing as users request additional features or business processes are updated, and these changes often break automated tests.
For example, if a login button changes its position, shape, or appearance, it may break a previously recorded test. Even simple changes like the speed of page loading can cause an automated test to fail. Ironically, humans are much more intuitive and better at this kind of testing than computers, since we can look at an application and immediately see that a button is in the wrong place or that something is not displayed correctly. This is, of course, because most applications are built for humans to use. The parts of software systems built for other computers to use (called APIs) are much easier to test using automation!
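To make the brittleness concrete, here is a minimal, hypothetical sketch (the element data and check are invented for illustration) of how a recorded UI test that matches an element exactly breaks the moment anything about that element changes, even when the application still works:

```python
# Hypothetical recorded UI check: the test captured the button exactly as it
# appeared at recording time, including its on-screen coordinates.
recorded_login_button = {"x": 120, "y": 480, "label": "Log in"}

def brittle_check(element):
    # Fails on ANY difference from the recording, even a cosmetic layout shift.
    return element == recorded_login_button

# After a redesign, the button moved 10 pixels down but still works fine:
current_login_button = {"x": 120, "y": 490, "label": "Log in"}

print(brittle_check(current_login_button))  # False: the test breaks anyway
```

A human tester would glance at the page and pass it immediately; the exact-match automation cannot.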
To get around these limitations, newer low-code software testing tools are using ML to scan the applications under test in multiple ways and over multiple iterations, learning what range of results is “correct” and what range of outcomes is “incorrect.” That means when a change to a system deviates slightly from what was initially recorded, the tool can automatically determine whether that deviation was expected (and the test passed) or unexpected (and the test failed). Of course, we are still in the early stages of these tools, and there has been more hype than substance. Still, as we enter 2023, we’re seeing actual use cases for ML in software testing, particularly for complex business applications and fast-changing cloud-native applications.
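The core idea of learning a range of “correct” outcomes can be sketched very simply. The following is an illustrative toy, not how any particular tool works: it learns a tolerance band for page-load time from several recorded runs, then passes small deviations and fails large ones (the sample times and the three-standard-deviation threshold are assumptions):

```python
import statistics

# Hypothetical baseline: page-load times (seconds) observed across several
# "training" runs of the same test.
baseline_load_times = [1.1, 0.9, 1.3, 1.0, 1.2]

mean = statistics.mean(baseline_load_times)
stdev = statistics.stdev(baseline_load_times)

def deviation_expected(observed_seconds, k=3.0):
    """Pass if the observed value falls within k standard deviations of
    the learned mean; fail only on a genuinely anomalous result."""
    return abs(observed_seconds - mean) <= k * stdev

print(deviation_expected(1.25))  # slightly slower run: still passes
print(deviation_expected(4.0))   # major regression: fails
```

A fixed-threshold test would need hand-tuning for every page; here the acceptable range comes from observed behavior, which is the property that makes the ML-based approach less brittle.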
Another broad application of ML techniques will be on the analytics and reporting side of quality engineering. For example, a longstanding challenge in software testing is knowing where to focus testing resources and effort. The emerging discipline of “risk-based testing” aims to focus software testing activities on the areas of the system that carry the most risk. If you can use testing to reduce the overall aggregate risk exposure, you have a quantitative way to allocate resources. One way to measure risk is to look at the probability and impact of specific events, then use prior data to estimate how significant these values are for each part of the system, and target your testing accordingly. This is a near-perfect use case for ML. The models can analyze previous development, testing, and release activities to learn where defects have been found, code has been changed, and problems have historically occurred.