Gregory Solovey
06 August 2019
I would like to reflect on three major test trends presented in the 2019 edition of the World Quality Report (WQR). According to responses from 1,700 executives across 10 different sectors and 32 countries, automation of test generation and test execution continued to be ranked highest, and artificial intelligence and machine learning are seen as transformers for software testing. But the devil is in the details.
1. TEST AUTOMATION - A CRITICAL BOTTLENECK IN THE ADOPTION OF AGILE, DEVOPS AND DIGITAL TRANSFORMATION
One of the main findings relates to how low levels of basic test automation have become a critical bottleneck in the growth and adoption of Agile and DevOps, and in the spread of digital transformation, for a significant number of the businesses surveyed. The report also points to regulatory and infrastructural challenges, such as handling test data, provisioning environments, and finding the right talent required for testing.
According to the survey, 99% of respondents said they were using Agile and DevOps in at least some part of their business. Despite this growth in adoption, organisations are still not able to tap the full benefits promised by these approaches, mainly due to low levels of automation and challenges with test data and test environments. The survey also clearly reveals that levels of basic test automation remain quite low, between 14% and 18%.
Endava provides a strong response to these challenges. To unlock Agile and DevOps, we utilise a set of test accelerators that enable the rapid creation of test automation environments and help jump-start automated test processes. And rapid means just a few days to automate end-to-end tests and include them in the CI/CD environment. These testing accelerators include: an API testing accelerator, a Java testing accelerator, a .NET testing accelerator, a multi-browser testing accelerator, a mobile testing accelerator, a security testing accelerator and a performance testing accelerator. This is a unique, Endava-specific approach, something that does not exist in the wider test market today.
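For illustration, here is a minimal sketch of the kind of end-to-end API check that such an accelerator would wire into a CI/CD pipeline. The service URL, endpoints and response fields below are hypothetical assumptions for the sketch, not part of any actual accelerator.

```python
# A minimal end-to-end API test of the kind a CI/CD pipeline would run on
# every build. The service, endpoints and response shape are hypothetical.
import requests

BASE_URL = "https://api.example.com"  # hypothetical service under test


def test_create_and_fetch_order():
    # End-to-end: create a resource through the API, then read it back.
    created = requests.post(f"{BASE_URL}/orders", json={"item": "book", "qty": 1})
    assert created.status_code == 201
    order_id = created.json()["id"]  # assumed response field

    fetched = requests.get(f"{BASE_URL}/orders/{order_id}")
    assert fetched.status_code == 200
    assert fetched.json()["item"] == "book"
```

Run under a test runner such as pytest, a suite of such checks gives the fast pass/fail signal a CI/CD gate needs.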
2. TEST AND AI/ML ENGAGEMENT: ARE WE THERE YET?
AI and ML are force multipliers for QA and testing. The QA and testing function will be transformed in the future to support emerging trends such as the Internet of Things (IoT), blockchain and the convergence of analytics, artificial intelligence (AI) and machine learning (ML). When it comes to AI, one must keep two things in mind: AI in testing can refer to the application of AI to quality assurance and testing, as well as to the testing of AI products.
2.1. Testing AI applications.
When asked about the challenges faced in implementing AI projects, 55% of respondents reported that they had “difficulties with identifying where business might actually apply AI.” Many of them first build up their knowledge and expertise in AI/ML technology and only then plan to build a business case for it.
From my point of view, there is nothing specific about modelling AI applications compared to non-AI ones. AI applications can be represented as a set of UML models, and, if so, the same test design methods can be successfully applied.
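As an illustration, here is a minimal sketch of applying one classic test design method, equivalence partitioning, to an AI component treated as a black box. The loan-decision function below is a hypothetical stand-in for real model inference; the classes and expected outcomes would come from the behavioural model, exactly as for a non-AI application.

```python
# Applying equivalence partitioning to an AI component treated as a black box.
# predict_loan_decision is a hypothetical stand-in for a trained model's
# inference call; in practice it would wrap the real model.

def predict_loan_decision(age: int, income: float) -> str:
    # Hypothetical stand-in behaviour; replace with actual model inference.
    if age < 18:
        return "reject"
    return "approve" if income >= 30_000 else "review"


# One representative input per equivalence class derived from the model.
CASES = [
    (17, 50_000.0, "reject"),   # class: underage applicant
    (30, 50_000.0, "approve"),  # class: adult, sufficient income
    (30, 10_000.0, "review"),   # class: adult, insufficient income
]


def test_equivalence_classes():
    for age, income, expected in CASES:
        assert predict_loan_decision(age, income) == expected


if __name__ == "__main__":
    test_equivalence_classes()
    print("All equivalence-class checks passed.")
```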
2.2. Using AI/ML approaches in testing.
There is enthusiasm for and excitement about AI technologies and solutions, but their actual application in testing is still emerging. The purpose of applying AI to QA and testing is to create a testing architecture that adapts itself automatically to application changes: deciding which tests to run and how many, and assisting in test case creation.
The challenges in implementing these approaches arise from the quality and quantity of the data used: test case coverage, defect data, production data, code coverage, operational logs. Most organisations are still stuck at the level of data analytics rather than using AI technologies such as machine learning, neural networks, fuzzy logic, robotics, or deep learning.
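To make the “which tests to run” idea concrete, here is a minimal sketch that ranks regression tests by a failure-likelihood score built from exactly this kind of historical data. The data shape and the weights are illustrative assumptions, standing in for what a trained ML ranker would learn.

```python
# Data-driven test selection: rank regression tests by a failure-likelihood
# score derived from historical results and coverage of the current change.
# The heuristic weights are illustrative stand-ins for a learned model.
from dataclasses import dataclass


@dataclass
class TestRecord:
    name: str
    recent_failures: int        # failures in the last N runs
    runs: int                   # total recent runs
    touches_changed_code: bool  # coverage overlaps the current diff?


def score(t: TestRecord) -> float:
    # Historical failure rate, boosted when the test covers changed code.
    failure_rate = t.recent_failures / max(t.runs, 1)
    return failure_rate + (0.5 if t.touches_changed_code else 0.0)


def select_tests(history: list[TestRecord], budget: int) -> list[str]:
    ranked = sorted(history, key=score, reverse=True)
    return [t.name for t in ranked[:budget]]


history = [
    TestRecord("test_login", 3, 10, True),
    TestRecord("test_report_export", 0, 10, False),
    TestRecord("test_payment_flow", 1, 10, True),
]
print(select_tests(history, budget=2))  # ['test_login', 'test_payment_flow']
```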
Recent publications propose several approaches to applying AI/ML to testing. For example:
■ Use image recognition to bring UI testing to the next level. Tools compare the UI elements’ appearance with recognised page images (regardless of their location and presentation).
■ Identify patterns and prevent future defects by mining historical log files.
■ Learn an application and automatically generate tests. The tool “explores” an application and, based on the discovered functionalities, automatically generates test cases.
■ Maintain the robustness of test code based on historical data. Each UI element is located via several identifiers, so even if some of them change, the element will still be recognised by the test tools (a minimal sketch of this idea follows this list).
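The sketch below shows the multi-identifier (“self-healing”) lookup from the last bullet, using Selenium WebDriver. The locator lists are illustrative; real tools learn and reorder them from historical runs rather than hard-coding them.

```python
# Multi-locator ("self-healing") element lookup with Selenium WebDriver:
# try several known locators for the same logical element, succeeding if
# any one of them still matches after a UI change.
from selenium.webdriver.common.by import By
from selenium.common.exceptions import NoSuchElementException

# Several known locators per logical element, ordered by past reliability.
SUBMIT_BUTTON_LOCATORS = [
    (By.ID, "submit-btn"),
    (By.CSS_SELECTOR, "button[type='submit']"),
    (By.XPATH, "//button[normalize-space()='Submit']"),
]


def find_resilient(driver, locators):
    """Try each locator in turn; return the element from the first match."""
    for by, value in locators:
        try:
            return driver.find_element(by, value)
        except NoSuchElementException:
            continue
    raise NoSuchElementException(f"No locator matched: {locators}")


# Usage (assumes a live `driver` on the page under test):
#   button = find_resilient(driver, SUBMIT_BUTTON_LOCATORS)
#   button.click()
```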
There are quite a few companies that provide AI/ML applications for test/QA purposes: IBM IGNITE, Appvance IQ, Applitools, Sauce Labs, Testim, Test.AI, Mabl, ReTest, ReportPortal.io. However, the consensus around their efficiency is not very encouraging: they are not quite there yet. For example, some use merely “AI-like” technology rather than true machine learning, others’ tools are still in beta, or will only become commercially available in a few years. Only time will tell if AI/ML will eventually fulfil the promise of reliable, easy-to-maintain test automation for all.
3. MYTHS AND REALITY OF MODEL-BASED TESTING
Model-based testing is an automation technique that 61% of respondents foresee using in the coming years. Today’s ever-changing applications, their increased complexity and the growing number of new releases have led to a rise in the importance of model-based testing for generating test cases and automated scripts from requirements.
Model-based testing is not a new concept — it has existed since the first software bugs. However, the reality of test design automation based on models is not as bright as the World Quality Report respondents’ aspirations. The high complexity of today’s applications and the low quality of existing systems’ structural and behavioural models do not permit automatically generating tests for production-scale systems, and I think this is unlikely to happen in the near future.
The recent white paper “Model-based test design and automation” presents an Endava approach that can be applied to all our engagements. It proposes to formalise the system description as a hierarchy of UML models, use known test design methods, and adopt a multi-level test hierarchy to separate the presentation-, logic- and data-related tests. Thus, even if today’s model-based testing does not provide full test design automation, it guarantees the completeness and elegance of a manually built test.
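As a small illustration of the idea, the sketch below describes behaviour as a state machine (one kind of UML model) and derives one test per transition. The login-flow model is my own illustrative example, not taken from the white paper.

```python
# Deriving tests from a behavioural model: one test case per modelled
# transition gives all-transitions coverage, guaranteeing completeness
# with respect to the model. The login-flow model is illustrative.

TRANSITIONS = {
    # (current state, event) -> expected next state
    ("logged_out", "valid_login"):   "logged_in",
    ("logged_out", "invalid_login"): "logged_out",
    ("logged_in",  "logout"):        "logged_out",
    ("logged_in",  "timeout"):       "logged_out",
}


def generate_transition_tests(transitions):
    """Yield one abstract test case per modelled transition."""
    for (state, event), expected in transitions.items():
        yield {
            "setup": f"bring system to state '{state}'",
            "action": f"trigger event '{event}'",
            "verify": f"system is in state '{expected}'",
        }


for case in generate_transition_tests(TRANSITIONS):
    print(case)
```

Because every transition in the model yields a test, nothing the model describes is left uncovered — which is exactly the completeness guarantee a manually built, model-driven suite aims for.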
Gregory Solovey
Test Architect
Gregory has spent a good part of the last 30 years adapting and automating test design methods that were first developed 40 years ago. He holds an unusual PhD in test design methods, speaks at conferences and writes on test-related topics. When he is not architecting test automation and continuous integration solutions, he’s busy travelling, playing tennis and volleyball, and distorting reality in Photoshop.