
Test Automation | Alex Gatu | 15 May 2020

Project summary

From a software delivery life cycle perspective, most projects nowadays work with Agile methods (more specifically Scrum, Kanban, etc.). During Agile sprints, development and testing work together to deliver a functional increment of the application, with less focus on the non-functional area. The most important non-functional quality attributes to validate are usually performance and security.

For these reasons we have implemented an initiative to bring performance and security testing into the sprint ('shifting left', i.e. moving it earlier in a timeline that stretches from left to right). The purpose is to broaden the testing types covered by the team. Performance and security tests in a sprint should run at every codebase change to catch potential issues sooner rather than later.

In this initiative, performance and security tests run inside the CI/CD pipeline as the last step after functional testing, adding another set of validations from a non-functional point of view. The alternative is to run them only before the release to production.

Usually, performance and security testing are left to the end of a project milestone, increasing the risk of encountering non-functional problems too late. By shifting performance and security testing left, the project can confidently identify such issues early, which increases the probability of delivering software of high quality with regard to security and performance.

Background and significance

We see an increase in the complexity of products, not only because of functional complexity, but also because of an increased focus on quality attributes like security and performance. Nowadays products are built incrementally, and it is impossible to add the quality attributes at the very end; we need to validate continuously throughout the delivery process to ensure these quality attributes are met. The people delivering this model are the non-functional testers. They work alongside the rest of the sprint team, as any performance and security defects found must be fixed within the sprint. Additionally, during the estimation phase, the performance and security testers contribute appropriate estimates to the planning session. This implies that the delivery effort for each feature also covers the non-functional effort involved and any defects that need to be fixed.


Project aims/goals

The main goal is to catch potential performance and security problems as early as possible in the sprint, at every codebase change. If these defects are found only at the end of the release cycle, the deployment of the application release candidate into production might be in jeopardy. Of course, the challenge is to keep track of the performance and security test results. This can be mitigated by automating the tests and improving communication flows with the other team members.

A secondary goal is to increase the performance and security awareness of the rest of the engineering team, as they are usually focused on delivering from a functional point of view. Over time, the team builds up its knowledge of security and performance and implements features proactively with these quality attributes in mind.

Tool set and methodology

We used a specific set of tools for the initiative. Performance tests are written in Gatling, which also measures the response times. The results are stored in a MySQL database using a Python performance module written to extract only the core performance data from the results. An Elasticsearch instance indexes the data by test name via Logstash, and Kibana plots the results, showing potential performance degradation from run to run compared to a benchmark. Everything is integrated into the CI/CD pipeline through a Jenkins job, and all required environments are created and destroyed on AWS for every test. The comparison provides the pass/fail result to Jenkins without any manual intervention, giving a quick overview of the performance results.
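
As a minimal sketch of what such a benchmark comparison step might look like (the request names, benchmark values, and tolerance below are illustrative assumptions, not taken from the actual project), a small Python module can compute a percentile per request and reduce the comparison to a single pass/fail flag for Jenkins:

```python
# Hypothetical benchmark: maximum acceptable 95th-percentile response time
# (ms) per request name; real values would come from the MySQL benchmark data.
BENCHMARK_P95_MS = {"login": 400, "search": 800}

def p95(samples):
    """95th percentile of a list of response times (ms)."""
    ordered = sorted(samples)
    index = max(0, int(round(0.95 * len(ordered))) - 1)
    return ordered[index]

def assess(results, benchmark=BENCHMARK_P95_MS, tolerance=1.10):
    """Compare the measured p95 per request against the benchmark.

    results: mapping of request name -> list of response times in ms.
    Returns (passed, report), where passed is the overall pass/fail flag
    handed back to Jenkins and report records each request's verdict.
    """
    report = {}
    passed = True
    for name, samples in results.items():
        measured = p95(samples)
        limit = benchmark.get(name)
        ok = limit is None or measured <= limit * tolerance
        report[name] = {"p95_ms": measured, "limit_ms": limit, "ok": ok}
        passed = passed and ok
    return passed, report
```

In the real pipeline the benchmark values would be read from the database, and the script's exit code (e.g. `sys.exit(0 if passed else 1)`) would drive the Jenkins job result.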


For security testing, OWASP ZAP is integrated in two ways: through a Jenkins plugin inserted into the CI/CD pipeline, and as a transparent-mode instance installed in the environment to scan API requests and identify potential security problems. A transparent-mode instance is a proxy placed between the user's browser and the application on the server, scanning the traffic that passes through it.
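
To illustrate how the alerts raised by a ZAP instance could be consumed programmatically, here is a sketch using ZAP's JSON API; the address, API key, and risk thresholds below are assumptions for illustration, not the project's actual configuration:

```python
import json
import urllib.parse
import urllib.request

def fetch_alerts(target_url, zap_api="http://localhost:8080", api_key="changeme"):
    """Ask the ZAP instance for all alerts raised against target_url."""
    query = urllib.parse.urlencode({"baseurl": target_url, "apikey": api_key})
    url = f"{zap_api}/JSON/core/view/alerts/?{query}"
    with urllib.request.urlopen(url, timeout=30) as resp:
        return json.load(resp).get("alerts", [])

def security_gate(alerts, blocking_risks=("High", "Medium")):
    """Fail the build when any alert at a blocking risk level remains."""
    blocking = [a for a in alerts if a.get("risk") in blocking_risks]
    return len(blocking) == 0, blocking
```

A gate like this is what lets the proxy's passive scanning feed back into the same pass/fail reporting as the rest of the pipeline.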

Challenges/obstacles faced

One of the main challenges was integrating the non-functional test results into the CI/CD reporting, because there are no simple pass/fail mechanisms for performance and security. For performance, response times must be analysed, in addition to resource consumption, error rates, etc., and for security, false positives must be investigated. A solution is to develop an 'assessment framework' that manages the data from the test results and derives a quick pass/fail result for the stakeholders.
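
A minimal sketch of such an assessment framework might look like this; the threshold values and the false-positive allowlist mechanism are illustrative assumptions. It normalises performance metrics and triaged security alerts into per-area verdicts and one overall pass/fail:

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    area: str       # e.g. "performance" or "security"
    passed: bool
    detail: str

def assess_performance(p95_ms, limit_ms):
    """Performance passes while the measured p95 stays within the limit."""
    ok = p95_ms <= limit_ms
    return Verdict("performance", ok, f"p95 {p95_ms}ms vs limit {limit_ms}ms")

def assess_security(alerts, known_false_positives, blocking=("High",)):
    """Security passes when no blocking alert survives false-positive triage."""
    real = [
        a for a in alerts
        if a["risk"] in blocking and a["name"] not in known_false_positives
    ]
    return Verdict("security", not real, f"{len(real)} blocking alert(s)")

def overall(verdicts):
    """The single pass/fail result reported back to the pipeline."""
    return all(v.passed for v in verdicts)
```

The key design choice is that investigation results (e.g. alerts triaged as false positives) live in data rather than in the pipeline script, so each run produces a stakeholder-readable verdict without manual intervention.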

Another challenge was that the team's knowledge of non-functional testing was not homogeneous. We addressed this through several training sessions explaining performance and security mechanisms, which were considered a success.

From a duration point of view, both performance and security tests can take quite some time to run; for instance, one might monitor performance over a weekend to identify a memory leak that steadily increases memory consumption. To cope with the rising cost of these kinds of tests on a cloud-based platform like AWS, the machines can be scheduled for destruction once the tests have finished and the results have been saved to non-volatile storage.


Outcomes

We discovered potential problems very early. Based on the defect/cost graph, the savings are substantial, amounting to several days per release. In addition, thanks to the immediate feedback, the overall security and performance knowledge inside the sprint team increased, leading to more awareness during implementation. The team no longer sees non-functional defects as something they do not know how to interpret; instead, they know how to start fixing them.


Regarding the long-term objectives, the 'performance and security tests shifting left' initiative has a notable impact on the quality of the delivered software. Previously, all issues were found very late, delaying the release of the product; now performance and security issues are identified during the sprint and fixed sooner rather than later. We evaluated success based on the rate of adoption by engineering teams across projects.

What we could have done differently is the way we integrated the tests into CI/CD, as we did not have a clear plan from the start and needed to adapt as the initiative progressed. Despite the challenges mentioned, the initiative is an ongoing success that gives all stakeholders a clearer view of the performance and security quality of the application.

In the future, the plan is to improve the model by creating a basic modular framework that could be easily ported to other projects. In doing so, the impact on the team and the estimates would decrease while maintaining the advantages of this approach of shifting non-functional tests left.

Alex Gatu

Senior Test Consultant

Alex is a passionate software testing engineer with a background in programming, who has dedicated the past decade to security and performance testing. He is involved in enhancing the technical excellence in Endava and can often be found creating custom performance and security automation frameworks. Outside work, Alex is enthusiastic about DJing (as long as it is a variety of electronic music) and spending time with family and friends.

