Alex Gatu

From a software delivery life cycle perspective, most projects nowadays work with agile methodologies, most commonly Scrum or Kanban. During the sprints, development and testing work together to deliver a functional increment of the application, with less focus on the non-functional areas. The most important non-functional quality attributes to validate are usually performance and security.

 

That’s why we have implemented an initiative to bring performance and security testing into the sprint: ‘shifting left’, meaning moving it earlier in a delivery timeline that stretches from left to right. The purpose is to broaden the types of testing covered by the team. In-sprint performance and security tests should be run at every codebase change to catch potential issues sooner rather than later.

 

In-sprint performance and security testing should run inside the CI/CD pipeline as a final step after the functional testing, adding another layer of validation from a non-functional point of view. The alternative is to run it only before the release to production.

 

Usually, performance and security testing is left until the end of the project milestone, increasing the risk of discovering non-functional problems too late. By shifting performance and security testing left, the project can confidently identify performance and security issues early, which increases the probability of delivering software of high quality in terms of security and performance.

 

Background and significance

 

We see an increase in the complexity of products, not only because of functional complexity but also because of an increased focus on quality attributes like security and performance. Nowadays, products are built incrementally, and it’s impossible to bolt these quality attributes on at the very end. We need to validate continuously throughout the delivery process to ensure they are met.

 

The people involved in delivering this model are the non-functional testers. They work alongside the rest of the sprint team, as any performance and security defects found must be fixed within the sprint. Additionally, during planning, the performance and security testers provide their own estimates, so the delivery effort for each feature also accounts for the non-functional work involved and any defects that may need to be fixed.

 

Non-functional testing in the development process

 

Project aims

 

The main goal is to catch potential performance and security problems at every codebase change, sooner rather than later in the sprint. If these defects are only found at the end of the release cycle, the deployment of the application release candidate into production might be in jeopardy. The challenge, of course, is keeping track of the performance and security test results; this can be mitigated by automating the tests and improving the communication flows with the other team members.

 

A secondary goal is to increase the performance and security awareness of the rest of the engineering team, as they are usually focused on delivering from a functional point of view. In time, the team would increase its knowledge of security and performance and implement features proactively in alignment with these quality attributes.

 

Toolset and methodology

 

We have used a specific set of tools for this initiative:

 

  • Gatling for writing tests and measuring response times in performance testing
  • A MySQL database for storing the results, fed by a Python performance module written to extract only the core performance data from the results (sketched after this list)
  • An Elasticsearch database to index the data based on test names using Logstash
  • Kibana for plotting the results, showing potential performance degradation from run to run compared to a benchmark
  • A Jenkins job to integrate everything in the CI/CD pipeline
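
To give an idea of what that extraction step can look like, here is a minimal Python sketch. It assumes Gatling’s default JSON statistics output (js/stats.json in the results folder) and a hypothetical perf_results table; the connection details and names are placeholders.

    # extract_perf_results.py - minimal sketch (hypothetical names); parses the
    # global statistics from a Gatling results folder and stores them in MySQL.
    import json
    import sys
    from datetime import datetime

    import mysql.connector  # assumes the mysql-connector-python package

    def extract_core_stats(results_dir):
        """Read the global statistics Gatling writes under js/stats.json."""
        with open(f"{results_dir}/js/stats.json") as f:
            stats = json.load(f)["stats"]
        return {
            "mean_ms": stats["meanResponseTime"]["total"],
            "p95_ms": stats["percentiles3"]["total"],  # 95th percentile by default
            "errors": stats["numberOfRequests"]["ko"],
            "requests": stats["numberOfRequests"]["total"],
        }

    def store(run_name, core):
        """Insert one row per run; credentials would come from the Jenkins job."""
        conn = mysql.connector.connect(
            host="perf-db", user="perf", password="***", database="performance")
        cur = conn.cursor()
        cur.execute(
            "INSERT INTO perf_results (run_name, run_time, mean_ms, p95_ms, errors, requests) "
            "VALUES (%s, %s, %s, %s, %s, %s)",
            (run_name, datetime.utcnow(), core["mean_ms"], core["p95_ms"],
             core["errors"], core["requests"]))
        conn.commit()
        conn.close()

    if __name__ == "__main__":
        results_dir, run_name = sys.argv[1], sys.argv[2]
        store(run_name, extract_core_stats(results_dir))

From there, Logstash can pick up the stored rows and index them into Elasticsearch for the Kibana dashboards mentioned above.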

 

All required environments are created and destroyed on AWS for every test. The comparison provides the pass/fail result to Jenkins without any manual intervention, thus giving a quick overview of the performance results.

 

Example of a testing setup

 

For security testing, OWASP ZAP is integrated in two ways: through a Jenkins plug-in inserted in the CI/CD pipeline, and as a transparent mode instance installed in the environment to scan API requests and identify potential security problems. A transparent mode instance is a proxy placed between the user’s browser and the application on the server to scan the traffic that passes through.
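
As an illustration of the pipeline side, the following is a minimal Python sketch that drives a scan through the ZAP API (using the zapv2 client) and fails the build on high-risk alerts; the target URL, API key and proxy address are placeholders, and it assumes a ZAP daemon is already running in the environment.

    # zap_scan.py - minimal sketch; drives an already running OWASP ZAP instance,
    # runs spider + active scan against the target and fails on high-risk alerts.
    import sys
    import time

    from zapv2 import ZAPv2  # assumes the ZAP Python API client package

    TARGET = "https://app.example.internal"            # placeholder target
    zap = ZAPv2(apikey="changeme",                     # placeholder API key
                proxies={"http": "http://zap:8080", "https": "http://zap:8080"})

    # Spider the target so ZAP knows the URLs, then start an active scan.
    scan_id = zap.spider.scan(TARGET)
    while int(zap.spider.status(scan_id)) < 100:
        time.sleep(5)

    scan_id = zap.ascan.scan(TARGET)
    while int(zap.ascan.status(scan_id)) < 100:
        time.sleep(10)

    # Collect the alerts and fail the pipeline step if anything high-risk is found.
    high_risk = [a for a in zap.core.alerts(baseurl=TARGET) if a["risk"] == "High"]
    for alert in high_risk:
        print(f"HIGH: {alert['alert']} at {alert['url']}")
    sys.exit(1 if high_risk else 0)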

 

Challenges

 

One of the main challenges was integrating the non-functional test results into the CI/CD reporting, because there is no simple pass/fail mechanism for performance and security. For performance, response times must be analysed alongside resource consumption, error rates and so on; for security, false positives must be investigated. One solution is to develop an ‘assessment framework’ that digests the data from the test results and sends a quick pass/fail verdict to the stakeholders.
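
A minimal Python sketch of what such an assessment framework could look like, with hypothetical thresholds and an illustrative allow-list for triaged false positives:

    # assessment.py - minimal sketch of an 'assessment framework'; each check
    # reduces raw test data to a pass/fail verdict plus a reason.
    MAX_P95_MS = 800                 # hypothetical response-time budget
    MAX_ERROR_RATE = 0.01            # hypothetical: 1% failed requests allowed
    ACCEPTED_FALSE_POSITIVES = {"X-Powered-By Header"}   # triaged ZAP alerts

    def check_performance(current, benchmark):
        """current/benchmark: dicts such as those stored by the Python perf module."""
        if current["p95_ms"] > MAX_P95_MS:
            return False, f"p95 {current['p95_ms']} ms exceeds {MAX_P95_MS} ms"
        if current["p95_ms"] > benchmark["p95_ms"] * 1.2:
            return False, "p95 degraded by more than 20% against the benchmark"
        error_rate = current["errors"] / max(current["requests"], 1)
        if error_rate > MAX_ERROR_RATE:
            return False, f"error rate {error_rate:.2%} exceeds {MAX_ERROR_RATE:.0%}"
        return True, "performance within agreed limits"

    def check_security(alerts):
        """alerts: the list of findings collected from the ZAP scan."""
        findings = sorted({a["alert"] for a in alerts
                           if a["risk"] in ("High", "Medium")
                           and a["alert"] not in ACCEPTED_FALSE_POSITIVES})
        if findings:
            return False, "unaccepted alerts: " + ", ".join(findings)
        return True, "no unaccepted medium/high security alerts"

    def assess(current, benchmark, alerts):
        """Collapse all checks into the single pass/fail reported to stakeholders."""
        results = [check_performance(current, benchmark), check_security(alerts)]
        for passed, reason in results:
            print("PASS" if passed else "FAIL", "-", reason)
        return all(passed for passed, _ in results)

In the pipeline, the return value of assess() would decide the exit code of the Jenkins step, so the build goes red as soon as either quality attribute regresses.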

 

Another challenge was that the team’s knowledge about non-functional testing was not homogeneous. We addressed this through training sessions explaining performance and security mechanisms, which the team appreciated and considered a success.

 

From a duration point of view, both performance and security tests can take quite some time to run. For instance, one might monitor performance degradation over a weekend to identify a memory leak that causes steadily increasing memory consumption. To cope with the growing cost of these kinds of tests when a cloud-based solution such as AWS is used, the machines can be scheduled for destruction once the tests have finished and the results have been saved to non-volatile storage.
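
A minimal sketch of that scheduled clean-up, assuming the test machines are EC2 instances tagged for the performance run and using boto3; the tag values and region are placeholders:

    # teardown.py - minimal sketch; terminates the EC2 instances used for a long
    # performance run once the results have been persisted elsewhere.
    import boto3  # assumes AWS credentials are available to the scheduled job

    ec2 = boto3.client("ec2", region_name="eu-west-1")   # placeholder region

    # Find the still-running instances that were tagged for this performance run.
    reservations = ec2.describe_instances(
        Filters=[{"Name": "tag:purpose", "Values": ["perf-weekend-run"]},
                 {"Name": "instance-state-name", "Values": ["running"]}]
    )["Reservations"]

    instance_ids = [i["InstanceId"]
                    for r in reservations for i in r["Instances"]]

    if instance_ids:
        # Only terminate after the results have been saved to MySQL/Elasticsearch.
        ec2.terminate_instances(InstanceIds=instance_ids)
        print("Terminated:", ", ".join(instance_ids))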

 

Results

 

We discovered potential problems very early. Based on the defect/cost graph, the savings are substantial, amounting to several days per release. In addition, thanks to the immediate feedback, the overall security and performance knowledge inside the sprint team increased, leading to more awareness during implementation. The team no longer sees non-functional defects as something they don’t know how to interpret; they now know how to start fixing them.

 

Evaluation

 

In terms of long-term objectives, the ‘shift performance and security tests left’ initiative has a notable impact on the quality of the delivered software. Previously, such issues were found very late, delaying the release of the product; now, performance and security issues can be identified during the sprint and fixed sooner rather than later. We evaluated success based on the adoption rate among engineering teams across projects.

 

One thing we could have done differently is the way we integrated the tests into CI/CD, as we did not have a clear plan from the start and had to adapt as the initiative progressed. Despite the challenges mentioned, the initiative is an ongoing success that gives all stakeholders a clearer view of the performance and security quality of an application.

 

In the future, the plan is to improve the model by creating a basic modular framework that could easily be ported to other projects. In doing so, the impact on the team and on the estimates would decrease while keeping the advantages of shifting non-functional tests left.

 
