Toward autonomous performance testing
Mahshid Helali Moghadam in Västerås has worked on novel approaches to performance testing since the start of her PhD studies. This short paper explains how we envision future autonomous performance testing. We are developing a smart tester agent, based on reinforcement learning, that learns to find the conditions that stress the system under test the most. To set the scene, a general blog post on performance testing by Saket S. follows below.
(Blogged by Saket S.)
Autonomous Performance Testing – Its need, types, and processes
In today’s world, we come across a large number of software-based services, and the importance of their quality is constantly growing. The properties of a cloud-based system are described in both functional and non-functional terms.
Non-functional attributes describe how well the system does its work and capture quality characteristics such as performance. Requirements are the primary means of addressing a software system’s functional and non-functional aspects. Non-functional requirements are described in terms of the software metrics that determine the system’s non-functional attributes, and assessing whether these requirements are satisfied plays a pivotal role in assuring that users’ quality expectations are met.
Performance, as a non-functional attribute, describes how efficiently a software system works under different execution conditions, such as different resource allocations and different kinds of workload. It is characterized by indices such as throughput, response time, and resource utilization. Practitioners use both performance modeling and performance testing for performance analysis. In general, performance modeling involves identifying the relevant performance indices and building a performance model in terms of those indices. Various model-driven engineering activities, such as model refactoring, model verification, and performance tuning, can then be carried out on the basis of that model.
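To make the indices concrete, they can be derived from a batch of timed requests. The following is a minimal sketch, not a standard API; the function names and the nearest-rank percentile choice are illustrative assumptions:

```python
import math

def percentile(values, p):
    """Nearest-rank percentile: smallest value with at least p% of the data at or below it."""
    ordered = sorted(values)
    k = max(1, math.ceil(p / 100 * len(ordered)))
    return ordered[k - 1]

def performance_indices(response_times, wall_clock_seconds):
    """Derive common performance indices from raw per-request measurements."""
    return {
        # throughput: completed requests per second over the measurement window
        "throughput_rps": len(response_times) / wall_clock_seconds,
        # response time: mean and tail latency of individual requests
        "mean_response_s": sum(response_times) / len(response_times),
        "p95_response_s": percentile(response_times, 95),
    }

# Example: 10 simulated response times collected over a 2-second window
samples = [0.10, 0.12, 0.11, 0.30, 0.09, 0.14, 0.13, 0.25, 0.11, 0.10]
print(performance_indices(samples, wall_clock_seconds=2.0))
```

Resource utilization would come from the platform (e.g. OS counters) rather than from the requests themselves, which is why it is omitted here.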
Performance testing, in turn, checks whether the implemented software works well under realistic execution conditions and meets its performance requirements. Several techniques have been proposed for building software performance models that offer useful hints about a system’s performance, but such models cannot capture every detail of the software system. For instance, details of the execution environment that have a remarkable influence on performance might be left out of the model.
Why do you need performance testing?
Users want the applications they work with to be responsive, and they do not want to face issues from the moment they start using a software system. Performance testing is therefore done to eliminate long load times, slow response times, bottlenecks, and poor scalability, and thereby ensure user-friendliness.
Types of performance testing
- Load testing – It examines the application’s behavior under the expected, realistic workload.
- Stress testing – It examines the application’s performance under extreme workloads.
- Endurance testing – It tests whether the software can handle the expected workload for an extended period.
- Spike testing – It examines the application’s reaction to sudden spikes in user-generated load.
- Volume testing – It checks the software’s performance under varying database volumes.
- Scalability testing – It determines how well the software scales as the user load increases.
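The common ingredient of these test types is a load generator that drives the system at a controlled intensity. A minimal closed-loop sketch is shown below; `call_service` is a hypothetical stand-in for a real request to the system under test, here simulated with a short sleep:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def call_service():
    """Hypothetical stand-in for one request to the system under test."""
    start = time.perf_counter()
    time.sleep(0.01)  # simulate ~10 ms of service time
    return time.perf_counter() - start

def run_load(concurrent_users, requests_per_user):
    """Drive the service with a fixed number of concurrent users and collect latencies."""
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        jobs = [pool.submit(call_service)
                for _ in range(concurrent_users * requests_per_user)]
        return [job.result() for job in jobs]

# Load testing runs this at the expected level; stress testing raises
# concurrent_users until latency or errors degrade; endurance testing
# keeps it running for hours; spike testing jumps the level suddenly.
latencies = run_load(concurrent_users=5, requests_per_user=4)
print(f"{len(latencies)} requests, max latency {max(latencies):.3f}s")
```

In practice a dedicated tool (e.g. a load-testing framework) plays this role; the sketch only illustrates how the test types differ mainly in how the load level is shaped over time.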
Process of performance testing
The objective of performance testing remains the same even if the chosen methodology varies. The testing ensures that your software system meets your expectations and lets you compare the performance of two software tools. Further, it helps you identify which parts of your system degrade under load. Below is the typical process used for performance testing:
- Identify the testing environment – As a tester, you should know both the physical test environment and the production environment, and have a clear idea of which testing tools are available.
- Identify the criteria for performance acceptance – These consist of the goals and limits for throughput, resource allocation, and response times.
- Plan and design performance tests – Determine how usage varies among end users, and identify the key scenarios needed to cover all likely use cases.
- Configure the test environment – Before execution, prepare the test environment and arrange the tools and resources needed for performance testing.
- Implement the test design – Build the performance tests according to your design.
- Run the performance tests – Execute the tests and monitor them closely.
- Analyze, tune, and retest – Analyze the results of the performance tests and share the outcomes. Retest whenever there is room for improvement or the performance is not as expected.
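The acceptance-criteria and analysis steps above can be sketched as a simple comparison of measured indices against agreed limits. The criterion names and thresholds below are illustrative assumptions, not from the original text:

```python
def check_acceptance(results, criteria):
    """Compare measured indices against acceptance criteria.

    `criteria` maps an index name to (limit, direction), where direction
    "max" means the measurement must stay at or below the limit and
    "min" means it must reach at least the limit.
    """
    failures = []
    for name, (limit, direction) in criteria.items():
        value = results[name]
        ok = value <= limit if direction == "max" else value >= limit
        if not ok:
            failures.append(f"{name}: measured {value}, limit {limit}")
    return failures

criteria = {
    "p95_response_s": (0.5, "max"),   # tail response time below 500 ms
    "throughput_rps": (100, "min"),   # at least 100 requests per second
    "cpu_utilization": (0.8, "max"),  # CPU should stay under 80%
}
measured = {"p95_response_s": 0.62, "throughput_rps": 140, "cpu_utilization": 0.71}
failures = check_acceptance(measured, criteria)
print(failures)  # a non-empty list means: tune, then retest
```

A non-empty failure list is exactly the "retest" trigger from the last step of the process.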
In software engineering, performance testing is crucial; no software product should go to market without it. It makes sure that the product will meet users’ needs and protects the investment made in it. Further, good performance is required for customer satisfaction, retention, and loyalty.
M. Helali Moghadam, M. Saadatmand, M. Borg, M. Bohlin, and B. Lisper. Machine Learning to Guide Performance Testing: An Autonomous Test Framework. In Proc. of the International Workshop on Testing Extra-Functional Properties and Quality Characteristics of Software Systems, 2019. (link, preprint)
Satisfying performance requirements is of great importance for performance-critical software systems. Performance analysis to provide an estimation of performance indices and ascertain whether the requirements are met is essential for achieving this target. Model-based analysis as a common approach might provide useful information but inferring a precise performance model is challenging, especially for complex systems. Performance testing is considered as a dynamic approach for doing performance analysis. In this work-in-progress paper, we propose a self-adaptive learning-based test framework which learns how to apply stress testing as one aspect of performance testing on various software systems to find the performance breaking point. It learns the optimal policy of generating stress test cases for different types of software systems, then replays the learned policy to generate the test cases with less required effort. Our study indicates that the proposed learning-based framework could be applied to different types of software systems and guides towards autonomous performance testing.
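The idea of learning a stress-testing policy can be illustrated with a toy sketch. This is emphatically not the paper's implementation: the actions, the simulated system, and all numbers are hypothetical. A tabular Q-learning agent picks stress actions against a simulated system whose response time degrades with accumulated stress, and is rewarded for reaching the performance breaking point quickly:

```python
import random

# Illustrative stress actions an agent might apply to the system under test
ACTIONS = ["cut_cpu", "cut_memory", "add_load"]

def simulate(state, action):
    """Hypothetical system under test: each action raises the stress level."""
    severity = {"cut_cpu": 2, "cut_memory": 1, "add_load": 3}[action]
    new_state = state + severity
    response_time = 0.1 * (1 + new_state)   # degrades with stress
    done = response_time >= 1.0             # performance breaking point
    reward = 10.0 if done else -1.0         # favor reaching it in few steps
    return new_state, reward, done

def train(episodes=200, alpha=0.5, gamma=0.9, epsilon=0.2, seed=0):
    """Tabular Q-learning over (stress level, action) pairs."""
    random.seed(seed)
    q = {}
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            if random.random() < epsilon:   # explore
                action = random.choice(ACTIONS)
            else:                           # exploit current estimates
                action = max(ACTIONS, key=lambda a: q.get((state, a), 0.0))
            nxt, reward, done = simulate(state, action)
            best_next = 0.0 if done else max(q.get((nxt, a), 0.0) for a in ACTIONS)
            old = q.get((state, action), 0.0)
            q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
            state = nxt
    return q

q = train()
print("greedy first action:", max(ACTIONS, key=lambda a: q.get((0, a), 0.0)))
```

The learned table plays the role of the "learned policy" in the abstract: once trained, the greedy policy regenerates stressing test conditions without re-exploring, which is the effort saving the paper describes.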