Successful Performance Testing Begins at Requirements | TechWell


Traditionally, performance testing occurs at or near the end of a project. Attempting to do all that is required in a performance test at the end of the development and functional testing process rarely, if ever, succeeds.

This approach is based on the idea that a complete system or application must exist in order to execute a performance test. While stability of the software is an essential aspect of performance testing, “stability of the software” does not necessarily refer to the entire system.

If problems are discovered that require changes to the system, the change may only require that the performance test be rerun. If the changes require alterations to the functional aspects of the software, it may be necessary to rerun all previous functional tests as well as their respective regression tests. Major changes to the software or system design may also require that all elements of the performance test be modified or recreated.

Changes to the design or architecture typically require tuning one factor at a time (OFAT). Changing multiple aspects of a system can result in the changes canceling each other out.

OFAT assumes the factors are mutually independent. When a system has many complex, poorly understood, or hidden interdependencies, OFAT can take too long. Alternatives, such as adjusting multiple factors at once, are harder to apply correctly and can lead to worse problems.
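The OFAT idea can be sketched in a few lines: sweep one factor while holding the others fixed, keep any improvement, then move to the next factor. The factor names, candidate values, and the stand-in measurement function below are illustrative assumptions, not from the article.

```python
# Hypothetical tunable factors and baseline values (illustrative only).
BASELINE = {"thread_pool_size": 16, "cache_ttl_s": 60, "batch_size": 100}
CANDIDATES = {
    "thread_pool_size": [8, 16, 32, 64],
    "cache_ttl_s": [30, 60, 120],
    "batch_size": [50, 100, 200],
}

def measure_response_time(config):
    """Stand-in for a real performance test run; returns mean latency (ms).

    A toy model: latency falls as the pool grows and rises with batch size.
    A real measurement would drive load against the system under test.
    """
    return 200 / config["thread_pool_size"] + config["batch_size"] * 0.05

def ofat_tune(baseline, candidates, measure):
    """Sequential OFAT: sweep one factor at a time, all others held fixed.

    A change is kept only if it improves the measurement, so each factor
    is evaluated in isolation -- the assumption OFAT rests on.
    """
    best = dict(baseline)
    best_score = measure(best)
    for factor, values in candidates.items():
        for value in values:
            trial = dict(best)
            trial[factor] = value        # vary exactly one factor
            score = measure(trial)
            if score < best_score:       # lower latency is better
                best, best_score = trial, score
    return best, best_score
```

Note that sequential OFAT like this only finds interactions indirectly: once a factor is improved, later factors are swept against the new setting, which is exactly why hidden interdependencies make the method slow or misleading.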

Most software is not built in a single, massive build; it tends to be built and delivered in increments or stages. Performance testing can also be performed incrementally within this process, utilizing parallel, overlapping feature and performance test activities and doing incremental performance tests as features become available.

Performance testing begins on the first increment once functional testing is complete. The development and functional test team continues creating new features while the first increment is performance tested. This approach works very well in the incremental or agile types of development processes. As each build completes its functional tests, it can then be moved into limited performance testing.
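One way to make this incremental gating concrete is a per-build performance check that runs only against the features delivered so far, with each later increment adding features and rerunning the same gate. The feature names and latency budgets below are hypothetical, for illustration only.

```python
# Hypothetical performance budgets (p95 latency, ms) per feature.
# Values are illustrative assumptions, not from the article.
PERF_BUDGET_MS = {"login": 300, "search": 500}

def gate_increment(build_results):
    """Given measured p95 latencies (ms) for the features delivered in a
    build, return the features that exceed their performance budget.

    Features without a defined budget are skipped, so early increments
    can be gated on whatever subset of features exists at the time.
    """
    return [
        feature
        for feature, latency_ms in build_results.items()
        if feature in PERF_BUDGET_MS and latency_ms > PERF_BUDGET_MS[feature]
    ]

# Increment 1 delivers only login and search; the gate flags any
# feature that is over budget before more functionality piles on.
failures = gate_increment({"login": 280, "search": 620})
```

Running a limited gate like this per increment is what gives the early feedback the article describes: an over-budget feature is flagged while the design around it is still small enough to change cheaply.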

Discovery of performance issues in the early builds allows more time to correct the design. The impact of design changes is lower as fewer functions exist. Using static methods can further enhance the early incremental performance testing approach. Static testing in this instance is often referred to as performance engineering.

This is by far the best approach: applying performance analysis to the requirements and design before the software is built. In his book The Art of Application Performance Testing, Ian Molyneaux noted, “If you don’t take performance considerations into account during application design, you are asking for trouble.”

By including critical performance-related features and elements in early builds, we can take advantage of the incremental nature of the development process and avoid engineering potential performance issues into the system.

This requires including performance testers from the earliest stages of the development process. It also requires that project managers and development managers agree to the process. This may mean a slightly slower early delivery cadence, as some elements that represent performance risks also require more time and resources to develop and, consequently, to test.
