Saturday, July 27, 2013

Performance Testing actions using LoadRunner

Performance Tests are tests that determine the end to end timing (benchmarking) of various time critical business processes and transactions while the system is under low load, but with a production sized database. This sets the ‘best possible’ performance expectation under a given configuration of infrastructure. It also highlights, very early in the testing process, whether changes need to be made before load testing is undertaken. For example, a customer search may take 15 seconds in a full sized database if indexes have not been applied correctly, or if an SQL 'hint' was incorporated in a statement that had been optimized against a much smaller database. Performance testing would highlight such a slow customer search transaction, which could then be remediated prior to a full end to end load test.

It is 'best practice' to develop performance tests with an automated tool, such as WinRunner, so that response times from a user perspective can be measured in a repeatable manner with a high degree of precision. The same test scripts can later be re-used in a load test and the results can be compared back to the original performance tests.
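By way of illustration only (this is not WinRunner or LoadRunner script syntax), the sketch below shows the kind of measurement such a tool automates: timing the same transaction end to end, from the client's perspective, over a number of iterations. The endpoint URL and the iteration count are hypothetical placeholders.

import time
import urllib.request

SEARCH_URL = "http://test-server/customer/search?surname=Smith"  # hypothetical endpoint

def timed_transaction(url, timeout=30):
    """Return the end-to-end response time of one request, in seconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=timeout) as response:
        response.read()  # include the full download in the measurement
    return time.perf_counter() - start

if __name__ == "__main__":
    timings = [timed_transaction(SEARCH_URL) for _ in range(20)]
    print("min %.3fs  avg %.3fs  max %.3fs"
          % (min(timings), sum(timings) / len(timings), max(timings)))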
Repeatability

A key indicator of the quality of a performance test is repeatability. Re-executing a performance test multiple times should give the same set of results each time. If it does not, then when the application, configuration or environment does change, differences in results from one run to the next cannot be attributed to that change, because they may simply be noise in the test itself.
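As a rough sketch of how such a check might be automated, the comparison below treats two runs of the same test as equivalent when their average response times agree within a 10% tolerance; both the tolerance and the sample timings are invented for illustration.

import statistics

def is_repeatable(run_a, run_b, tolerance=0.10):
    """Treat two runs of the same test as equivalent if their average
    response times agree within the tolerance (10% is an arbitrary choice)."""
    mean_a, mean_b = statistics.mean(run_a), statistics.mean(run_b)
    return abs(mean_a - mean_b) / mean_a <= tolerance

# Two runs of the same 'customer search' test, timings in seconds (invented)
run_1 = [1.8, 1.9, 2.0, 1.9, 2.1]
run_2 = [1.9, 2.0, 1.8, 2.0, 2.2]
print(is_repeatable(run_1, run_2))  # True - run-to-run variation is within tolerance

If a check like this fails against an unchanged system, the test itself (its data, environment or scripting) needs attention before its results can be trusted.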
Performance Tests Precede Load Tests

The best time to execute performance tests is at the earliest opportunity after the content of a detailed load test plan has been determined. Developing performance test scripts at such an early stage provides an opportunity to identify and remediate serious performance problems, and to manage expectations, before load testing commences.

For example, management expectations of response time for a new web system that replaces a block mode terminal application are often articulated as 'sub second'. However, a web system, in a single screen, may perform the business logic of several legacy transactions and may take 2 seconds. Rather than waiting until the end of a load test cycle to inform the stakeholders that the test failed to meet their formally stated expectations, a little education up front may be in order. Performance tests provide a means for this education.

Another key benefit of performance testing early in the load testing process is the opportunity to fix serious performance problems before even commencing load testing.

A common example is one or more missing indexes. When performance testing of a "customer search" screen yields response times of more than ten seconds, there may well be a missing index or a poorly constructed SQL statement. By raising such issues prior to commencing formal load testing, developers and DBAs can check that indexes have been set up properly.
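The effect of a missing index can be demonstrated in miniature with SQLite; the table, row count and query below are invented purely to show the before-and-after difference that a proper index makes to a search:

import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, surname TEXT)")
conn.executemany("INSERT INTO customer (surname) VALUES (?)",
                 (("Surname%d" % i,) for i in range(500000)))

def timed_search(surname):
    start = time.perf_counter()
    conn.execute("SELECT id FROM customer WHERE surname = ?", (surname,)).fetchall()
    return time.perf_counter() - start

print("without index: %.4fs" % timed_search("Surname499999"))  # full table scan
conn.execute("CREATE INDEX idx_customer_surname ON customer (surname)")
print("with index:    %.4fs" % timed_search("Surname499999"))  # index lookup

On a production sized table the difference is far more dramatic, which is exactly what an early performance test against production like data is intended to expose.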

Performance problems that relate to the size of data transmissions also surface in performance tests when low bandwidth connections are used. For example, content such as images and "terms and conditions" text is often not optimized for transmission over slow links.
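A rough way to surface this early is to record payload sizes alongside response times and flag anything that would be slow on a constrained link. The sketch below assumes a hypothetical 128 kbit/s connection, placeholder URLs and an arbitrary 2 second flagging threshold:

import urllib.request

LINK_KBITS_PER_SEC = 128  # assumed low-bandwidth connection
PAGES = [                 # placeholder URLs
    "http://test-server/terms-and-conditions",
    "http://test-server/images/logo.png",
]

for url in PAGES:
    with urllib.request.urlopen(url, timeout=30) as response:
        size_bytes = len(response.read())
    transfer_secs = size_bytes * 8 / (LINK_KBITS_PER_SEC * 1000)
    flag = "  <-- review for slow links" if transfer_secs > 2 else ""
    print("%-45s %8d bytes  ~%.1fs at %d kbit/s%s"
          % (url, size_bytes, transfer_secs, LINK_KBITS_PER_SEC, flag))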
Pre-requisites for Performance Testing

A performance test is not valid until the data in the system under test is realistic and the software and configuration are production like. The following lists the pre-requisites for valid performance testing, along with the limited testing that can be conducted before each pre-requisite is satisfied:
Production Like Environment
Comment: Performance tests need to be executed on the same specification equipment as production if the results are to have integrity.
Caveat: Lightweight transactions that do not require significant processing can be tested, but only substantial deviations from expected transaction response times should be reported. Low bandwidth performance testing of high bandwidth transactions, where communications processing contributes most of the response time, can also be conducted.

Production Like Configuration
Comment: The configuration of each component needs to be production like. For example: database configuration and operating system configuration.
Caveat: While system configuration will have less impact on performance testing than on load testing, only substantial deviations from expected transaction response times should be reported.

Production Like Version
Comment: The version of software to be tested should closely resemble the version to be used in production.
Caveat: Only major performance problems, such as missing indexes and excessive communications, should be reported when testing a version substantially different from the proposed production version.

Production Like Access
Comment: If clients will access the system over a WAN, dial-up modems, DSL, ISDN, etc., then testing should be conducted using each communication access method. See Network Sensitivity Tests for more information on testing WAN access.
Caveat: Only tests using production like access are valid.

Production Like Data
Comment: All relevant tables in the database need to be populated with a production like quantity and a realistic mix of data. For example, having one million customers, 999,997 of which have the name "John Smith", would produce some very unrealistic responses to customer search transactions.
Caveat: Low bandwidth performance testing of high bandwidth transactions, where communications processing contributes most of the response time, can still be conducted.

Documenting Response Time Expectations

Rather than simply stating that all transactions must be 'sub second', a more comprehensive specification for response time needs to be defined and agreed to by the relevant stakeholders.

One suggestion is to state an Average and a 90th Percentile response time for each group of transactions that are time critical. In a set of 100 values that are sorted from best to worst, the 90th percentile simply means the 90th value in the list.
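Following that definition, a minimal calculation might look like the sketch below (the sample timings are invented):

def percentile_90(timings):
    """The 90th percentile as defined above: the 90th value of 100 sorted values."""
    ordered = sorted(timings)
    return ordered[max(int(len(ordered) * 0.9) - 1, 0)]

timings = [1.2, 1.4, 1.5, 1.5, 1.6, 1.8, 2.0, 2.3, 2.9, 6.4]  # seconds, invented
print("average: %.2fs" % (sum(timings) / len(timings)))       # 2.26s
print("90th percentile: %.2fs" % percentile_90(timings))      # 2.90s

Note how a single slow outlier (6.4 seconds) lifts the average well above what most users experience; stating a percentile alongside the average keeps the specification honest.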

Executing Performance Tests

Performance testing involves executing the same test case multiple times with data variations for each execution, and then collating response times and computing response time statistics to compare against the formal expectations. Often, performance is different when the data used in the test case is different, as different numbers of rows are processed in the database, different processing and validation come into play, and so on.

By executing a test case many times with different data, a statistical measure of response time can be computed and compared directly against the formally stated expectation.
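Putting the pieces together, a data-varied performance test run might be driven by something like the sketch below, which reuses the hypothetical timed_transaction() and percentile_90() helpers from the earlier sketches; the surname list and the response time targets are placeholders for whatever the stakeholders actually agree to:

SURNAMES = ["Smith", "Nguyen", "Garcia", "Patel", "Kowalski"]  # placeholder test data
TARGET_AVG, TARGET_P90 = 2.0, 3.0  # agreed expectations in seconds (placeholders)

timings = []
for surname in SURNAMES:
    url = "http://test-server/customer/search?surname=" + surname  # hypothetical endpoint
    for _ in range(10):  # several iterations per data variation
        timings.append(timed_transaction(url))   # from the earlier timing sketch

average = sum(timings) / len(timings)
p90 = percentile_90(timings)                      # from the earlier percentile sketch
print("average %.2fs (target %.1fs), 90th percentile %.2fs (target %.1fs)"
      % (average, TARGET_AVG, p90, TARGET_P90))
print("PASS" if average <= TARGET_AVG and p90 <= TARGET_P90 else "FAIL")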
