The following are the ideal requirements to address when developing a performance test plan:
• Deadlines available to complete performance testing, including the scheduled deployment date.
• Whether to use internal or external resources to perform the tests. This will largely depend on time scales and in-house expertise (or lack thereof).
• Test environment design agreed upon. Remember that the test environment should be as close an approximation of the live environment as you can achieve and will require longer to create than you estimate.
• Ensuring that a code freeze applies to the test environment within each testing cycle.
• Ensuring that the test environment will not be affected by other user activity. Nobody else should be using the test environment while performance test execution is taking place; otherwise, there is a danger that the test execution and results may be compromised.
• All performance targets identified and agreed to by appropriate business stakeholders. This means consensus from all involved and interested parties on the performance targets for the application.
• The key application transactions identified, documented, and ready to script. Remember how vital it is to have correctly identified the key transactions to script. Otherwise, your performance testing is in danger of becoming a wasted exercise.
• Which parts of transactions (such as login or time spent on a search) should be monitored separately. This will be used in Step 3 for “checkpointing.”
• Identify the input, target, and runtime data requirements for the transactions that you select. This critical consideration ensures that the transactions you script run correctly and that the target database is realistically populated in terms of size and content. Data is critical to performance testing. Make sure that you can create enough test data of the correct type within the time frames of your testing project. You may need to look at some form of automated data management, and don’t forget to consider data security and confidentiality.
• Performance tests identified in terms of number, type, transaction content, and virtual user deployment. You should also have decided on the think time, pacing, and injection profile for each test transaction deployment.
• Identify and document server, application server, and network KPIs. Remember that you must monitor the application landscape as comprehensively as possible to ensure that you have the necessary information available to identify and resolve any problems that occur.
• Identify the deliverables from the performance test in terms of a report on the test’s outcome versus the agreed performance targets. It’s a good practice to produce a document template that can be used for this purpose.
• A procedure defined for submitting performance defects discovered during testing cycles to the development team or application vendor. This is an important consideration that is often overlooked. What happens if, despite your best efforts, you find major application-related problems? You need to build contingency into your test plan to accommodate this possibility. There may also be the added complexity of involving offshore resources in the defect submission process. If you plan to carry out the performance testing in-house, you will also need to address additional points relating to the testing team.
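To illustrate the "checkpointing" idea from the list above, here is a minimal sketch of timing parts of a transaction (such as login or a search) separately so each can be reported against its own target. The transaction steps and sleep calls are stand-ins, not taken from any particular tool:

```python
import time
from contextlib import contextmanager

# Collected checkpoint timings, keyed by checkpoint name.
timings = {}

@contextmanager
def checkpoint(name):
    """Time a named part of a transaction separately from the whole."""
    start = time.perf_counter()
    try:
        yield
    finally:
        timings[name] = time.perf_counter() - start

# Hypothetical scripted transaction: each sub-step is wrapped in its own
# checkpoint so its response time can be monitored independently.
with checkpoint("login"):
    time.sleep(0.05)   # stand-in for the real login request
with checkpoint("search"):
    time.sleep(0.10)   # stand-in for the real search request

for name, elapsed in timings.items():
    print(f"{name}: {elapsed:.3f}s")
```

Commercial load-testing tools provide their own checkpoint or "timer" constructs; the point is simply that the parts you flagged for separate monitoring get their own named measurements in the results.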
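On the test data point: as a sketch of the kind of automated data management mentioned above, the following generates unique, synthetic test accounts in the CSV form many load-test tools consume. Because the data is fabricated, it also sidesteps the confidentiality concern; all field names and values here are illustrative assumptions:

```python
import csv
import io
import random
import uuid

def generate_accounts(n, seed=42):
    """Generate n unique, synthetic test accounts (no real customer data)."""
    rng = random.Random(seed)  # seeded so data creation is repeatable
    accounts = []
    for i in range(n):
        accounts.append({
            "username": f"perfuser{i:05d}",          # guaranteed unique
            "password": uuid.uuid4().hex[:12],       # throwaway credential
            "region": rng.choice(["EMEA", "APAC", "AMER"]),
        })
    return accounts

# Write the accounts out as CSV, a common input format for load-test tools.
accounts = generate_accounts(1000)
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["username", "password", "region"])
writer.writeheader()
writer.writerows(accounts)
```

A generator like this scales to whatever data volume your tests need; for realistic target-database population you would extend the same approach to bulk-load the backing store as well.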
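The think time, pacing, and injection-profile decisions above can be sketched as simple calculations. This is an illustrative model (linear ramp-up, randomised think time, fixed-interval pacing), not the behaviour of any specific tool:

```python
import random

def injection_schedule(total_users, ramp_up_seconds):
    """Linear ramp-up injection profile: the start offset (in seconds)
    for each virtual user, rather than starting all users at once."""
    step = ramp_up_seconds / total_users
    return [round(i * step, 2) for i in range(total_users)]

_rng = random.Random(0)  # seeded so the demo is repeatable

def think_time(base=3.0, variation=0.2):
    """Randomised think time: base seconds +/- variation, so virtual
    users do not submit requests in lock-step."""
    return base * _rng.uniform(1 - variation, 1 + variation)

def pacing_wait(iteration_elapsed, pacing=60.0):
    """Pacing: if a transaction iteration finishes early, wait out the
    remainder so each user starts an iteration every `pacing` seconds."""
    return max(0.0, pacing - iteration_elapsed)

# Example: 10 virtual users injected over a 30-second ramp-up.
schedule = injection_schedule(total_users=10, ramp_up_seconds=30)
```

Deciding these values per test transaction deployment, as the plan requires, is what turns a raw script into a realistic workload model.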