SLAs, or Service Level Agreements, are often a sought-after piece of information when gathering requirements before performance testing begins.
The premise is that the SLA constitutes a measurable requirement that can be tested and marked as 'pass' or 'fail' accordingly.
SLAs take two main forms:
1. System availability. A typical SLA here might be: 'the application is required to be available 23.5 hours per day, every day except Sundays.'
2. Response time. A typical SLA here might be: 'all user transactions must respond to the user within 2 seconds.'
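To see why 'pass or fail' is less clear-cut than it sounds, here is a minimal sketch in Python, with invented sample data, of judging that 2-second response-time SLA. Even this toy check forces decisions the SLA wording rarely makes for you, such as which percentile to measure against.

```python
import statistics

# Invented response-time samples (seconds) for one user transaction,
# e.g. as exported from a load-test tool's results file.
samples = [1.2, 1.8, 0.9, 2.4, 1.5, 3.1, 1.1, 1.7, 2.0, 1.3]

SLA_SECONDS = 2.0   # the '2 seconds' figure from the SLA
PERCENTILE = 95     # an assumption: the SLA text rarely says which percentile

# statistics.quantiles(n=100) returns the 1st..99th percentile cut points.
p95 = statistics.quantiles(samples, n=100)[PERCENTILE - 1]

print(f"95th percentile: {p95:.2f}s -> "
      f"{'pass' if p95 <= SLA_SECONDS else 'fail'}")
```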
Performance testers normally focus on the second type of SLA, response time, as the system-availability SLA cannot easily be tested.
Often an SLA for response times can be found, usually as a reference in a design document. Caution must be exercised: when an application is designed, only high-level requirements are captured, and an SLA at this stage is not normally a mandatory requirement, merely a guideline, a statement of understanding.
At Testing Performance, we do not treat SLAs as measurable requirements. Let's take a typical user journey; it involves:
1. Login
2. Menu selection
3. Completion of form ‘A’
4. Completion of form ‘B’
5. Completion of form ‘C’
6. Submission of all data to be written to the database.
While it may be reasonable for steps 2 to 5 to respond within a couple of seconds, the login and data-submission steps will almost certainly take significantly longer than 2 seconds.
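To make the difference between the steps concrete, here is a minimal sketch in Python of timing each step of such a journey separately. The step names and durations are invented, and time.sleep stands in for real HTTP requests; the point is simply that the steps do very different amounts of work, so a single blanket threshold fits none of them well.

```python
import time

def timed(name, action):
    """Run one user action and report how long it took (a stand-in
    for what a load-test tool records as a 'transaction')."""
    start = time.perf_counter()
    action()
    elapsed = time.perf_counter() - start
    print(f"{name}: {elapsed:.2f}s")
    return elapsed

# Invented steps: sleep durations mimic the relative cost of each action.
journey = [
    ("login",          lambda: time.sleep(3.2)),  # auth + session setup
    ("menu_selection", lambda: time.sleep(0.4)),
    ("form_a",         lambda: time.sleep(0.6)),
    ("form_b",         lambda: time.sleep(0.5)),
    ("form_c",         lambda: time.sleep(0.7)),
    ("submit",         lambda: time.sleep(4.1)),  # database update
]

results = {name: timed(name, action) for name, action in journey}
```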
By the time the performance tester gets their hands on the application, it is almost certainly too late to rework the login or data-submission steps so they take less than 2 seconds. In fact, designers given the task of reducing the response time would look at splitting that single user action into two or more user actions. This would of course be no quicker for the end user, but it would meet the response-time SLA.
Performance testing to an SLA requirement is really a red herring. The project is much better off examining the efficiency of the code and the application, ensuring that the application responds as quickly as possible given the amount of work each user action has to do. This can take place in two ways:
1. The performance tester can analyse the response times of user actions at a low workload. Any user action whose response time seems higher than expected can be traced, monitored and checked to determine whether there are any inefficiencies.
2. As the workload is increased, the performance tester can watch how transaction response times deviate from that low-workload baseline (see the sketch after this list).
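A minimal sketch of this second approach follows, again in Python with invented figures. Each transaction is compared against its own low-workload baseline rather than a fixed SLA number, so a login that is slow in absolute terms but stable under load is not flagged, while a cheap menu selection that triples in response time is.

```python
# Low-workload baseline response times (seconds); figures are invented.
baseline = {"login": 3.2, "menu_selection": 0.4, "submit": 4.1}

# Response times observed as the number of concurrent users is increased.
observed = {
    50:  {"login": 3.3, "menu_selection": 0.5, "submit": 4.3},
    100: {"login": 3.5, "menu_selection": 0.6, "submit": 5.9},
    200: {"login": 4.8, "menu_selection": 1.4, "submit": 11.2},
}

THRESHOLD = 1.5  # flag anything 50% slower than its own baseline

for users, timings in observed.items():
    for name, seconds in timings.items():
        ratio = seconds / baseline[name]
        if ratio > THRESHOLD:
            print(f"at {users} users, {name} is {ratio:.1f}x its baseline "
                  f"({seconds:.1f}s vs {baseline[name]:.1f}s) -> investigate")
```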