An automated test script needs to be designed in a specific way: it must incorporate logon, the key steps of the business function, and logoff.
Logon and logoff may occur just once. A good example is a helpdesk user who is receiving and logging calls throughout the day. They log in when they arrive at the office and use the application throughout the day.
Logon and logoff may need to be simulated for each business function executed. An expenses application is a good example. Once a month, a user connects, enters their expense claim then logs off.
Whether logon and logoff occur once per test execution or once per iteration, the automated test script usually needs to use different authentication details for each logon.
Occasionally there are no restrictions on the number of times a user can log in to an application. Take webmail, for instance: it is usually possible to access your mail from more than one workstation at the same time. More commonly, though, a user can have only a single session with an application. When trying to log in while already connected, the user may be returned a message such as 'already logged in', or may find that the second login works but the first session is terminated.
The performance tester needs to be able to substitute the recorded details in an automated test script with values from a large list.
User1/password1, User2/password2, User3/password3...
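Outside any particular tool, the substitution can be sketched in a few lines of Python. This is a minimal illustration only, assuming a hypothetical CSV parameter file with `user` and `password` columns (simulated here in memory), not the implementation any real tool uses:

```python
import csv
import io
import itertools

# Stand-in for a parameter file on disk (e.g. a hypothetical users.csv).
PARAM_FILE = io.StringIO(
    "user,password\n"
    "User1,password1\n"
    "User2,password2\n"
    "User3,password3\n"
)

# Load the user/password pairs from the parameter "file".
credentials = [(row["user"], row["password"]) for row in csv.DictReader(PARAM_FILE)]

# Each simulated logon takes the next pair, wrapping around when the list
# is exhausted - much as a tool iterates through a parameter file.
next_credential = itertools.cycle(credentials)
first_four = [next(next_credential) for _ in range(4)]  # wraps back to User1
```

The cycling behaviour matters: with more iterations than rows in the list, the script simply starts again from the top rather than failing.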
This is a basic requirement in performance test tools. TPNS (now called WSIM or Workload Simulator) was the original performance test tool. That tool used a concept called User tables. The market leading tool, HP’s Loadrunner does more or less exactly the same thing as TPNS calling it instead parameter files. As Loadrunner is the market leading tool, ‘Parameter’ is the accepted term meaning the substitution of generic data.
So how does parameterisation of common data work? The following steps may help:
Determine which data needs to be parameterised.
When entering a postcode or zipcode, does the application check that the postcode/zipcode is valid? What is the impact on the application of entering the same value for every user in every iteration? This question needs to be considered for every piece of entered data in the automated test script.
Be clever: a userID may need to be different each time it is used, but the password may be the same for all users. In that case, only the userID needs to be parameterised.
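The same point in miniature: only the userID varies, while the password stays constant, so only one column of data needs to be maintained. The fixed password below is a made-up example value:

```python
FIXED_PASSWORD = "Welcome1"  # hypothetical shared password for all test users

def credential(i):
    # Only the userID is parameterised; the password is a constant.
    return (f"User{i}", FIXED_PASSWORD)

logons = [credential(i) for i in range(1, 4)]
```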
Where is the data going to come from?
Some data will be supplied by system administrators such as userids or standing data.
Some data can be found on the internet; for instance, postcode files of varying accuracy can sometimes be located.
Other data needs to be created. Before an order can be amended, it must first be created. Your automated test tool can be used to generate this data. On occasion, automated test scripts are created just to build data and are not subsequently used in a load test.
Correlation can be used to reference data that has been returned to the user's screen. A good example is a function whereby a manager approves an expense claim. The manager searches for any outstanding actions they are required to complete, and one or more approval requests may be returned. The automated test tool can be told to look for specific information in the returned screen. If an approval request is returned, code in the automated test script can trap this information for use in a subsequent action (this is correlation). With the returned data correlated, the approvals can then be actioned.
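Stripped of any tool-specific syntax, correlation amounts to capturing a dynamic value out of a response and reusing it in the next request. A sketch, assuming a hypothetical HTML fragment returned to the manager's screen:

```python
import re

# Hypothetical fragment of the HTML returned when the manager searches
# for outstanding actions; the claim IDs are the dynamic values.
response_body = """
<table id="outstanding-actions">
  <tr><td>Expense claim</td><td><a href="/approve?claim=4711">Approve</a></td></tr>
  <tr><td>Expense claim</td><td><a href="/approve?claim=4712">Approve</a></td></tr>
</table>
"""

# "Correlate": trap the dynamic claim IDs out of the response...
claim_ids = re.findall(r"/approve\?claim=(\d+)", response_body)

# ...so they can drive the subsequent approval requests.
approval_urls = [f"/approve?claim={cid}" for cid in claim_ids]
```

A recorded script would have the IDs from recording time hard-coded; correlation replaces them with whatever IDs actually come back at run time.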
Extract from the database. Using SQL queries, data can be extracted from the database and built into a parameter file. This is a good method, as data in the database is generally accurate. An example is amending an order: an automated test script has already built some orders, but you don't know which orders were built correctly and which failed part way through. An SQL query with a properly defined 'where' clause can extract a valid list of data that can be used with confidence.
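The extraction step can be sketched with Python's built-in sqlite3 module standing in for the application database. The table name, columns, and status values are assumptions for illustration:

```python
import sqlite3

# In-memory stand-in for the application database (hypothetical schema).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (order_id INTEGER, status TEXT)")
db.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [(1001, "COMPLETE"), (1002, "FAILED"), (1003, "COMPLETE")],
)

# A properly defined WHERE clause keeps only the orders that were built
# correctly, giving a parameter list that can be used with confidence.
rows = db.execute(
    "SELECT order_id FROM orders WHERE status = 'COMPLETE' ORDER BY order_id"
).fetchall()
valid_order_ids = [order_id for (order_id,) in rows]
```

The resulting list would then be written out in the parameter-file format the test tool expects.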