Wednesday, February 26, 2014

Prepare JMeter Reports Using Excel Macros | JMeter Report Automation Using Excel Macros

To create an Excel macro that reads a JMeter JTL file and prepares reports in the Excel sheet, you can follow these general steps:
  • Open Microsoft Excel and create a new workbook.
  • Press ALT + F11 to open the Visual Basic Editor.
  • In the Visual Basic Editor, insert a new module by right-clicking on the project and selecting "Insert > Module."
  • In the new module, create a new subroutine that reads the JMeter JTL file and builds the report. The code below assumes the JTL file was saved in CSV format (the JMeter default):
Sub GenerateJMeterReportWithColor()
    
    ' Set the path to the JTL file
    Dim jtlFilePath As String
    jtlFilePath = "C:\Path\To\JMeter.jtl"
    
    ' Open the JTL file
    Dim jtlFile As Object
    Set jtlFile = CreateObject("Scripting.FileSystemObject").OpenTextFile(jtlFilePath, 1)
    
    ' Set up the Excel worksheet
    Dim reportSheet As Worksheet
    Set reportSheet = ThisWorkbook.Sheets.Add(After:=ThisWorkbook.Sheets(ThisWorkbook.Sheets.Count))
    reportSheet.Name = "JMeter Report"
    
    ' Write the report header
    reportSheet.Range("A1:D1").Merge
    reportSheet.Range("A1:D1").Value = "JMeter Report"
    reportSheet.Range("A2:D2").Merge
    reportSheet.Range("A2:D2").Value = "Generated on " & Now()
    reportSheet.Range("A1:D2").HorizontalAlignment = xlCenter
    reportSheet.Range("A1:D2").Font.Bold = True
    reportSheet.Range("A1:D2").Font.Size = 14
    
    ' Write the column headers (raw per-sample fields from the JTL file)
    reportSheet.Range("A4").Value = "Sample Label"
    reportSheet.Range("B4").Value = "Elapsed Time (ms)"
    reportSheet.Range("C4").Value = "Response Code"
    reportSheet.Range("D4").Value = "Success"
    reportSheet.Range("A4:D4").Font.Bold = True
    
    ' Read the JTL file and populate the report.
    ' This assumes the default CSV field order:
    ' timeStamp,elapsed,label,responseCode,responseMessage,threadName,dataType,success,...
    Dim line As String
    Dim values As Variant
    Dim rowIndex As Long
    rowIndex = 5
    Do Until jtlFile.AtEndOfStream
        line = jtlFile.ReadLine
        ' Skip the optional header row written by JMeter
        If InStr(1, line, "timeStamp", vbTextCompare) <> 1 Then
            values = Split(line, ",")
            If UBound(values) >= 7 Then
                reportSheet.Cells(rowIndex, 1).Value = values(2) ' label
                reportSheet.Cells(rowIndex, 2).Value = values(1) ' elapsed time in ms
                reportSheet.Cells(rowIndex, 3).Value = values(3) ' response code
                reportSheet.Cells(rowIndex, 4).Value = values(7) ' success flag (true/false)
                If LCase(values(7)) = "true" Then ' sample passed
                    reportSheet.Cells(rowIndex, 4).Interior.Color = RGB(198, 239, 206) ' Light green color
                Else
                    reportSheet.Cells(rowIndex, 4).Interior.Color = RGB(255, 199, 206) ' Light red color
                End If
                rowIndex = rowIndex + 1
            End If
        End If
    Loop
    
    ' Format the report
    reportSheet.Columns("A:D").AutoFit
    reportSheet.Range("A5:D" & rowIndex - 1).Borders.LineStyle = xlContinuous
    
    ' Close the JTL file
    jtlFile.Close
    
End Sub

That's it. Run the macro and it will read the JTL file and build the report sheet.

Mobile Performance Testing Checklist


Smartphones and tablets are now a reality, and a large number of people use them for business applications, entertainment, social networking, healthcare applications and more. It is essential for businesses to assess the performance of their mobile apps before releasing them to the public. The responsiveness of a mobile app is one of the biggest factors in capturing the market.

Below is a list of things to consider that are specific to mobile performance testing.
Client side considerations
  • Application performance on different mobile phones in the market
    • CPU utilization
    • Memory utilization
    • I/O
    • Cache size availability
    • Rendering (2D / 3D)
  • Application performance on different mobile phones & browsers in the market
    • JS engine processing
    • Number of threads executing the requests
  • Memory leakage by the application
  • Battery consumption by the application
  • Internet data usage by the application
  • Offline data usage by the application
  • Different sizes of images for different mobile phones
  • Analyzing the waterfall chart for the mobile traffic
    • Compressed data
    • Caching (HTML5 Web Storage)
    • Fewer round trips
    • Image size
    • Consolidating resources

Server side considerations
  • Analyzing the impact on server resources (CPU, memory, I/O) due to slow data transfer rates from server to mobile devices
  • Simulating scenarios where the user moves from Wi-Fi to 3G, 4G, etc.
  • Simulating scenarios where connections get dropped
  • Simulating slow sessions and always-connected sessions
  • Simulating a large number of active sessions for a long time
  • Simulating unidirectional server updates
  • Simulating large data transfers between the server and the client
  • Static resources being served from CDNs or proxies
  • Simulating traffic from different regions / cities
  • Simulating different network characteristics
    • Latency
    • Packet loss
    • Jitter
    • Bandwidth (2G, 3G, Wi-Fi, etc.)
  • Simulating web-based application scenarios
  • Simulating native application scenarios
  • Simulating hybrid application scenarios
  • Simulating secure web-based applications
  • Simulating secure native applications

Apache JMeter Tool

The Apache JMeter™ desktop application is open source software, a 100% pure Java application designed to load test functional behavior and measure performance. It was originally designed for testing Web Applications but has since expanded to other test functions.

What can I do with it? 

Apache JMeter may be used to test performance both on static and dynamic resources (files, Web dynamic languages - PHP, Java, ASP.NET, etc. -, Java Objects, databases and queries, FTP servers and more). It can be used to simulate a heavy load on a server, group of servers, network or object to test its strength or to analyze overall performance under different load types. You can use it to make a graphical analysis of performance or to test your server/script/object behavior under heavy concurrent load.
What does it do?

Apache JMeter features include:
  • Ability to load and performance test many different server/protocol types:
  • Web - HTTP, HTTPS
  • SOAP
  • FTP
  • Database via JDBC
  • LDAP
  • Message-oriented middleware (MOM) via JMS
  • Mail - SMTP(S), POP3(S) and IMAP(S)
  • MongoDB (NoSQL)
  • Native commands or shell scripts
  • TCP
  • Complete portability and 100% Java purity.
  • Full multithreading framework allows concurrent sampling by many threads and simultaneous sampling of different functions by separate thread groups.
  • Careful GUI design allows faster Test Plan building and debugging.
  • Caching and offline analysis/replaying of test results.
  • Highly extensible core:
  • Pluggable Samplers allow unlimited testing capabilities.
  • Several load statistics may be chosen with pluggable timers.
  • Data analysis and visualization plugins allow great extensibility as well as personalization.
  • Functions can be used to provide dynamic input to a test or provide data manipulation.
  • Scriptable Samplers (BeanShell, BSF-compatible languages and JSR223-compatible languages).

LOAD DISTRIBUTION IN JMETER TEST PLAN

Distribute concurrent users using the following JMeter elements:
  1. Ultimate Thread Group
  2. Stepping Thread Group
  3. Throughput Controller
1.) Ultimate Thread Group



The Ultimate Thread Group supports:
  • an unlimited number of schedule records
  • a separate ramp-up time, shutdown time and flight (hold) time for each schedule record

Let's configure the Ultimate Thread Group as follows:



2.) Stepping Thread Group:


The Stepping Thread Group provides:
  • a preview graph showing the estimated load (see the example screen below)
  • an initial thread group delay, to combine several thread group activities
  • load increase by portions of threads (users) with a ramp-up period
  • a configurable hold time after all threads have started
  • load decrease by portions

The following screenshot shows a Stepping Thread Group set to generate load that increases by 10 users every 2 minutes:



3.) Throughput Controller:




The Throughput Controller allows the user to control how often it is executed. There are two modes – percent execution and total executions. Percent executions cause the controller to execute a certain percentage of the iterations through the test plan. Total executions cause the controller to stop executing after a certain number of executions have occurred. Like the Once Only Controller, this setting is reset when a parent Loop Controller restarts.
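For example, a Throughput Controller in percent-execution mode set to 25% inside a thread loop of 100 iterations would run its children on roughly 25 of those iterations, while one in total-executions mode set to 25 would stop running its children after 25 executions, regardless of how many iterations remain.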

Load Distribution (Transaction-1)


Load Distribution (Transaction-2)


Load Distribution (Transaction-3)


Load Distribution (Transaction-4)


Load Distribution (Transaction-5)


Sample output

Apache Tomcat Server

Tomcat is an HTTP server. It is also a servlet container that can execute Java Servlets and convert JavaServer Pages (JSP) and JavaServer Faces (JSF) into Java Servlets. Tomcat employs a hierarchical and modular architecture, as illustrated in the diagram below.

Apache Tomcat is a Java-capable HTTP server which can execute special Java programs known as Java Servlets and JavaServer Pages (JSP). It is the official Reference Implementation (RI) for the Java Servlet and JavaServer Pages (JSP) technologies. Tomcat is an open-source project under the Apache Software Foundation (which also provides the most used, open-source, industrial-strength Apache HTTP Server). The mother site for Tomcat is http://tomcat.apache.org. Alternatively, you can find Tomcat via the Apache mother site at http://www.apache.org.

Tomcat was originally written by James Duncan Davidson (then working at Sun) in 1998, based on an earlier Sun server called Java Web Server (JWS). It began at version 3.0, following the JSWDK 2.1 it replaced. Sun subsequently made Tomcat open source and donated it to Apache.

Tomcat is an HTTP application that runs over TCP/IP. In other words, the Tomcat server runs on a specific TCP port at a specific IP address. The default TCP port number for the HTTP protocol is 80, which is used for a production HTTP server. For a test HTTP server, you can choose any unused port number between 1024 and 65535.
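For example, with the out-of-the-box Tomcat configuration, which uses port 8080 for the HTTP connector, a locally running instance is reached at http://localhost:8080/.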

Architecture:

Serialization methods used in FLEX Protocol in LoadRunner 11.52

Recording settings

A. Use only a single protocol - FLEX
B. In the recording settings ensure the following options:

1. Use URL based recording mode
2. Under Flex Configuration check the box
– Do not serialize externalizable objects.
3. Set Port mapping as WinInet level.
C. Disable Flex RTMP node (Used during video streaming)

Serialization in Flex scripts
A. Use GraniteDS configuration:

Defines the server-side Data Service configuration. If you select this option, do not select Use Flex LCDS/BlazeDS jars to serialize the messages. Ensure that the granite-config.xml file matches the one deployed on the server.

B. Using the External Java Serializer:

You can use the Java classes from the Flex server to serialize AMF messages in your script. This process has been simplified so that you need to include the application JAR files only if the AMF objects implement an externalizable interface.

1. In the Recording Options > Flex > Externalizable Objects node, select Serialize objects using and select Custom Java Classes from the menu.

2. Add the relevant files by using the Add all classes in folder or Add JAR or Zip file buttons. Add the following files:

   1. For Adobe BlazeDS or Adobe LCDS, add the following JAR files:

      flex-messaging-common.jar

      flex-messaging-core.jar

   2. Regenerate the script and note any errors. Open the recording options dialog box using the Generation Options button and add the necessary application JAR files.

   3. Ensure that the added files exist in the same location both on the VuGen machine and on all load generators.

The limitations of the Java Serializer are: it supports JDK versions 1.6 and earlier; the supported servers are Adobe BlazeDS and Adobe LiveCycle DS; Microsoft .NET classes are not supported. During code generation VuGen performs a validity test of the request buffers by verifying that the buffer can be read and written using the provided JARs. Failure in this validity test indicates that the classes are incompatible with LoadRunner.

C. LoadRunner Serializer:

You can attempt to serialize externalizable objects using the LoadRunner serializer. Ensure that you have saved all open scripts, because this option may result in unexpected errors or invalid steps.

Tuesday, February 25, 2014

Using Vuser Functions in GUI Vuser Scripts

This section lists the Vuser functions that you can use to enhance your GUI user scripts.

declare_rendezvous: Declares a rendezvous.

declare_transaction: Declares a transaction.

end_transaction: Marks the end of a transaction for performance analysis.

error_message: Sends an error message to the Controller.

get_host_name: Returns the name of a load generator.

get_master_host_name: Returns the name of the Controller load generator.

lr_whoami: Returns information about the Vuser executing the script.

output_message: Sends a message to the Controller.

rendezvous: Sets a rendezvous point in a Vuser script.

start_transaction: Marks the beginning of a transaction for performance analysis.

user_data_point: Records a user-defined data sample.

lr_eval_string in load Runner

To correlate statements for protocols that do not have specific functions, you can use the C Vuser
correlation functions. These functions can be used for all C-type Vusers, to save a string to a
parameter and retrieve it when required.

lr_eval_string: Replaces all occurrences of a parameter with its current value.
lr_save_string: Saves a null-terminated string to a parameter.
lr_save_var: Saves a variable-length string to a parameter.

Using lr_eval_string:

In the following example, lr_eval_string replaces the parameter row_cnt with its current value. This value is sent to the Output window using lr_output_message.

lrd_stmt(Csr1, "select count(*) from employee", -1, 1 /*Deferred*/, ...);
lrd_bind_col(Csr1, 1, &COUNT_D1, 0, 0);
lrd_exec(Csr1, 0, 0, 0, 0, 0);
lrd_save_col(Csr1, 1, 1, 0, "row_cnt");
lrd_fetch(Csr1, 1, 1, 0, PrintRow2, 0);
lr_output_message("value: %s", lr_eval_string("The row count is: {row_cnt}"));

Removing Zero from the Date in VuGen LoadRunner

Enter a start date and an end date; the dates should not include a leading zero before the actual month and day values.

For example: Invalid date format: 04/27/2011
             Valid date format: 4/27/2011

Script:
//variable declaration
char startDate[20], endDate[20]; // large enough for "mm/dd/yyyy" plus the null terminator
int nonZeroDay, nonZeroMonth;

/*if you want to run the script for multiple iterations then you have to nullify
the concatenated string before starting a new iteration, otherwise it
will keep the old value*/
strcpy(startDate, "");
strcpy(endDate, "");

//original date, which contains a zero in the date
lr_save_datetime("Today's Date is: %m/%d/%Y", DATE_NOW, "normalDate");
lr_output_message(lr_eval_string("{normalDate}"));

//for generating non zero start date

//to assign the current date parts to a parameter
lr_save_datetime("%m", DATE_NOW, "month");
lr_save_datetime("%d", DATE_NOW, "day");
lr_save_datetime("%Y", DATE_NOW, "year");

//to remove zero from the string. e.g.getting '4' from '04'

nonZeroMonth = atoi(lr_eval_string("{month}"));
lr_save_int(nonZeroMonth, "month");

nonZeroDay = atoi(lr_eval_string("{day}"));
lr_save_int(nonZeroDay, "day");

//creating a date string using concatenation of nonzero date parts and separator '/'
strcat(startDate, lr_eval_string("{month}"));
strcat(startDate, "/");
strcat(startDate, lr_eval_string("{day}"));
strcat(startDate, "/");
strcat(startDate, lr_eval_string("{year}"));

lr_save_string(startDate, "p_StartDate");
lr_output_message("Start Date is: %s", lr_eval_string("{p_StartDate}"));

//for generating non zero end date by incrementing date by one day offset
lr_save_datetime("%m", DATE_NOW + ONE_DAY, "month");
lr_save_datetime("%d", DATE_NOW + ONE_DAY, "day");

lr_save_datetime("%Y", DATE_NOW + ONE_DAY, "year");
nonZeroMonth = atoi(lr_eval_string("{month}"));
lr_save_int(nonZeroMonth, "month");

nonZeroDay = atoi(lr_eval_string("{day}"));
lr_save_int(nonZeroDay, "day");

strcat(endDate, lr_eval_string("{month}"));
strcat(endDate, "/");
strcat(endDate, lr_eval_string("{day}"));
strcat(endDate, "/");
strcat(endDate, lr_eval_string("{year}"));

lr_save_string(endDate, "p_endDate");
lr_output_message("End Date is: %s", lr_eval_string("{p_endDate}"));


Output:


Today's Date is: 04/27/2011
Start Date is: 4/27/2011
End Date is: 4/28/2011

Scripting for Microsoft ASP.NET VIEWSTATE in VuGen LoadRunner

ASP.NET web applications maintain page state by passing a hidden __VIEWSTATE field encoded with the Base64 algorithm. The state of the page is restored based on the VIEWSTATE. If the value of a textbox has been changed on the client side, the VIEWSTATE will then carry the new value. (Obviously, this stage does not happen when the page is requested for the first time.)

Let's see how web application developers store objects in the VIEWSTATE:

// Storing a Customer object in view state.
Customer cust = new Customer ("Dilip", "Kutarmare");
ViewState["CurrentCustomer"] = cust;

// Retrieve the Customer object from view state.
Customer cust = (Customer) ViewState["CurrentCustomer"];

Now we will see how to capture and replace VIEWSTATE in a recorded script,

1. First set the maximum HTML parameter length high enough to hold the VIEWSTATE. You can adjust this number after examining the captured value:

web_set_max_html_param_len("50000");

2. Capture the VIEWSTATE value when it is returned from the server using the web_reg_save_param() function:

web_reg_save_param("MyViewState","LB=\"__VIEWSTATE\" value=\"","RB=\"","ORD=ALL",LAST);

3. Substitute the captured value with the parameter name in which you captured the VIEWSTATE:

"Name=__VIEWSTATE", "value={MyViewState}", ENDITEM,

4. Run the script in VuGen (with extended/advanced logging enabled) and verify that the correlation function captures the right value.
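Putting the steps together, a minimal, illustrative sketch might look like the following. The URL, step names and the extra form field are placeholders, not taken from a real recording, and "ORD=1" is used here for a single occurrence; with "ORD=ALL" the values are saved as MyViewState_1, MyViewState_2, and so on:

web_set_max_html_param_len("50000");

// Register the capture BEFORE the step whose response returns the VIEWSTATE
web_reg_save_param("MyViewState",
    "LB=\"__VIEWSTATE\" value=\"",
    "RB=\"",
    "ORD=1",
    LAST);

web_url("HomePage",
    "URL=http://myserver/myapp/default.aspx",
    "Mode=HTML",
    LAST);

// Use the captured value in the next POST instead of the recorded literal value
web_submit_data("default.aspx",
    "Action=http://myserver/myapp/default.aspx",
    "Method=POST",
    "Mode=HTML",
    ITEMDATA,
    "Name=__VIEWSTATE", "Value={MyViewState}", ENDITEM,
    "Name=Button1", "Value=Submit", ENDITEM,
    LAST);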

Creating Custom Request in VuGen LoadRunner

There are situations where you need to create a custom request so that you can manipulate the request according to your needs. Here we will see how to create a custom request and modify it to get the job done.

Basic: When we record a script for a web application, we generally choose HTML-based recording because it is easy and minimizes the correlations you have to handle in the script. LoadRunner generates a custom request when a request does not fall into the category of web_submit_data() or web_submit_form() requests.

Challenge: Suppose there is a request where you send some data items with it, and every time there is a random number of data items: the first time there may be four data items, the second time five data items in the same request. Here you cannot reuse the recorded request as-is, because the number of data items will be different every time. In the example below you can see that there are only four data items in the request; next time there may be five.
"Name=SearchResultsGrid_ctl04_ViewPackageDialogResult", "Value=291570", ENDITEM,"Name=SearchResultsGrid_ctl08_ViewPackageDialogResult", "Value=291566", ENDITEM,"Name=SearchResultsGrid_ctl11_ViewPackageDialogResult", "Value=291563", ENDITEM,"Name=SearchResultsGrid_ctl12_ViewPackageDialogResult", "Value=291562", ENDITEM,

Solution: Create a custom request and use a parameter that holds the manipulated string of data items. In that string you can save a different number of data items and change it as required.

Procedure: First record the script with HTML-based recording, with proper transaction names for every action. Save that script and then click Tools > Regenerate Script…





The popup shown below will appear; click Options…



It will show you the regenerate options. Click General > Recording.



    There are two options available, HTML-based script and URL-based script. Choose URL-based script.
    For URL-based script there is a button for advanced settings named URL Advanced; click on it.
    From the new popup select Use web_custom_request only.
    Save the settings and regenerate the script.
    In the regenerated script you can see that a new script is generated which contains only web_custom_request calls.
    Now find the particular request which you want to manipulate, with the help of the transaction names you gave while recording the script.

    Below is the HTML-based recorded request. You can see that there are four grid-result data items:
web_submit_data("managepackages.aspx_4",
"Action=http://{p_Url}/site/secure/reportcenter/managepackages.aspx",
"Method=POST",
"TargetFrame=",
"RecContentType=text/html",
"Referer=http://{p_Url}/site/secure/reportcenter/managepackages.aspx",
"Snapshot=t50.inf",
"Mode=HTML",
ITEMDATA,
"Name=SearchResultsGrid_ctl03_ViewPackageDialogResult", "Value={c_PackageID}", ENDITEM,
"Name=SearchResultsGrid_ctl04_ViewPackageDialogResult", "Value=291570", ENDITEM,
"Name=SearchResultsGrid_ctl08_ViewPackageDialogResult", "Value=291566", ENDITEM,
"Name=SearchResultsGrid_ctl11_ViewPackageDialogResult", "Value=291563", ENDITEM,
"Name=SearchResultsGrid_ctl12_ViewPackageDialogResult", "Value=291562", ENDITEM,
LAST);
Below is the URL-based custom request:
web_custom_request("managepackages.aspx_4",
"URL=http:// {p_Url}/site/secure/reportcenter/managepackages.aspx",
"Method=POST",
"Resource=0",
"RecContentType=text/html",
"Referer= http://{p_Url}/site/secure/reportcenter/managepackages.aspx",
"Snapshot=t135.inf",
"Mode=HTTP",
"Body=SearchResultsGrid_ctl03_ViewPackageDialogResult={c_PackageID}"
"{final_string}",
/*//below string is going to vary every time so we are going to manipulate it every time using string manipulation functions//

"&SearchResultsGrid_ctl04_ViewPackageDialogResult=291570"
"&SearchResultsGrid_ctl08_ViewPackageDialogResult=291566"
"&SearchResultsGrid_ctl11_ViewPackageDialogResult=291563"
"&SearchResultsGrid_ctl12_ViewPackageDialogResult=291562"
*/

LAST);


In the custom request you can build the final string every time using the string manipulation functions available in VuGen; an illustrative sketch follows the example value below.

final_string = "&SearchResultsGrid_ctl04_ViewPackageDialogResult=291570&SearchResultsGrid_ctl08_ViewPackageDialogResult=291566&SearchResultsGrid_ctl11_ViewPackageDialogResult=291563&SearchResultsGrid_ctl12_ViewPackageDialogResult=291562"
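For example, assuming the data item values were captured earlier with a web_reg_save_param() call that used "ORD=ALL" into a parameter array named gridValues (the array name and the field-name pattern below are illustrative only, since the real control IDs are not sequential), the string could be built inside Action() roughly like this:

int i, count;
char item[512];
char final_string[4096] = "";

// Number of values captured by web_reg_save_param(..., "ORD=ALL", ...)
count = lr_paramarr_len("gridValues");

for (i = 1; i <= count; i++) {
    // Append one "&name=value" pair per captured value
    sprintf(item, "&SearchResultsGrid_ctl%02d_ViewPackageDialogResult=%s",
            i, lr_paramarr_idx("gridValues", i));
    strcat(final_string, item);
}

// Make the built string available to web_custom_request as {final_string}
lr_save_string(final_string, "final_string");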

Monday, February 24, 2014

Capturing the response data size limit in JMeter

View Results Tree is a very useful listener, especially when you are debugging your test plan. You can review the response data and ensure that your test plan works as expected. But sometimes you can see a message at the beginning of the response data like "Response too large to be displayed. Size: 313674 > Max: 204800, Start of message:".



And unfortunately you are not able to see the whole response data. By default JMeter shows only the first 200 KB of response data. To resolve this problem, edit the file jmeter.properties and uncomment the line:

view.results.tree.max_size=0

Setting the value to 0 disables the response data size check entirely.
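Alternatively, if you only want to raise the limit rather than disable the check, you can set the same property to a larger value in bytes, for example view.results.tree.max_size=1048576 for roughly 1 MB.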


After you have saved jmeter.properties and restarted JMeter, you will not see this message again.


How to save test results in JMeter


There is one more way to do it - using the command-line options.

So, if you want to run the test plan from the file test.jmx and save the test results to the file log.jtl, use the following command:

jmeter -n -t test.jmx -l log.jtl

You can configure how the result data is saved by editing the file jmeter.properties (section "Results file configuration"). For example, if you want to save the results in CSV format, set the following property:

jmeter.save.saveservice.output_format=csv

This way is more powerful than the one described in the previous part (Simple Data Writer) because it allows you to set some additional options like the CSV delimiter, timestamp format, etc. Sometimes it can be very helpful.
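For example, the same section of jmeter.properties also lets you change the CSV delimiter with a property such as jmeter.save.saveservice.default_delimiter=; (shown here only as an illustration).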

Troubleshooting the Astra Load Test Monitor

How the Monitor Works

The monitor will run the script in the Astra LoadTest (ALT) application. It is running through perfex, and will spawn an mdrv process to execute the script. Perfex is required so that the mdrv process will be created with the credentials supplied in the ALT Recorder page.

The script is run in exactly the same way as it is run under Astra LoadTest in load mode. Astra Load Test will use the credentials defined in the Recorder page.

The Astra LoadTest log files are kept in the SiteScope\cache\tempbysize directory. The monitor parses this log file to get the transaction times and errors/warnings. By default, SiteScope saves only the log files of failed runs. If the run finished successfully, the log is not saved. However, if you want to keep all the log files, you can open the SiteScope/groups/master.config file and set the following entry to "true":

_astraKeepLogFiles=

How to Troubleshoot

1. Make sure you have Astra LoadTest version 5.4.3 or later. It must be installed on the same machine where SiteScope is running.

2. When setting up the script initially in ALT, you must define at least one Transaction while recording (with Start and Stop points), or the monitor will fail in SiteScope.

3. The credentials (in Recorder page) have to be the same as the user that was logged in when Astra LoadTest was installed.

4. Check that you can run the script in Astra LoadTest directly. If the script runs in Astra Load Test but times out in SiteScope, then use WinInet mode to execute the script instead. In order to switch between WinInet and Turbo replay you have to go to the RTS – Browser Configuration tab and check/uncheck the Use Turboload Technology option.

5. If you are running SiteScope service as a LocalSystem Account, add the user credentials located in the Astra LoadTest Runners Preference page. This is in the help manual at:

http://your SiteScope server name and port/SiteScope/docs/recorderPrefs.htm

SiteScope should run as an interactive service in the context of the LocalSystem account. It can be changed in Control Panel -> Services -> Sitescope -> Properties -> Log on. Select the Allow interaction with desktop checkbox.

6. Every time SiteScope restarts, it cleans up the oldest Astra LoadTest log files in the SiteScope/cache/tempbysize directory according to the max size limit defined for this directory in the SiteScope/groups/master.config file. The entry is:

_tempDirMaxSize=10000 (the size is in KB)

In SiteScope 7.8.1.0, there is no default value for that entry, and that will cause all the files in the directory to be deleted. There is a default of 10000 in SiteScope 7.8.1.2 and later. If the entry is not there, then add the above entry to the master.config file, save it, and restart SiteScope. After that, the log files should be kept until the directory gets to the limit defined.

Note – Astra Load Test has been deprecated as of SiteScope 7.9.0.0 or later. It is only available in these versions if SiteScope has been upgraded from a pre-7.9.0.0 version.

JMETER TIPS AND TRICKS

1. How to add JMeter requests manually.
2. How to capture the traffic for a web front end that's heavy in AJAX. There's a lot going on in the background that I don't get to see by just making a request to the page. I could use a third-party app or an add-on like Firebug to grab all the traffic, but when that functionality is built into JMeter I'd just be making work for myself.

I have the bare bones of my test plan in place. In the Test Plan node I create a User Defined Variable called serverName with the relevant target server name as the value. I'm ready to record the transactions for a short user story. I create a blank Transaction Controller to group the page requests (this relates to another tip I'll post later), then in the Proxy Server config I set the Target Controller to this empty one.

Changing the proxy server and port in the browser, I point the browser at the required location and record the transaction. For the next step I create another empty Transaction Controller, update the Target Controller of the Proxy Server, and continue with the user story, repeating this process for each step.

This can seem cumbersome but it lightens the workload. It also allows you to go back and edit the request, adding whatever assertions you feel may be warranted while the step is clear in your mind. An alternative would be to set the Grouping on the Proxy Server to Put each group in a new controller and later copy and paste these grouped requests into transaction controllers separately. Either way would work. Just don't get too far ahead of yourself and record a large slice of your user story without analysing it.

Now if I go back and look at an HTTP request created during the user story, the Server Name or IP field has been automatically filled with the appropriate variable, i.e. ${serverName}. Any other variables I set are also automagically entered into their appropriate fields.

Creating Regular Expressions in JMeter

Creating Regular Expressions

Regular expressions can be a bit daunting at first, so here is a suggested procedure to help create them.

Getting Started
Ensure you have an exact copy of the source document that you want to operate on.

One way to do this is to attach a Save Responses to a file Listener to the Sampler, and run the test to create a copy of the resource.

You can then use the HTTP Sampler with the “file:” protocol to retrieve the page at any time.

Extract the section you are interested in
Find the variable part that you want to extract (in the file, or in the Tree View Listener), and start by using that as the regular expression.

For example, suppose you want to find the value from the following snippet:

input type="hidden" name="secret" value="CAFEBABE.12345(3)"

Start with exactly that as the regular expression, and check that it works, for example in the Tree View Listener Regex tester panel.

If not, examine the expression for any meta-characters (characters that have a special meaning in regexes). In this case, the only special characters are the "." and the parentheses "(" and ")". These need to be escaped by being prefixed with "\". We now have:

input type="hidden" name="secret" value="CAFEBABE\.12345\(3\)"

This should now match the whole phrase.

The next stage is to tell the regex processor which part of the section you want to use. This is easy, just enclose the characters in parentheses.

So assume you want to match just

CAFEBABE.12345

Your regular expression then becomes:

input type="hidden" name="secret" value="(CAFEBABE\.12345)\(3\)"

Fix the expression so it matches variable text

Of course, the previous expression is not much use, as the text it matches is already known. We want to match variable text, so we have to replace the fixed characters of the target text with meta-character expressions that will match all possible variations of the target.

This requires knowledge (or a good guess) as to what possible characters can be used in the target.

In this case, it looks as if there is a string of hex characters followed by a number, followed by a digit in parentheses.

A digit is easy: that's "\d", so a number is "\d+", where the "+" means one or more of the previous item.

A hex character can be represented by "[0-9A-Fa-f]". The enclosing "[ ]" creates a "character class" which matches one of the specified characters.
The "-" here means a range.

So putting that together, we get:

input type="hidden" name="secret" value="([0-9A-Fa-f]+\.\d+)\(\d\)"

Now suppose we wanted to match the whole of the value. We could move the closing capture parenthesis to the end of the value.

This would be suitable if there were other values with different patterns that we did not want to match.

However, if we just wanted to capture the quoted value, then we could use:

input type="hidden" name="secret" value="([^"]+)"

The character class in this case is "[^"]", which means any character except a double quote. The "+" suffix means we want as many as there are in succession.

This will take us up to the end of the value.

If the expression matches more than once in the source, you have a choice:

* specify which match to use
* extend the expression to include more context at the beginning or end so the match is unique

Installing Apache JMeter in Windows XP

Prerequisites for installing Apache JMeter

Apache JMeter is a Java-based utility, so a Java runtime must already be installed in order to use it. After confirming that our Windows XP system has a proper Java runtime installed, we can proceed with installing Apache JMeter.

To check whether we have a Java runtime installed, we can follow one of the methods given here. The easiest is to open a command prompt and type the command:

java -version

If the command works, Java is installed and you will also know the version of Java.

Apache JMeter runs on a fully compliant JVM 1.4 or higher. (It is found that some early versions of Java 1.5 below update 7 do not recognize some JVM switches and hence the jmeter.bat script file needs some changes to run JMeter, described at the end of post).

Steps for installing Apache JMeter


The Apache JMeter home page contains links for downloads. When we visit the Apache JMeter home page, the Apache Jakarta project's bird-feather symbol can be seen along with an introduction to JMeter.






As shown in the image below we have to select the Download Releases link from the home page.



The download page presents many options for download. Usually the suggested mirror is the best one, but we can choose another if the suggested mirror gives an error or seems slow. For just using the tool we need only the binary release. The screen below shows the version current at the time of writing this article. The TGZ version of the binary is the smallest in size; click on that link and save the download when prompted by the browser.

Alternatively you can click on the ZIP version given below and use any standard UNZIP utility to extract the files.



I saved the TGZ file and extracted the contents using the 7-Zip utility for Windows. After extracting the TGZ file we get a folder named jakarta-jmeter-n.n.n, where n.n.n is the version number we downloaded.

The executable script for Apache JMeter is located in the bin directory.



The screen below shows all the contents of the jmeter bin folder.

Starting Apache JMeter tool


The executable script for the Windows platform is jmeter.bat; for Linux systems it is jmeter.sh.
These scripts are used to start JMeter in GUI mode. Let us double-click the jmeter.bat script to start the tool.



Double-clicking the jmeter.bat file will start a command prompt and the JMeter utility in GUI mode, as shown below. The command prompt is tied to the GUI and hence should not be closed; if the command prompt is closed, the GUI will terminate. We can keep the command prompt minimized while working with the JMeter GUI. The command prompt is useful for viewing any JMeter exceptions that may occur.



We saw how to download, install and start the Apache JMeter utility.

NOTE: Apache JMeter can run on any fully compliant Java version above Java 1.4. However, some early versions of Java 1.5 (below update 7) do not recognize some JVM switches, so the jmeter.bat script file needs a small change to run JMeter. If you happen to have an early Java 1.5 version below update 7, you may get an error when double-clicking the jmeter.bat file. The error can be fixed by commenting out the line "set DUMP=-XX:+HeapDumpOnOutOfMemoryError" in the jmeter.bat file: open jmeter.bat by right-clicking it and choosing the Edit option, add REM before that line, and you are ready to go.

Reference:

1) http://jakarta.apache.org/jmeter/usermanual/get-started.html#install

Thursday, February 20, 2014

How to run the JMeter for more than 300 users?

Run JMeter in non-GUI distributed mode to test with more than 300 users. The practical limit depends on the RAM size; on a system with 3.2 GB you can run up to about 1000 users.

Step 1:
Configure JMeter for distributed testing.

Step 2:
Place your script and CSV files in the /bin folder of JMeter.

Step 3:
Open a command prompt, go to the JMeter bin directory and run the following command:

jmeter -n -t testplan.jmx -R ip1,ip2 -l resultfilename
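For example, with two remote load generators at the (illustrative) addresses 192.168.1.10 and 192.168.1.11, the command would be:

jmeter -n -t testplan.jmx -R 192.168.1.10,192.168.1.11 -l result.jtl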

How to handle sessions in JMeter?

To handle sessions in JMeter you need to do the following steps:

Add -> Pre Processors -> HTTP URL Re-writing Modifier for the sampler where your session variable is present.

Then add your variable name in the Session Argument Name field and check Cache Session Id.



Software Testing terms A-Z

Acceptance Test. Formal tests (often performed by a customer) to determine whether or not a system has satisfied predetermined acceptance criteria. These tests are often used to enable the customer (either internal or external) to determine whether or not to accept a system.

Accessibility testing. Testing that determines if software will be usable by people with disabilities.

Ad Hoc Testing. Testing carried out using no recognised test case design technique.

Algorithm verification testing. A software development and test phase focused on the validation and tuning of key algorithms using an iterative experimentation process.

Alpha Testing. Testing of a software product or system conducted at the developer's site by the customer.

Aperiodic bug. A transient bug that becomes active periodically (sometimes referred to as an intermittent bug). Because of their short duration, transient faults are often detected through the anomalies that result from their propagation.


Artistic testing. Also known as Exploratory testing.

Assertion Testing. (NBS) A dynamic analysis technique which inserts assertions about the relationship between program variables into the program code. The truth of the assertions is determined as the program executes.


Automated Testing. Software testing which is assisted with software technology that does not require operator (tester) input, analysis, or evaluation.


Audit.
  (1) An independent examination of a work product or set of work products to assess compliance with specifications, standards, contractual agreements, or other criteria. (IEEE)

  (2) To conduct an independent review and examination of system records and activities in order to test the adequacy and effectiveness of data security and data integrity procedures, to ensure compliance with established policy and operational procedures, and to recommend any necessary changes. (ANSI)

ABEND Abnormal END. A mainframe term for a program crash. It is always associated with a failure code, known as an ABEND code.
Background testing. The execution of normal functional testing while the SUT is exercised by a realistic work load. This work load is being processed "in the background" as far as the functional testing is concerned.

Bandwidth testing. Testing a site with a variety of link speeds, both fast (internally connected LAN) and slow (externally, through a proxy or firewall, and over a modem); sometimes called slow link testing if the organization typically tests with a faster link internally (in that case, they are doing a specific pass for the slower line speed only).

Basis path testing. Identifying tests based on flow and paths of the program or system.

Basis test set. A set of test cases derived from the code logic which ensure that 100% branch coverage is achieved.


Bug: glitch, error, goof, slip, fault, blunder, boner, howler, oversight, botch, delusion, elision.

Beta Testing. Testing conducted at one or more customer sites by the end-user of a delivered software product or system.

Benchmarks. Programs that provide performance comparison for software, hardware, and systems.

Benchmarking is a specific type of performance test with the purpose of determining performance baselines for comparison.

Big-bang testing. Integration testing where no incremental testing takes place prior to all the system's components being combined to form the system.

Black box testing. A testing method where the application under test is viewed as a black box and the internal behavior of the program is completely ignored. Testing occurs based upon the external specifications. Also known as behavioral testing, since only the external behaviors of the program are evaluated and analyzed.

Blink testing. What you do in blink testing is plunge yourself into an ocean of data-- far too much data to comprehend. And then you comprehend it. Don't know how to do that? Yes you do. But you may not realize that you know how.

Bottom-up Testing. An approach to integration testing where the lowest level components are tested first, then used to facilitate the testing of higher level components. The process is repeated until the component at the top of the hierarchy is tested.

Boundary Value Analysis (BVA). BVA is different from equivalence partitioning in that it focuses on "corner cases" or values that are usually out of range as defined by the specification. This means that if a function expects all values in the range of negative 100 to positive 1000, test inputs would include negative 101 and positive 1001. BVA attempts to derive the values often used as a technique for stress, load or volume testing. This type of validation is usually performed after positive functional validation has completed (successfully) using requirements specifications and user documentation.

Branch Coverage Testing. - Verify each branch has true and false outcomes at least once.

Breadth test. - A test suite that exercises the full scope of a system from a top-down perspective, but does not test any aspect in detail

BRS - Business Requirement Specification
Capability Maturity Model (CMM). - A description of the stages through which software organizations evolve as they define, implement, measure, control and improve their software processes. The model is a guide for selecting the process improvement strategies by facilitating the determination of current process capabilities and identification of the issues most critical to software quality and process improvement.

Capture-replay tools. - Tools that give testers the ability to move some GUI testing away from manual execution by "capturing" mouse clicks and keyboard strokes into scripts, and then "replaying" that script to re-create the same sequence of inputs and responses on subsequent tests.

Cause Effect Graphing. (1) [NBS] Test data selection technique. The input and output domains are partitioned into classes and analysis is performed to determine which input classes cause which effect. A minimal set of inputs is chosen which will cover the entire effect set. (2)A systematic method of generating test cases representing combinations of conditions. See: testing, functional.

Clean test. A test whose primary purpose is validation; that is, tests designed to demonstrate the software`s correct working.

Clear-box testing. See White-box testing.

Code audit. An independent review of source code by a person, team, or tool to verify compliance with software design documentation and programming standards. Correctness and efficiency may also be evaluated. (IEEE)

Code Inspection. A manual [formal] testing [error detection] technique where the programmer reads source code, statement by statement, to a group who ask questions analyzing the program logic, analyzing the code with respect to a checklist of historically common programming errors, and analyzing its compliance with coding standards. Contrast with code audit, code review, code walkthrough. This technique can also be applied to other software and configuration items. [G.Myers/NBS] Syn: Fagan Inspection

Code Walkthrough. A manual testing [error detection] technique where program [source code] logic [structure] is traced manually [mentally] by a group with a small set of test cases, while the state of program variables is manually monitored, to analyze the programmer's logic and assumptions.[G.Myers/NBS]

Coexistence Testing. Coexistence isn't enough. It also depends on load order, how virtual space is mapped at the moment, hardware and software configurations, and the history of what took place hours or days before. It's probably an exponentially hard problem rather than a square-law problem. [from Quality Is Not The Goal. By Boris Beizer, Ph. D.]

Comparison testing. Comparing software strengths and weaknesses to competing products

Compatibility bug A revision to the framework breaks a previously working feature: a new feature is inconsistent with an old feature, or a new feature breaks an unchanged application rebuilt with the new framework code.

Compatibility Testing. The process of determining the ability of two or more systems to exchange information. In a situation where the developed software replaces an already working program, an investigation should be conducted to assess possible comparability problems between the new software and other programs or systems.

Composability testing -testing the ability of the interface to let users do more complex tasks by combining different sequences of simpler, easy-to-learn tasks. [Timothy Dyck, 'Easy' and other lies, eWEEK April 28, 2003]

Condition Coverage. A test coverage criteria requiring enough test cases such that each condition in a decision takes on all possible outcomes at least once, and each point of entry to a program or subroutine is invoked at least once. Contrast with branch coverage, decision coverage, multiple condition coverage, path coverage, statement coverage.[G.Myers]

Configuration. The functional and/or physical characteristics of hardware/software as set forth in technical documentation and achieved in a product. (MIL-STD-973)

Configuration control. An element of configuration management, consisting of the evaluation, coordination, approval or disapproval, and implementation of changes to configuration items after formal establishment of their configuration identification. (IEEE)

Conformance directed testing. Testing that seeks to establish conformance to requirements or specification. [R. V. Binder, 1999]

Cookbook scenario. A test scenario description that provides complete, step-by-step details about how the scenario should be performed. It leaves nothing to chance. [Scott Loveland, 2005]

Coverage analysis. Determining and assessing measures associated with the invocation of program structural elements to determine the adequacy of a test run. Coverage analysis is useful when attempting to execute each statement, branch, path, or iterative structure in a program. Tools that capture this data and provide reports summarizing relevant information have this feature. (NIST)

CRUD Testing. Build CRUD matrix and test all object creation, reads, updates, and deletion. [William E. Lewis, 2000]

Data-Driven testing. An automation approach in which the navigation and functionality of the test script is directed through external data; this approach separates test and control data from the test script. [Daniel J. Mosley, 2002]

Data flow testing. Testing in which test cases are designed based on variable usage within the code.[BCS]

Database testing. Check the integrity of database field values. [William E. Lewis, 2000]

Defect. The difference between the functional specification (including user documentation) and actual program text (source code and data). Often reported as problem and stored in defect-tracking and problem-management system

Defect. Also called a fault or a bug, a defect is an incorrect part of code that is caused by an error. An error of commission causes a defect of wrong or extra code. An error of omission results in a defect of missing code. A defect may cause one or more failures.[Robert M. Poston, 1996.]

Defect. A flaw in the software with potential to cause a failure.. [Systematic Software Testing by Rick D. Craig and Stefan P. Jaskiel 2002]

Defect Age. A measurement that describes the period of time from the introduction of a defect until its discovery. . [Systematic Software Testing by Rick D. Craig and Stefan P. Jaskiel 2002]

Defect Density. A metric that compares the number of defects to a measure of size (e.g., defects per KLOC). Often used as a measure of defect quality. [Systematic Software Testing by Rick D. Craig and Stefan P. Jaskiel 2002]

Defect Discovery Rate. A metric describing the number of defects discovered over a specified period of time, usually displayed in graphical form. [Systematic Software Testing by Rick D. Craig and Stefan P. Jaskiel 2002]

Defect Removal Efficiency (DRE). A measure of the number of defects discovered in an activity versus the number that could have been found. Often used as a measure of test effectiveness. [Systematic Software Testing by Rick D. Craig and Stefan P. Jaskiel 2002]

Defect Seeding. The process of intentionally adding known defects to those already in a computer program for the purpose of monitoring the rate of detection and removal, and estimating the number of defects still remaining. Also called Error Seeding. [Systematic Software Testing by Rick D. Craig and Stefan P. Jaskiel 2002]

Defect Masked. An existing defect that hasn't yet caused a failure because another defect has prevented that part of the code from being executed. [Systematic Software Testing by Rick D. Craig and Stefan P. Jaskiel 2002]

Depth test. A test case, that exercises some part of a system to a significant level of detail. [Dorothy Graham, 1999]

Decision Coverage. A test coverage criteria requiring enough test cases such that each decision has a true and false result at least once, and that each statement is executed at least once. Syn: branch coverage. Contrast with condition coverage, multiple condition coverage, path coverage, statement coverage.[G.Myers]

Design-based testing. Designing tests based on objectives derived from the architectural or detail design of the software (e.g., tests that execute specific invocation paths or probe the worst case behaviour of algorithms). [BCS

Dirty testing. Negative testing. [Beizer]

Dynamic testing. Testing, based on specific test cases, by execution of the test object or running programs [Tim Koomen, 1999]
End-to-End testing. Similar to system testing; the 'macro' end of the test scale; involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

Equivalence Partitioning: An approach where classes of inputs are categorized for product or function validation. This usually does not include combinations of input, but rather a single state value based by class. For example, with a given function there may be several classes of input that may be used for positive testing. If function expects an integer and receives an integer as input, this would be considered as positive test assertion. On the other hand, if a character or any other input class other than integer is provided, this would be considered a negative test assertion or condition.

Error: An error is a mistake of commission or omission that a person makes. An error causes a defect. In software development one error may cause one or more defects in requirements, designs, programs, or tests.[Robert M. Poston, 1996.]

Errors: The amount by which a result is incorrect. Mistakes are usually a result of a human action. Human mistakes (errors) often result in faults contained in the source code, specification, documentation, or other product deliverable. Once a fault is encountered, the end result will be a program failure. The failure usually has some margin of error, either high, medium, or low.

Error Guessing: Another common approach to black-box validation. Black-box testing is when everything else other than the source code may be used for testing. This is the most common approach to testing. Error guessing is when random inputs or conditions are used for testing. Random in this case includes a value either produced by a computerized random number generator, or an ad hoc value or test conditions provided by engineer.

Error guessing. A test case design technique where the experience of the tester is used to postulate what faults exist, and to design tests specially to expose them [from BS7925-1]

Error seeding. The purposeful introduction of faults into a program to test effectiveness of a test suite or other quality assurance program. [R. V. Binder, 1999]

Exception Testing. Identify error messages and exception handling processes an conditions that trigger them. [William E. Lewis, 2000]

Exhaustive Testing.(NBS) Executing the program with all possible combinations of values for program variables. Feasible only for small, simple programs.

Exploratory Testing: An interactive process of concurrent product exploration, test design, and test execution. The heart of exploratory testing can be stated simply: The outcome of this test influences the design of the next test. [James Bach]
Failure: A failure is a deviation from expectations exhibited by software and observed as a set of symptoms by a tester or user. A failure is caused by one or more defects. The Causal Trail. A person makes an error that causes a defect that causes a failure.[Robert M. Poston, 1996]

Fix testing. Rerunning of a test that previously found the bug in order to see if a supplied fix works. [Scott Loveland, 2005]

Follow-up testing. We vary a test that yielded a less-than-spectacular failure. We vary the operation, data, or environment, asking whether the underlying fault in the code can yield a more serious failure or a failure under a broader range of circumstances. [Measuring the Effectiveness of Software Testers, Cem Kaner, STAR East 2003]

Formal Testing. (IEEE) Testing conducted in accordance with test plans and procedures that have been reviewed and approved by a customer, user, or designated level of management. Antonym: informal testing.

Framework scenario. A test scenario definition that provides only enough high-level information to remind the tester of everything that needs to be covered for that scenario. The description captures the activity's essence, but trusts the tester to work through the specific steps required. [Scott Loveland, 2005]

Free Form Testing. Ad hoc or brainstorming using intuition to define test cases. [William E. Lewis, 2000]

Functional Decomposition Approach. An automation method in which the test cases are reduced to fundamental tasks, navigation, functional tests, data verification, and return navigation; also known as the Framework Driven Approach. [Daniel J. Mosley, 2002]

Functional testing Application of test data derived from the specified functional requirements without regard to the final program structure. Also known as black-box testing.

Function verification test (FVT). Testing of a complete, yet containable functional area or component within the overall software package. Normally occurs immediately after Unit test. Also known as Integration test. [Scott Loveland, 2005]
Gray box testing. Tests involving inputs and outputs, but test design is educated by information about the code or the program operation of a kind that would normally be out of scope of view of the tester.[Cem Kaner]
Gray box testing. Test designed based on the knowledge of algorithm, internal states, architectures, or other high -level descriptions of the program behavior. [Doug Hoffman]
 
Gray box testing. Examines the activity of back-end components during test case execution. Two types of problems that can be encountered during gray-box testing are:
  • A component encounters a failure of some kind, causing the operation to be aborted. The user interface will typically indicate that an error has occurred.
  • The test executes in full, but the content of the results is incorrect. Somewhere in the system, a component processed data incorrectly, causing the error in the results.
[Elfriede Dustin. "Quality Web Systems: Performance, Security & Usability."]
Grooved Tests. Tests that simply repeat the same activity against a target product from cycle to cycle. [Scott Loveland, 2005]
 
 Heuristic Testing: An approach to test design that employs heuristics to enable rapid development of test cases.[James Bach]
 
 
High-level tests. These tests involve testing whole, complete products [Kit, 1995]
 
HTML validation testing. Specific to Web testing. This certifies that the HTML meets specifications and internal coding standards.
 
W3C Markup Validation Service, a free service that checks Web documents in formats like HTML and XHTML for conformance to W3C Recommendations and other standards.
 
Incremental integration testing. Incremental integration testing - continuous testing of an application as new functionality is added; requires that various aspects of an application's functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed; done by programmers or by testers.

Inspection. A formal evaluation technique in which software requirements, design, or code are examined in detail by person or group other than the author to detect faults, violations of development standards, and other problems [IEEE94]. A quality improvement process for written material that consists of two dominant components: product (document) improvement and process improvement (document production and inspection).

Integration. The process of combining software components or hardware components, or both, into an overall system.

Integration testing - testing of combined parts of an application to determine if they function together correctly. The 'parts' can be code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.

Integration Testing. Testing conducted after unit and feature testing. The intent is to expose faults in the interactions between software modules and functions. Either top-down or bottom-up approaches can be used. A bottom-up method is preferred, since it leads to earlier unit testing (step-level integration). This method is contrary to the big-bang approach, where all source modules are combined and tested in one step. The big-bang approach to integration should be discouraged.
Interface Tests. Programs that provide test facilities for external interfaces and function calls. Simulation is often used to test external interfaces that currently may not be available for testing or are difficult to control. For example, hardware resources such as hard disks and memory may be difficult to control. Therefore, simulation can provide the characteristics or behaviors for a specific function.
Internationalization testing (I18N) - testing related to handling foreign text and data within the program. This would include sorting, importing and exporting text and data, correct handling of currency and date and time formats, string parsing, upper and lower case handling, and so forth. [Clinton De Young, 2003].

Interoperability Testing. Measures the ability of your software to communicate across the network on multiple machines from multiple vendors, each of whom may have interpreted a design specification critical to your success differently.

Inter-operability Testing. True inter-operability testing concerns testing for unforeseen interactions with other packages with which your software has no direct connection. In some quarters, inter-operability testing labor equals all other testing combined. This is the kind of testing that I say shouldn't be done because it can't be done. [from Quality Is Not The Goal. By Boris Beizer, Ph.D.]

Install/uninstall testing. Testing of full, partial, or upgrade install/uninstall processes.
Key Word-Driven Testing. The approach developed by Carl Nagle of the SAS Institute that is offered as freeware on the Web; Key Word-Driven Testing is an enhancement to the data-driven methodology. [Daniel J. Mosley, 2002]
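
To make the idea concrete, here is a minimal keyword-driven sketch in Python; the keywords, actions, and test rows are invented for illustration and are not Carl Nagle's actual framework. Each test step is a data row of (keyword, arguments), and a dispatch table maps keywords to actions.

# Hypothetical keyword-driven sketch: test steps are data rows,
# a dispatch table maps each keyword to an action.

def open_app(name):
    print(f"opening {name}")

def enter_text(field, value):
    print(f"typing '{value}' into {field}")

def verify_title(expected):
    actual = "Login Page"            # stand-in for a real lookup
    assert actual == expected, f"expected {expected!r}, got {actual!r}"

KEYWORDS = {
    "OpenApp": open_app,
    "EnterText": enter_text,
    "VerifyTitle": verify_title,
}

test_steps = [
    ("OpenApp", ["MyApp"]),
    ("EnterText", ["username", "admin"]),
    ("VerifyTitle", ["Login Page"]),
]

for keyword, args in test_steps:
    KEYWORDS[keyword](*args)         # the driver interprets each row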

Latent bug. A bug that has been dormant (unobserved) in two or more releases. [R. V. Binder, 1999]

Lateral testing. A test design technique, based on lateral thinking principles, to identify faults. [Dorothy Graham, 1999]

Limits testing. See Boundary Condition testing.

Load testing. Testing an application under heavy loads, such as testing of a web site under a range of loads to determine at what point the system's response time degrades or fails.

Load stress test. A test designed to determine how heavy a load the application can handle.

Load-stability test. A test designed to determine whether a Web application will remain serviceable over an extended time span.

Load isolation test. The workload for this type of test is designed to contain only the subset of test cases that caused the problem in previous testing.

Longevity testing. See Reliability testing.

Long-haul Testing. See Reliability testing.
Master Test Planning. An activity undertaken to orchestrate the testing effort across levels and organizations.[Systematic Software Testing by Rick D. Craig and Stefan P. Jaskiel 2002]

Memory leak testing. Testing the server components to see if memory is not properly referenced and released, which can lead to instability and the product's crashing.

Model-Based Testing. Model-based testing takes the application and models it so that each state of each input, output, form, and function is represented. Since this is based on detailing the various states of objects and data, this type of testing is very similar to charting out states. Many times a tool is used to automatically go through all the states in the model and try different inputs in each to ensure that they all interact correctly.[Lydia Ash, 2003]

Monkey Testing. (smart monkey testing) Input are generated from probability distributions that reflect actual expected usage statistics -- e.g., from user profiles. There are different levels of IQ in smart monkey testing. In the simplest, each input is considered independent of the other inputs. That is, a given test requires an input vector with five components. In low IQ testing, these would be generated independently. In high IQ monkey testing, the correlation (e.g., the covariance) between these input distribution is taken into account. In all branches of smart monkey testing, the input is considered as a single event.[Visual Test 6 Bible by Thomas R. Arnold, 1998 ]

Monkey Testing. (brilliant monkey testing) The inputs are created from a stochastic regular expression or stochastic finite-state machine model of user behavior. That is, not only are the values determined by probability distributions, but the sequence of values and the sequence of states in which the input provider goes is driven by specified probabilities.[Visual Test 6 Bible by Thomas R. Arnold, 1998 ]

Monkey Testing. (dumb-monkey testing) Inputs are generated from a uniform probability distribution without regard to the actual usage statistics. [Visual Test 6 Bible by Thomas R. Arnold, 1998]
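
As a hedged illustration of the dumb-monkey idea, the Python sketch below feeds uniformly random strings to a hypothetical parse_age function and only flags unexpected crashes; none of the names come from the quoted source.

import random
import string

def parse_age(text):
    """Hypothetical unit under test: convert text to an age in years."""
    value = int(text)
    if not 0 <= value <= 150:
        raise ValueError("age out of range")
    return value

random.seed(42)
for _ in range(1000):
    # Uniformly random strings, ignoring how real users actually type ages.
    candidate = "".join(random.choices(string.printable, k=random.randint(0, 8)))
    try:
        parse_age(candidate)
    except (ValueError, TypeError):
        pass                                  # expected rejections are fine
    except Exception as exc:
        print(f"unexpected crash on {candidate!r}: {exc!r}")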

Maximum Simultaneous Connection testing. This is a test performed to determine the number of connections which the firewall or Web server is capable of handling.

Migration Testing. Testing to see if the customer will be able to transition smoothly from a prior version of the software to a new one. [Scott Loveland, 2005]

Mutation testing. A testing strategy where small variations to a program are inserted (a mutant), followed by execution of an existing test suite. If the test suite detects the mutant, the mutant is 'retired.' If undetected, the test suite must be revised. [R. V. Binder, 1999]
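
A hand-rolled Python illustration of the strategy (real mutation tools generate mutants automatically; the function, the single seeded mutant, and the tiny test suite below are invented):

def is_adult(age):
    """Original code."""
    return age >= 18

def is_adult_mutant(age):
    """Mutant: '>=' changed to '>'."""
    return age > 18

def run_suite(fn):
    """Return True if every check in the (small) test suite passes."""
    return all([fn(17) is False, fn(18) is True, fn(30) is True])

assert run_suite(is_adult)                # the suite passes on the original code
if run_suite(is_adult_mutant):
    print("mutant survived -> the test suite should be revised")
else:
    print("mutant killed -> the suite detected the seeded change")

If the fn(18) boundary check were removed from the suite, this mutant would survive and the suite would need to be strengthened.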

Multiple Condition Coverage. A test coverage criterion which requires enough test cases such that all possible combinations of condition outcomes in each decision, and all points of entry, are invoked at least once. [G. Myers] Contrast with branch coverage, condition coverage, decision coverage, path coverage, statement coverage.
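
A small Python illustration with an invented two-condition decision; multiple condition coverage demands all four true/false combinations, whereas branch coverage alone could already be satisfied by the first and last rows.

def free_shipping(total, is_member):
    """Hypothetical decision with two conditions."""
    if total > 100 or is_member:
        return True
    return False

# All 2^2 combinations of condition outcomes, as multiple condition
# coverage demands.
cases = [
    (150, True,  True),   # total>100 T, is_member T
    (150, False, True),   # total>100 T, is_member F
    (50,  True,  True),   # total>100 F, is_member T
    (50,  False, False),  # total>100 F, is_member F
]
for total, member, expected in cases:
    assert free_shipping(total, member) == expected
print("all condition combinations exercised")
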
Negative test. A test whose primary purpose is falsification; that is, tests designed to break the software. [B. Beizer, 1995]

Noncritical code analysis. Examines software elements that are not designated safety-critical and ensures that these elements do not cause a hazard. (IEEE)
Orthogonal array testing: A technique that can be used to reduce the number of combinations and provide maximum coverage with a minimum number of test cases. Note that it is an old and proven technique: the orthogonal array was introduced by Plackett and Burman in 1946 and was implemented by G. Taguchi, 1987.

Orthogonal array testing: Mathematical technique to determine which variations of parameters need to be tested. [William E. Lewis, 2000]
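
As an illustration of both definitions, the classic L4(2^3) orthogonal array covers every pair of values for three two-level parameters in four test cases instead of all eight combinations; the parameter names in this Python sketch are invented.

# L4 orthogonal array: 3 parameters, 2 levels each, 4 rows.
# Every pair of columns contains each of the four value pairs exactly once.
browsers = ["Chrome", "Firefox"]       # hypothetical parameter 1
oses     = ["Windows", "Linux"]        # hypothetical parameter 2
networks = ["WiFi", "3G"]              # hypothetical parameter 3

L4 = [
    (0, 0, 0),
    (0, 1, 1),
    (1, 0, 1),
    (1, 1, 0),
]

for b, o, n in L4:
    print(browsers[b], oses[o], networks[n])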

Oracle. Test Oracle: a mechanism to produce the predicted outcomes to compare with the actual outcomes of the software under test [from BS7925-1]
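
A minimal Python sketch of an oracle, assuming a hypothetical my_sort function under test: Python's built-in sorted acts as the trusted mechanism that produces the predicted outcomes to compare with the actual ones.

import random

def my_sort(items):
    """Hypothetical function under test (a naive insertion sort)."""
    result = []
    for item in items:
        pos = 0
        while pos < len(result) and result[pos] < item:
            pos += 1
        result.insert(pos, item)
    return result

random.seed(1)
for _ in range(100):
    data = [random.randint(-50, 50) for _ in range(random.randint(0, 20))]
    expected = sorted(data)      # the oracle produces the predicted outcome
    actual = my_sort(data)       # the software under test produces the actual one
    assert actual == expected, f"mismatch for {data}"
print("actual outcomes matched the oracle on all generated cases")
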
Parallel Testing. Testing a new or an alternate data processing system with the same source data that is used in another system. The other system is considered as the standard of comparison. Syn: parallel run.[ISO]

Penetration testing. The process of attacking a host from outside to ascertain remote security vulnerabilities. Other responsibilities of a professional penetration tester include enforcing countermeasures for certain types of known attacks and vulnerabilities.

Performance Testing. Testing conducted to evaluate the compliance of a system or component with specific performance requirements [BS7925-1]

Performance testing can be undertaken to: 1) show that the system meets specified performance objectives, 2) tune the system, 3) determine the factors in hardware or software that limit the system's performance, and 4) project the system's future load-handling capacity in order to schedule its replacements. [Software System Testing and Quality Assurance. Beizer, 1984, p. 256]

Postmortem. Self-analysis of interim or fully completed testing activities with the goal of creating improvements to be used in the future. [Scott Loveland, 2005]

Preventive Testing Building test cases based upon the requirements specification prior to the creation of the code, with the express purpose of validating the requirements [Systematic Software Testing by Rick D. Craig and Stefan P. Jaskiel 2002]

Prior Defect History Testing. Test cases are created or rerun for every defect found in prior tests of the system. [William E. Lewis, 2000]
Qualification Testing. (IEEE) Formal testing, usually conducted by the developer for the consumer, to demonstrate that the software meets its specified requirements. See: acceptance testing.

Quality. The degree to which a program possesses a desired combination of attributes that enable it to perform its specified end use.

Quality Assurance (QA) Consists of planning, coordinating and other strategic activities associated with measuring product quality against external requirements and specifications (process-related activities).

Quality Control (QC) Consists of monitoring, controlling and other tactical activities associated with the measurement of product quality goals.

Our definition of Quality: Achieving the target (not conformance to requirements as used by many authors) & minimizing the variability of the system under test
Race condition defect. Many concurrent defects result from data-race conditions. A data-race condition may be defined as two accesses to a shared variable, at least one of which is a write, with no mechanism used by either to prevent simultaneous access. However, not all race conditions are defects.
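
A minimal Python sketch of such a defect, with invented names: two threads perform an unsynchronized read-modify-write on a shared counter, and the tiny sleep only widens the race window so the lost updates are easy to reproduce.

import threading
import time

counter = 0          # shared variable, written by both threads, no lock

def deposit(times):
    global counter
    for _ in range(times):
        value = counter          # read
        time.sleep(0.0001)       # widen the race window for demonstration
        counter = value + 1      # write - another thread may have written meanwhile

threads = [threading.Thread(target=deposit, args=(100,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(f"expected 200, got {counter}")   # lost updates make this less than 200
# Guarding the read-modify-write with a threading.Lock removes the race.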

Recovery testing. Testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.

Regression Testing. Testing conducted for the purpose of evaluating whether or not a change to the system (all CM items) has introduced a new failure. Regression testing is often accomplished through the construction, execution and analysis of product and system tests.

Regression Testing. Testing that is performed after making a functional improvement or repair to the program. Its purpose is to determine if the change has regressed other aspects of the program. [Glenford J. Myers, 1979]

Reengineering. The process of examining and altering an existing system to reconstitute it in a new form. May include reverse engineering (analyzing a system and producing a representation at a higher level of abstraction, such as design from code), restructuring (transforming a system from one representation to another at the same level of abstraction), redocumentation (analyzing a system and producing user and support documentation), forward engineering (using software products derived from an existing system, together with new requirements, to produce a new system), and translation (transforming source code from one language to another or from one version of a language to another).

Reference testing. A way of deriving expected outcomes by manually validating a set of actual outcomes. A less rigorous alternative to predicting expected outcomes in advance of test execution. [Dorothy Graham, 1999]
Reliability testing. Verify the probability of failure free operation of a computer program in a specified environment for a specified time.

Reliability of an object is defined as the probability that it will not fail under specified conditions, over a period of time. The specified conditions are usually taken to be fixed, while the time is taken as an independent variable. Thus reliability is often written R(t) as a function of time t, the probability that the object will not fail within time t.

Any computer user would probably agree that most software is flawed, and the evidence for this is that it does fail. All software flaws are designed in -- the software does not break, rather it was always broken. But unless conditions are right to excite the flaw, it will go unnoticed -- the software will appear to work properly. [Professor Dick Hamlet. Ph.D.]

Range Testing. For each input identifies the range over which the system behavior should be the same. [William E. Lewis, 2000]

Risk-Based Testing: Any testing organized to explore specific product risks.[James Bach website]

Risk management. An organized process to identify what can go wrong, to quantify and assess associated risks, and to implement/control the appropriate approach for preventing or handling each risk identified.

Robust test. A test that compares a small amount of information, so that unexpected side effects are less likely to affect whether the test passes or fails. [Dorothy Graham, 1999]
Sanity Testing - typically an initial testing effort to determine if a new software version is performing well enough to accept it for a major testing effort. For example, if the new software is often crashing systems, bogging down systems to a crawl, or destroying databases, the software may not be in a 'sane' enough condition to warrant further testing in its current state.
Scalability testing is a subtype of performance test where performance requirements for response time, throughput, and/or utilization are tested as load on the SUT is increased over time. [Load Testing Terminology by Scott Stirling ]

Scenario-Based Testing. Scenario-based testing is one way to document the software specifications and requirements for a project. Scenario-based testing takes each user scenario and develops tests that verify that a given scenario works. Scenarios focus on the main goals and requirements. If the scenario is able to flow from the beginning to the end, then it passes.[Lydia Ash, 2003]

(SDLC) System Development Life Cycle - the phases used to develop, maintain, and replace information systems. Typical phases in the SDLC are: Initiation Phase, Planning Phase, Functional Design Phase, System Design Phase, Development Phase, Integration and Testing Phase, Installation and Acceptance Phase, and Maintenance Phase.
The V-model talks about SDLC (System Development Life Cycle) phases and maps them to various test levels

Security Audit. An examination (often by third parties) of a server's security controls and, possibly, its disaster recovery mechanisms.

Sensitive test. A test that compares a large amount of information, so that it is more likely to detect unexpected differences between the actual and expected outcomes of the test. [Dorothy Graham, 1999]

Server log testing. Examining the server logs after particular actions or at regular intervals to determine if there are problems or errors generated or if the server is entering a faulty state.

Service test. Test software fixes, both individually and bundled together, for software that is already in use by customers. [Scott Loveland, 2005]

Skim Testing A testing technique used to determine the fitness of a new build or release of an AUT to undergo further, more thorough testing. In essence, a "pretest" activity that could form one of the acceptance criteria for receiving the AUT for testing [Testing IT: An Off-the-Shelf Software Testing Process by John Watkins]

Smoke test describes an initial set of tests that determine if a new version of an application performs well enough for further testing. [Louise Tamres, 2002]

Sniff test. A quick check to see if any major abnormalities are evident in the software.[Scott Loveland, 2005 ]
Specification-based test. A test, whose inputs are derived from a specification.

Spike testing. Testing performance or recovery behavior when the system under test (SUT) is stressed with a sudden and sharp increase in load; this should be considered a type of load test. [Load Testing Terminology by Scott Stirling]

STEP (Systematic Test and Evaluation Process) Software Quality Engineering's copyrighted testing methodology.

Stability testing. Testing the ability of the software to continue to function, over time and over its full range of use, without failing or causing failure. (see also Reliability testing)

State-based testing. Testing with test cases developed by modeling the system under test as a state machine. [R. V. Binder, 1999]

State Transition Testing. A technique in which the states of a system are first identified, and then test cases are written to test the triggers that cause a transition from one state to another. [William E. Lewis, 2000]
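
A small Python sketch of the technique, using an invented order state machine: each test case fires a trigger and checks the resulting state, including one deliberately illegal transition.

class Order:
    """Hypothetical system under test with a simple state machine."""
    TRANSITIONS = {
        ("new", "pay"):         "paid",
        ("paid", "ship"):       "shipped",
        ("shipped", "deliver"): "delivered",
    }

    def __init__(self):
        self.state = "new"

    def trigger(self, event):
        key = (self.state, event)
        if key not in self.TRANSITIONS:
            raise ValueError(f"illegal transition {key}")
        self.state = self.TRANSITIONS[key]

# Test cases: one per valid transition, plus one deliberately invalid trigger.
order = Order()
order.trigger("pay")
assert order.state == "paid"
order.trigger("ship")
assert order.state == "shipped"
order.trigger("deliver")
assert order.state == "delivered"

bad = Order()
try:
    bad.trigger("ship")              # 'ship' is not allowed from 'new'
    assert False, "illegal transition was accepted"
except ValueError:
    pass
print("all state transition checks passed")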

Static testing. Source code analysis. Analysis of source code to expose potential defects.

Statistical testing. A test case design technique in which a model of the statistical distribution of the input is used to construct representative test cases. [BCS]

Stealth bug. A bug that removes information useful for its diagnosis and correction. [R. V. Binder, 1999]

Storage test. Studies how memory and space are used by the program, either in resident memory or on disk. If there are limits on these amounts, storage tests attempt to prove that the program will exceed them. [Cem Kaner, 1999, p55]

Streamable Test cases. Test cases which are able to run together as part of a large group. [Scott Loveland, 2005]

Stress / Load / Volume test. Tests that provide a high degree of activity, either using boundary conditions as inputs or multiple copies of a program executing in parallel as examples.

Stress Test. A stress test is designed to determine how heavy a load the Web application can handle. A huge load is generated as quickly as possible in order to stress the application to its limit. The time between transactions is minimized in order to intensify the load on the application, and the time the users would need for interacting with their Web browsers is ignored. A stress test helps determine, for example, the maximum number of requests a Web application can handle in a specific period of time, and at what point the application will overload and break down.[Load Testing by S. Asbock]
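
A rough Python sketch of that idea is below; the URL, request count, and worker count are placeholders, and a real stress test would normally use a dedicated tool such as JMeter. Requests are fired back to back with zero think time.

import concurrent.futures
import time
import urllib.request

URL = "http://localhost:8080/"     # placeholder target, adjust before use
REQUESTS = 200                     # placeholder volume
WORKERS = 50                       # many concurrent users, zero think time

def hit(_):
    start = time.time()
    try:
        with urllib.request.urlopen(URL, timeout=10) as resp:
            resp.read()
            return resp.status, time.time() - start
    except Exception:
        return "error", time.time() - start

with concurrent.futures.ThreadPoolExecutor(max_workers=WORKERS) as pool:
    results = list(pool.map(hit, range(REQUESTS)))

errors = sum(1 for status, _ in results if status != 200)
print(f"{REQUESTS} requests, {errors} failures, "
      f"max response time {max(t for _, t in results):.2f}s")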

Structural Testing. (1)(IEEE) Testing that takes into account the internal mechanism [structure] of a system or component. Types include branch testing, path testing, statement testing. (2) Testing to ensure each program statement is made to execute during testing and that each program statement performs its intended function. Contrast with functional testing. Syn: white-box testing, glass-box testing, logic driven testing.

System testing Black-box type testing that is based on overall requirements specifications; covers all combined parts of a system.

System verification test. (SVT). Testing of an entire software package for the first time, with all components working together to deliver the project's intended purpose on supported hardware platforms. [Scott Loveland, 2005]
Table testing. Test access, security, and data integrity of table entries. [William E. Lewis, 2000]
Test Artifact Set. Captures and presents information related to the tests performed.

Test Bed. An environment containing the hardware, instrumentation, simulators, software tools, and other support elements needed to conduct a test [IEEE 610].

Test Case. A set of test inputs, executions, and expected results developed for a particular objective.

Test conditions. The set of circumstances that a test invokes. [Daniel J. Mosley, 2002]

Test Coverage The degree to which a given test or set of tests addresses all specified test cases for a given system or component.

Test Criteria. Decision rules used to determine whether software item or software feature passes or fails a test.

Test data. The actual (sets of) values used in the test or that are necessary to execute the test. Test data instantiates the condition being tested (as input or as pre-existing data) and is used to verify that a specific requirement has been successfully implemented (comparing actual results to the expected results). [Daniel J. Mosley, 2002]

Test Documentation. (IEEE) Documentation describing plans for, or results of, the testing of a system or component. Types include test case specification, test incident report, test log, test plan, test procedure, test report.
Test Driver A software module or application used to invoke a test item and, often, provide test inputs (data), control and monitor execution. A test driver automates the execution of test procedures.

Test-driven development (TDD). An evolutionary approach to development that combines test-first development, where you write a test before you write just enough production code to fulfill that test, and refactoring. [Beck 2003; Astels 2003]
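
A tiny illustration of the test-first cycle using Python's unittest, with an invented requirement: the test class is written first and fails ("red") until just enough production code is added to make it pass ("green"); refactoring would follow.

import unittest

# Step 2: just enough production code to satisfy the tests written in step 1.
def fizzbuzz(n):
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

# Step 1: the tests are written first and fail until fizzbuzz() exists.
class FizzBuzzTest(unittest.TestCase):
    def test_multiples_of_three(self):
        self.assertEqual(fizzbuzz(9), "Fizz")

    def test_multiples_of_five(self):
        self.assertEqual(fizzbuzz(10), "Buzz")

    def test_multiples_of_both(self):
        self.assertEqual(fizzbuzz(15), "FizzBuzz")

    def test_other_numbers(self):
        self.assertEqual(fizzbuzz(7), "7")

if __name__ == "__main__":
    unittest.main()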

Test environment bug. Bug class indicating that some test environment is found to be insufficient to support some test type or inconsistent with its specification. [DOD STD 2167A].

Test Harness A system of test drivers and other tools to support test execution (e.g., stubs, executable test cases, and test drivers). See: test driver.

Test Inputs. Artifacts from work processes that are used to identify and define actions that occur during testing. These artifacts may come from development processes that are external to the test group. Examples include Functional Requirements Specifications and Design Specifications. They may also be derived from previous testing phases and passed to subsequent testing activities.[Daniel J. Mosley, 2002]

Test Idea: an idea for testing something.[James Bach]

Test Item. A software item which is the object of testing.[IEEE]

Test Log A chronological record of all relevant details about the execution of a test.[IEEE]

Test logistics: the set of ideas that guide the application of resources to fulfilling the test strategy.[James Bach]

Test Plan. A high-level document that defines a testing project so that it can be properly measured and controlled. It defines the test strategy and organized elements of the test life cycle, including resource requirements, project schedule, and test requirements
Test Procedure. A document, providing detailed instructions for the [manual] execution of one or more test cases. [BS7925-1] Often called - a manual test script.

Test Results. Data captured during the execution of test and used in calculating the different key measures of testing.[Daniel J. Mosley, 2002]

Test Rig A flexible combination of hardware, software, data, and interconnectivity that can be configured by the Test Team to simulate a variety of different Live Environments on which an AUT can be delivered.[Testing IT: An Off-the-Shelf Software Testing Process by John Watkins ]

Test Script. The computer readable instructions that automate the execution of a test procedure (or portion of a test procedure). Test scripts may be created (recorded) or automatically generated using test automation tools, programmed using a programming language, or created by a combination of recording, generating, and programming. [Daniel J. Mosley, 2002]

Test strategy. Describes the general approach and objectives of the test activities. [Daniel J. Mosley, 2002]
Test Status. The assessment of the result of running tests on software.

Test Stub A dummy software component or object used (during development and testing) to simulate the behaviour of a real component. The stub typically provides test output.
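
A minimal Python sketch of a stub, with all names invented: a canned payment-gateway stand-in lets a hypothetical checkout function be exercised without the real component, and also records what it was asked to do.

class PaymentGatewayStub:
    """Dummy component: always approves and records what it was asked to charge."""
    def __init__(self):
        self.charges = []

    def charge(self, amount):
        self.charges.append(amount)
        return {"approved": True, "id": "stub-0001"}   # canned test output

def checkout(cart_total, gateway):
    """Hypothetical unit under test that depends on a payment gateway."""
    if cart_total <= 0:
        raise ValueError("nothing to pay")
    receipt = gateway.charge(cart_total)
    return receipt["approved"]

stub = PaymentGatewayStub()
assert checkout(49.99, stub) is True
assert stub.charges == [49.99]       # the stub also lets us inspect the call
print("checkout exercised against the stub")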

Test Suites A test suite consists of multiple test cases (procedures and data) that are combined and often managed by a test harness.

Test technique: test method; a heuristic or algorithm for designing and/or executing a test; a recipe for a test. [James Bach]

Test Tree. A physical implementation of Test Suite. [Dorothy Graham, 1999]

Testability. Attributes of software that bear on the effort needed for validating the modified software [ISO 8402]

Testability Hooks. Those functions, integrated into the software, that can be invoked through primarily undocumented interfaces to drive specific processing which would otherwise be difficult to exercise. [Scott Loveland, 2005]

Testing. The execution of tests with the intent of proving that the system and application under test does or does not perform according to the requirements specification.

(TPI) Test Process Improvement. A method for baselining testing processes and identifying process improvement opportunities, using a static model developed by Martin Pol and Tim Koomen.
Test Suite. The set of tests that when executed instantiate a test scenario.[Daniel J. Mosley, 2002]

Test Workspace. Private areas where testers can install and test code in accordance with the project's adopted standards in relative isolation from the developers.[Daniel J. Mosley, 2002]
Thread Testing. A testing technique used to test the business functionality or business logic of the AUT in an end-to-end manner, in much the same way a User or an operator might interact with the system during its normal use.[Testing IT: An Off-the-Shelf Software Testing Process by John Watkins ]

Timing and Serialization Problems. A class of software defect, usually in multithreaded code, in which two or more tasks attempt to alter a shared software resource without properly coordinating their actions. Also known as Race Conditions.[Scott Loveland, 2005]

Transient bug. A bug which is evident for a short period of time. see aperiodic bug. [Peter Farrell-Vinay 2008]

Translation testing. See internationalization testing.

Thrasher. A type of program used to test for data integrity errors on mainframe systems. The name is derived from the first such program, which deliberately generated memory thrashing (the overuse of large amounts of memory, leading to heavy paging or swapping) while monitoring for corruption. [Scott Loveland, 2005]

Unit Testing. Testing performed to isolate and expose faults and failures as soon as the source code is available, regardless of the external interfaces that may be required. Oftentimes, the detailed design and requirements documents are used as a basis to compare how and what the unit is able to perform. White and black-box testing methods are combined during unit testing.

Usability testing. Testing for 'user-friendliness'. Clearly this is subjective, and will depend on the targeted end-user or customer.
Validation. The comparison between the actual characteristics of something (e.g. a product of a software project) and the expected characteristics. Validation is checking that you have built the right system.

Variance. A variance is an observable and measurable difference between an actual result and an expected result.

Verification. The comparison between the actual characteristics of something (e.g. a product of a software project) and the specified characteristics. Verification is checking that we have built the system right.

Volume testing. Testing where the system is subjected to large volumes of data.[BS7925-1]
Walkthrough. In the most usual form of the term, a walkthrough is a step-by-step simulation of the execution of a procedure, as when walking through code line by line with an imagined set of inputs. The term has been extended to the review of material that is not procedural, such as data descriptions, reference manuals, specifications, etc.

White Box Testing (glass-box). Testing done under a structural testing strategy that requires complete access to the object's structure, that is, the source code. [B. Beizer, 1995 p8]