Monday 29 April 2024

How to Convert Postman Collections to JMeter Scripts

Are you looking to transition your API collection from Postman to Apache JMeter for performance testing? Whether you prefer an online solution or are comfortable with offline setup, we've got you covered. In this guide, we'll explore two methods to seamlessly convert your Postman collections into JMeter scripts.

Export Postman Collection:
  1. Open Postman and navigate to the collection you want to export.
  2. Click on the three-dot menu next to the collection name and select Export.
  3. In the export dialog, choose the format v2.1 or v2.0 (either should work, but v2.1 is preferred for the latest collections).
  4. Click Export to save the collection as a JSON file.

Option 1: Loadium Application (Online)

Loadium, a versatile load-testing platform that supports JMeter, Gatling, and Selenium tests, offers a convenient online way to convert Postman collections to JMX projects.

Here's how it works:

1. Visit Loadium's website and sign up for a trial account.
2. Once logged in, navigate to the "More Options" section and select "Convert to JMX."
3. Upload your Postman collection.
4. Follow the on-screen prompts to initiate the conversion process.
5. Download the generated JMX file.
6. Open Apache JMeter and import the downloaded JMX file to begin your performance testing.

Option 2: postman2jmx (Offline)

If you prefer an offline approach and are comfortable with a bit of setup, postman2jmx, available on GitHub, provides an excellent solution for converting Postman collections to JMeter scripts.
  1. Typically, you would run a command like java -jar postman-to-jmx-converter.jar -i postman_collection.json -o jmeter_script.jmx.
  2. Replace postman_collection.json with the path to your exported Postman collection and jmeter_script.jmx with the desired output file name.

Here's how to set it up on macOS:

1. Install Homebrew, Maven, and Java if not already installed.
2. Clone the postman2jmx repository from GitHub.
3. Build postman2jmx using Maven.
4. Navigate to the "target" folder in the terminal.
5. Convert your Postman collection to a JMX file using the provided command.
6. Open the converted JMX file in Apache JMeter to proceed with your performance testing.

Happy testing!

Sunday 28 April 2024

Implementing Kerberos Authentication with LoadRunner | Kerberos Authentication

Kerberos is a network authentication protocol designed to provide strong authentication for client/server applications. It is commonly used in enterprise environments to authenticate users and services over an untrusted network, such as the internet. Kerberos was developed at MIT in the 1980s as part of Project Athena, and it is named after the three-headed dog from Greek mythology.

Key Concepts:

  1. Key Distribution Center (KDC):

    • The KDC is the heart of a Kerberos setup, responsible for authenticating users and services.
    • It consists of two main parts: the Authentication Server (AS) and the Ticket Granting Server (TGS).
    • The AS handles the initial authentication of users, while the TGS issues service tickets for authenticated users to access services.
  2. Tickets:

    • Ticket-Granting Ticket (TGT): When a user authenticates with the AS, they receive a TGT, which allows them to request service tickets from the TGS.
    • Service Ticket: Once a user has a TGT, they can request a service ticket from the TGS for a specific service. The service ticket is presented to the service to gain access.
  3. Authenticator:

    • An authenticator is a piece of encrypted data sent alongside a ticket to verify the user's identity.
  4. Session Keys:

    • Kerberos uses session keys for encrypted communication between the client, server, and KDC.

How Kerberos Works:

  1. User Authentication:

    • A user starts by logging in and requesting authentication from the AS.
    • The AS verifies the user's credentials and, if successful, provides the user with a TGT.
  2. Requesting a Service Ticket:

    • When the user wants to access a service, they present their TGT to the TGS and request a service ticket for the desired service.
    • If the TGS verifies the TGT, it issues a service ticket to the user.
  3. Accessing a Service:

    • The user then presents the service ticket to the desired service.
    • The service verifies the ticket and, if valid, grants the user access.
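The three-step flow above can be mimicked with a toy Python model. This is illustrative only: real Kerberos uses proper encryption, timestamps, and session keys rather than the simple keyed MAC below, and all names here are made up.

```python
import hmac
import hashlib
import os

def seal(key, msg):
    # Stand-in for Kerberos ticket encryption: a keyed MAC over the message.
    # Only the holder of `key` can produce or verify a valid ticket.
    return hmac.new(key, msg.encode(), hashlib.sha256).hexdigest()

# Long-term keys, shared out-of-band with the KDC
tgs_key = os.urandom(16)       # the TGS's key
service_key = os.urandom(16)   # the target service's key

# 1. User authentication: the AS issues a TGT sealed with the TGS key,
#    so the client can carry it but cannot forge or alter it.
tgt = seal(tgs_key, "user=alice")

# 2. Requesting a service ticket: the TGS verifies the TGT and issues
#    a service ticket sealed with the service's key.
assert tgt == seal(tgs_key, "user=alice")
service_ticket = seal(service_key, "user=alice;service=web")

# 3. Accessing a service: the service verifies the ticket and grants access.
assert service_ticket == seal(service_key, "user=alice;service=web")
print("access granted")
```

The key design point this illustrates is that the client never sees the TGS or service keys; it only relays sealed tickets between parties that do.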

Advantages of Kerberos:

  • Strong Authentication: Uses cryptographic techniques for secure authentication.
  • Single Sign-On (SSO): Once authenticated, users can access multiple services without re-entering credentials.
  • Secure Communication: Communication between components is encrypted using session keys.

Challenges:

  • Clock Synchronization: Kerberos relies on time-sensitive tickets, so all parties must have synchronized clocks.
  • Complex Configuration: Setting up and maintaining a Kerberos environment can be complex.

Implementing Kerberos Authentication with LoadRunner:

Kerberos authentication provides mutual authentication between users and services over a network. It consists of three key components:
  1. Client
  2. Key Distribution Center (KDC): Contains both the Authentication Server (AS) and Ticket Granting Server (TGS).
  3. Server

Kerberos Authentication Flow:

1. Client Request: The client sends a request with its username to the Authentication Server (AS) for a ticket-granting ticket (TGT).
2. Authentication Server Response: The AS sends back a session key and TGT based on the client’s information.
3. Ticket Granting Server (TGS): The client sends the TGT to the TGS, which creates a service ticket and sends it back to the client.
4. Client and Server Communication: The client uses the service ticket to authenticate with the server and establish a session.

Configuring the Krb5 Configuration File:
The `krb5.ini` configuration file contains detailed information about the Kerberos realm, KDC/AS addresses, keytab file, and default settings like encryption algorithms.

Keytab File: Contains Kerberos principal and encrypted keys derived from the Kerberos password.
Obtain this file from the Kerberos maintenance team or create it yourself.

Configuration Steps:
1. Place krb5.ini: Copy the `krb5.ini` file to the Windows folder (`C:\WINDOWS\` in Windows XP) or specify an environment variable `KRB5_CONFIG` with the file path.
2. Enable Integrated Authentication: In runtime settings, go to Preferences > Options and change "Enable integrated authentication" to "Yes".
3. Set User Credentials: At the start of the script, use the `web_set_user` function to specify login and password strings for web/proxy server authentication.
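As a reference point, a minimal `krb5.ini` might look like the sketch below. The realm name, KDC host, and encryption types are placeholders; the real values must come from your Kerberos maintenance team:

```
[libdefaults]
    default_realm = EXAMPLE.COM
    default_tkt_enctypes = aes256-cts-hmac-sha1-96
    default_tgs_enctypes = aes256-cts-hmac-sha1-96

[realms]
    EXAMPLE.COM = {
        kdc = kdc.example.com:88
    }

[domain_realm]
    .example.com = EXAMPLE.COM
```

Similarly, the `web_set_user` call from step 3 might look like `web_set_user("EXAMPLE\\jdoe", "password", "appserver.example.com:443");`, where the user, password, and host:port are all placeholders for your environment.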




Adjust these configurations as needed based on your specific application and LoadRunner versions.

Saturday 27 April 2024

Understanding the `vufd` Option in LoadRunner Enterprise (LRE) | LoadRunner vufd

What Is the `vufd` Option?

The `vufd` option, short for Virtual User Functional Data, is a set of parameters and settings in LoadRunner Enterprise that allows users to control how virtual users interact with an application during performance tests. Virtual users (VUs) simulate real users, and the `vufd` option helps manage their functional data, which includes the input and output data for these simulated users.

 Importance of `vufd` in Performance Testing:

1. Realistic User Simulation: By managing the functional data for virtual users, the `vufd` option helps simulate realistic user interactions with an application. This provides a more accurate representation of how the application will perform in real-world scenarios.

2. Test Data Management: Proper management of test data is crucial for accurate performance testing. The `vufd` option allows users to control the data used by virtual users, ensuring consistency and reliability in test results.

3. Enhanced Analysis: With the `vufd` option, testers can collect and analyze data from virtual users more effectively. This data can provide insights into application performance, identify bottlenecks, and guide optimization efforts.

How to Leverage the `vufd` Option:

1. Configure Virtual Users: Start by setting up virtual users with realistic profiles and behaviors. Use the `vufd` option to manage the functional data these users interact with during testing.

2. Use Parameterization: Parameterize the virtual user scripts to simulate different user scenarios and inputs. This allows you to test a variety of use cases and better understand application performance.

3. Monitor Performance: During testing, closely monitor the performance of the application as virtual users interact with it. Look for patterns in the data collected using the `vufd` option.

4. Analyze Results: After the test, analyze the data collected from virtual users. Identify any issues with application performance and use the insights to optimize the application for better user experiences.

5. Iterate and Improve: Based on the analysis, make improvements to the application and the test scripts. Repeat the process to continue refining performance and ensuring a high-quality user experience.

The `vufd` option in LoadRunner Enterprise is a key feature that enables testers to manage Virtual User Functional Data effectively. By leveraging this option, you can create realistic user simulations, manage test data, and gain valuable insights into application performance. Whether you are testing a web application, a mobile app, or an API, the `vufd` option can help you ensure your application performs optimally under varying levels of load.

How to Save the downloaded file to local system in load runner

In certain scenarios, downloading files becomes a requirement. Instead of needing to store the files on your local machine for later verification, you may want to measure the transaction time of the download process. To do this, you can capture data from the server response and save it to a file using standard file operations. Just be sure you know the type of file you will be downloading.

Here's an example of downloading a `.pdf` file and saving it to your local system:


```c
// Declare variables
long filePointer;
long fileSize;

// Create a file for writing (Windows path; note the escaped backslash)
filePointer = fopen("c:\\test_file.pdf", "wb");

// Increase the parameter size to accommodate the data
web_set_max_html_param_len("100000");

// Capture the entire server response body: empty left/right boundaries with Search=Body
web_reg_save_param("FILEDATA", "LB=", "RB=", "Search=Body", LAST);

// Start a transaction to measure download time
lr_start_transaction("file_download");

// HTTP request to download the .pdf file
web_url("getme.pdf",
    "URL=http://serverURL:port/app/resource/getme.pdf",
    "Resource=0",
    LAST);

// End the transaction as soon as the download completes,
// so the file I/O below is not counted in the measurement
lr_end_transaction("file_download", LR_AUTO);

// Retrieve the download size (note: this includes HTTP header bytes)
fileSize = web_get_int_property(HTTP_INFO_DOWNLOAD_SIZE);

// Write the captured data to the output file
fwrite(lr_eval_string("{FILEDATA}"), fileSize, 1, filePointer);

// Close the file
fclose(filePointer);
```

This code snippet demonstrates how to use file operations and LoadRunner functions to download a PDF file from a server, capture the download data, save it to a local file, and measure the transaction time. By following this process, you can efficiently analyze the download performance without needing to store the downloaded files on your machine for later verification.

Python Script for Dynamically Generating LoadRunner Transaction Names | Automate Transaction Names in LoadRunner

In performance testing and monitoring, it's crucial to understand how different parts of your application are performing under various conditions. Monitoring transactions around key sections of code, such as API calls and critical functions, helps identify performance bottlenecks and areas for optimization. LoadRunner provides powerful tools for performance testing, including functions like `lr_start_transaction` and `lr_end_transaction` that enable you to measure the execution time of specific code blocks. However, adding these functions manually can be a time-consuming and error-prone process. To streamline this task, I've developed a script that automatically inserts transaction monitoring statements around key function calls in a C file. This script simplifies the process and helps you achieve more efficient and consistent performance testing results. Let's explore how this script works.

```python
import re

# Paths to the recorded script and the output file (specific to this example)
input_path = 'C:\\Users\\easyloadrunner\\Downloads\\FeatureChange-V1\\Action.c'
output_path = 'C:\\Users\\easyloadrunner\\Downloads\\FeatureChange-V1\\ModifiedAction.C'

with open(input_path, 'r') as file:
    lines = file.read().split('\n')

captured_value = ""      # transaction name captured from web_custom_request
inside_request = False   # True between a request's opening line and its LAST);

with open(output_path, 'w') as output_file:
    for line in lines:
        new_line = line
        if "web_custom_request(" in line:
            # Capture the request name to use as the transaction name
            match = re.search(r'web_custom_request\("(.+?)"', line)
            if match:
                captured_value = match.group(1)
                inside_request = True
                new_line = 'lr_start_transaction("' + captured_value + '");\n' + line
        elif "LAST);" in line and inside_request:
            # Close the transaction only for the request we opened above,
            # not for every statement that happens to end with LAST);
            new_line = line + '\nlr_end_transaction("' + captured_value + '", LR_AUTO);'
            inside_request = False
        # Write the (possibly modified) line to the output file
        output_file.write(new_line + '\n')
```

BatchJob Performance Testing | Batch Job Performance Testing with JMeter

Batch job performance testing involves evaluating the efficiency, scalability, and reliability of batch processes. Start by setting clear goals and objectives for the testing, such as response times and throughput rates. Identify key batch jobs for testing based on business impact and usage patterns. Design test scenarios reflecting normal, peak, and edge cases, and prepare test data sets that resemble production data.

Automate test execution using tools like Apache JMeter and configure the tests to run batch jobs using appropriate samplers. Monitor key performance metrics such as response times, CPU and memory usage, and disk I/O. Conduct load and stress tests to evaluate performance under various conditions, identifying bottlenecks and areas for improvement.

Analyze test results, optimize batch jobs as needed, and re-test to verify improvements. Document findings and recommendations for stakeholders, and incorporate regular performance testing as part of continuous improvement to maintain efficient batch job performance.

Batch jobs are automated tasks that run without direct user involvement. They are vital in many sectors for operations such as data backup, processing, report creation, system upkeep, file management, automated testing, database administration, financial transactions, job scheduling, and data archiving.

Types of Batch Jobs:

1. Data Backup and Recovery Jobs: These tasks involve regularly backing up crucial data and databases to ensure data safety and restore points in case of data loss. For instance, a job might back up a company's customer data every night.

2. Data Processing and ETL Jobs: These jobs are used for data extraction, transformation, and loading tasks in data warehousing scenarios. For example, a job may extract data from various sources, transform it to a standard format, and load it into a data warehouse.

3. Report Generation Jobs: These generate complex reports during off-peak hours to minimize impact on system performance. For instance, a job might generate monthly sales reports for management overnight.

4. System Maintenance Jobs: These handle routine maintenance tasks like clearing temporary files and rotating log files. For example, a job might clean up old log files and free up disk space weekly.

5. File Processing Jobs: These tasks manage large file volumes, such as converting file formats and validating data. For instance, a job might convert image files from one format to another and validate data consistency.

6. Automated Testing Jobs: These run tests on applications automatically, ensuring they work as expected without constant user input. For example, a job might perform end-to-end tests on a web application every night.

7. Database Maintenance Jobs: These perform tasks like reindexing databases and checking data integrity. For instance, a job might run a routine maintenance check on a database to ensure optimal performance.

8. Batch Processing in Financial Systems: These jobs handle financial tasks such as payroll processing or generating financial reports. For example, a job might process employee payroll data and generate paychecks every two weeks.

9. Job Scheduling and Automation: These automate various tasks at specific times or intervals. For example, a job might automatically generate backups every evening at 11 p.m.

10. Data Archiving Jobs: These handle archiving old data to maintain optimal database performance and storage usage. For instance, a job might archive data older than five years to a secondary storage location.

Creating a Basic Batch Job using JMeter:

Example: A simple batch job might be a Python script that copies files from one folder to another. This operation might be scheduled to run every day at midnight, ensuring data consistency and security.

Testing the Batch Job with JMeter:
JMeter can be used to simulate and test the performance of batch jobs. By creating a thread group and adding an OS processor sampler, you can run the batch job command within JMeter. Adding listeners like Aggregate Graph and Summary Report allows you to evaluate performance metrics such as average response time and percentiles.




Metrics for Batch Processing Performance:
  • Throughput: Number of tasks or commands completed in a specified time period.
  • Latency: Time taken for a task or command to be submitted and completed.
  • Utilization: Percentage of available resources (e.g., CPU, memory, disk, network) used during the batch process.
  • Error Rate: Percentage of tasks or commands that fail or produce incorrect results.
  • Additional Metrics: May include cost, reliability, scalability, or security, depending on your objectives.

Data Collection and Analysis:

  • Logging: Record events and messages generated by the batch process in a file or database.
  • Monitoring: Observe and measure performance and status of the batch process and resources used in real time or periodically.
  • Profiling: Identify and measure time and resources consumed by each part or function of the batch process.
  • Auditing: Verify and validate output and results of the batch process against expected standards and criteria.

Use collected data to identify bottlenecks, errors, inefficiencies, or anomalies affecting performance and find root causes and solutions.

Optimizing and Improving Batch Processing Performance:
  • Parallelization: Divide the batch process into smaller, independent tasks or commands to run simultaneously on multiple processors or machines.
  • Scheduling: Determine optimal time and frequency to run the batch process based on demand, availability, priority, or dependency of tasks or commands.
  • Tuning: Adjust parameters, settings, or options of the batch process, resources, or environment to enhance performance and quality.
  •  Testing: Evaluate and compare performance and results before and after applying optimization techniques to ensure they meet expectations and requirements.
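The parallelization idea above can be sketched with Python's multiprocessing module. This is a toy example: `process_record` stands in for whatever per-record work the real batch job performs.

```python
from multiprocessing import Pool

def process_record(record):
    # Placeholder for the real per-record work (transform, load, etc.)
    return record * record

def run_batch(records, workers=4):
    # Fan the records out across a pool of worker processes
    with Pool(processes=workers) as pool:
        return pool.map(process_record, records)

if __name__ == "__main__":
    print(run_batch(range(10)))
```

Because the tasks are independent, throughput scales with the worker count until CPU, disk, or the downstream system becomes the bottleneck, which is exactly what the load tests above should reveal.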
Happy Testing!

Thursday 25 April 2024

How to execute python script in JMeter? | Python script with Apache JMeter

 You can execute Python scripts in JMeter by using the JSR223 Sampler, which supports scripting languages such as Groovy, JavaScript, and Python. The JSR223 Sampler is part of JMeter's core functionality and can be used to execute scripts within your JMeter test plan. Here's how you can execute a Python script in JMeter using the JSR223 Sampler:

1. Add a Thread Group: First, add a Thread Group to your test plan by right-clicking on the test plan and selecting *Add > Threads (Users) > Thread Group*. Configure the Thread Group settings as per your requirements.

2. Add a JSR223 Sampler: Within the Thread Group, add a JSR223 Sampler by right-clicking on the Thread Group and selecting *Add > Sampler > JSR223 Sampler*.

3. Configure the JSR223 Sampler:

    - Name: Give the sampler a descriptive name.

    - Language: Select "python" from the language dropdown.

    - Script: In the script box, enter the Python code you want to execute. You can either write your script directly in the script box or reference an external Python file.

4. Use External Python Files (Optional):

    - If you want to use an external Python file, you can import the file in the script box using the `import` statement.

    - Ensure the Python file is accessible from the JMeter script. You may need to adjust your classpath to include the directory where the file is located.

5. Add Listeners (Optional): You can add listeners like View Results Tree or Summary Report to view the output of the sampler.

6. Run the Test Plan: Execute the test plan by clicking on the green play button at the top of the JMeter interface. The Python script will be executed as part of the test plan.

Here is an example of how you might use the JSR223 Sampler to execute a Python script:


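A minimal sketch of such a script is shown below (the original example was not included here). Note that Python in the JSR223 Sampler runs under Jython, which requires the jython-standalone jar in JMeter's lib directory.

```python
# Jython code for the JSR223 Sampler's Script box
message = "Hello from the JSR223 Sampler"
total = sum(range(1, 6))  # a simple calculation: 1+2+3+4+5

print(message)              # goes to the JMeter console
print("Total = %d" % total)
```

Inside JMeter you would typically write to the `log` binding (e.g. `log.info(message)`) rather than `print`, so the output lands in jmeter.log.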
This script prints a message and performs a simple calculation. You can expand on this script to include more complex operations or integrate it with other parts of your JMeter test plan.

Method 2:

In JMeter, the OS Process Sampler is used to execute operating system commands, including scripts like Python scripts. You can use the OS Process Sampler to execute a Python script as part of your JMeter test plan. 

Here's how you can execute a Python script using the OS Process Sampler:
  • Add a Thread Group: First, add a Thread Group to your test plan by right-clicking on the test plan and selecting Add > Threads (Users) > Thread Group. Configure the Thread Group settings according to your needs.
  • Add an OS Process Sampler: Within the Thread Group, add an OS Process Sampler by right-clicking on the Thread Group and selecting Add > Sampler > OS Process Sampler.
  • Configure the OS Process Sampler:
    • Name: Give the sampler a descriptive name.
    • Command: Specify the command to execute the Python script. You will need to provide the path to the Python executable followed by the path to your Python script.
    • Arguments: If your Python script requires any command-line arguments, provide them here.
    • Working Directory: Set the working directory (optional), which is the directory where the script will execute.
  • For example, if you want to run a Python script called my_script.py located at /path/to/script/ using the Python interpreter located at /usr/bin/python3, your configuration would look like this:
    • Command: /usr/bin/python3
    • Arguments: /path/to/script/my_script.py
    • Working Directory: /path/to/script/ (optional)
  • Add Listeners (Optional): You can add listeners such as View Results Tree or Summary Report to view the output of the OS Process Sampler.
  • Run the Test Plan: Execute the test plan by clicking on the green play button at the top of the JMeter interface. The Python script will be executed as part of the test plan.
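For completeness, here is a minimal `my_script.py` that the sampler configuration above could invoke. The file name comes from the example; the script body itself is hypothetical:

```python
import sys

def main(argv):
    # Greet the (optional) first command-line argument
    name = argv[1] if len(argv) > 1 else "JMeter"
    print("Hello, %s! Python script executed via the OS Process Sampler." % name)
    return 0  # a zero return/exit code tells the sampler the command succeeded

if __name__ == "__main__":
    main(sys.argv)
```

The OS Process Sampler marks the sample as failed when the process exits non-zero (if "Check Return Code" is enabled), so returning meaningful exit codes from your script makes the JMeter results more useful.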

Happy Testing !!

Token Management in JMeter for Repeated Requests

Apache JMeter is a powerful Java application used for testing the performance and functional behavior of various applications and protocols. One common scenario in testing is the need to generate a token once and then reuse it for multiple requests. Here’s how you can efficiently manage tokens in JMeter:

Step 1: Generate and Extract Token
First, generate the token in one Thread Group and store it using a JSON Extractor. Be sure to configure the request with the necessary parameters such as host, path, grant-type, client-id, and client-secret. Then define the JSON Path expression to extract the token and store it in a variable, such as `access_token`.

Step 2: Store Token
Using a BeanShell Assertion, store the token with the `__setProperty()` function. JMeter properties are shared across all threads, which enables inter-thread communication and ensures the token is accessible from other Thread Groups. The syntax would be: `${__setProperty(access_token_new, ${access_token})}`.

Step 3: Reuse Token
In subsequent threads, configure requests to utilize the token. This is achieved by using the `property()` function: `${__property(access_token_new)}`.

Important Note:
- Ensure that the number of users/threads for token generation is set to 1, while for other requests, it can be 1 or more.
- Running the test plan will generate the token once, which can then be reused multiple times for other requests.
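Concretely, steps 2 and 3 reduce to two JMeter function calls, sketched below. Here `access_token` is the variable name from the JSON Extractor, and the Bearer header scheme is an assumption about the target API:

```
In the token Thread Group (1 user), e.g. in a BeanShell Assertion:
    ${__setProperty(access_token_new, ${access_token})}

In every other Thread Group, e.g. in an HTTP Header Manager:
    Authorization: Bearer ${__property(access_token_new)}
```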

Monday 8 April 2024

Export the comments for the transaction controller in JMeter to a CSV file

You can export the comments by following these steps:

1. Add a "Simple Data Writer" listener to your test plan. Right-click on the "Thread Group" or any other element in your test plan, then go to Add -> Listener -> Simple Data Writer.

2. Configure the "Simple Data Writer" listener to save the data in CSV format. In the listener settings, specify the filename with a .csv extension (e.g., comments.csv).

3. In the "Write results to file" section of the "Simple Data Writer" listener, select the checkboxes for the data you want to save. To include comments, check the "Save comments" box if your JMeter version provides one.

4. Run your test plan. Once the test execution is complete, the comments along with other data will be saved to the specified CSV file.

5. Open the CSV file using a text editor or spreadsheet application to view the comments.

This method allows you to export comments along with other data captured during the test execution.

Method 2:

You may not find a "Save comments" option in the "Simple Data Writer" listener directly. Instead, you can achieve this by adding a Beanshell PostProcessor to the Transaction Controller.

Here's how you can do it:

1. Add a "Transaction Controller" to your test plan.

2. Inside the Transaction Controller, add the elements you want to measure.

3. Add a "Beanshell PostProcessor" as a child of the Transaction Controller.

4. In the Beanshell PostProcessor, you can use the following script to save the comments to a CSV file:

```java
import java.io.FileWriter;
import java.io.PrintWriter;

String comment = prev.getSampleLabel(); // note: this returns the sample label, which this approach uses as the comment text

try {
    FileWriter fileWriter = new FileWriter("comments.csv", true);
    PrintWriter printWriter = new PrintWriter(fileWriter);
    printWriter.println(comment);
    printWriter.close();
} catch (Exception e) {
    log.error("Error while writing to file: " + e.toString());
}
```

Replace "comments.csv" with the desired filename for your CSV file.

5. Run your test plan. After execution, the comments will be saved to the specified CSV file.

This script extracts the sample label of each result under the Transaction Controller (used here as the comment text) and appends it to a CSV file.