Tuesday, April 30, 2024

Converting Epoch (Unix) Time to Date/Time Using Excel for JMeter Results

In the field of performance testing, interpreting raw data often includes converting epoch or UNIX time into a human-readable format. Excel offers a straightforward way to handle this conversion. Here's a simplified guide:

A client recently requested response time data with timestamps in an easy-to-read format. I used JMeter to conduct the testing and saved the results in a .jtl file. The steps below explain how to transform UNIX timestamps into a digestible date and time format using Excel.

Step 1: Access and Prepare the Data

  • Open the JMeter .jtl log file in a text editor such as Notepad or Notepad++.
  • Copy the content of the file and paste it into a new Excel spreadsheet.

Step 2: Split the Data

  • Highlight all the data in the Excel sheet.
  • Go to the 'Data' tab and choose 'Text to Columns.'
  • In the 'Convert Text to Columns Wizard,' select 'Delimited' and click 'Next.'
  • Choose 'Comma' as the delimiter and follow the wizard through to the end.

Step 3: Format the Timestamp Column

  • Change the format of the timestamp column (usually column A) from 'General' to 'Number' by right-clicking the column and selecting 'Format Cells.'

Step 4: Parse the Timestamp

  • Insert a new column between the timestamp column and the next column.
  • In the first cell of the new column (e.g., B2), enter the formula =LEFT(A2,10) & "." & RIGHT(A2,3) to split the 13-digit UNIX millisecond timestamp into seconds and milliseconds.
  • Drag the formula down to apply it to the entire column.

Step 5: Convert to Date/Time Format

  • Add another new column between the new column and the next one.
  • In the first cell of this additional column (e.g., C2), enter the formula =(((B2/60)/60)/24)+DATE(1970,1,1) to convert the parsed timestamp into a readable date/time value (the result is in UTC; adjust for your time zone if needed).
  • Drag the formula down to fill in the rest of the column.

Step 6: Format the Date/Time Column

  • Adjust the format of the date/time column (e.g., column C) to your preference using the 'Custom' format option.

Step 7: Review and Finalize

  • Check that the date/time data in column C is now presented in a human-readable format, converted from the original UNIX timestamps.
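
If you prefer to script the conversion instead of doing it in Excel, here is a minimal Python sketch of the same idea. It assumes the timestamps are standard 13-digit JMeter values (epoch milliseconds); the sample value below is just an illustration:

from datetime import datetime, timezone

# Example JMeter timestamp in epoch milliseconds
# (1714435200000 ms corresponds to 2024-04-30 00:00:00 UTC)
ts_millis = 1714435200000

# Split into seconds and milliseconds, mirroring the LEFT/RIGHT step in Excel
seconds, millis = divmod(ts_millis, 1000)

# Convert to a human-readable UTC date/time
readable = datetime.fromtimestamp(seconds, tz=timezone.utc)
print(f"{readable:%Y-%m-%d %H:%M:%S}.{millis:03d}")
# -> 2024-04-30 00:00:00.000
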
Happy Testing!

Monday, April 29, 2024

How to Convert Postman Collections to JMeter Scripts

Are you looking to transition your API collection from Postman to Apache JMeter for performance testing? Whether you prefer an online solution or are comfortable with offline setup, we've got you covered. In this guide, we'll explore two methods to seamlessly convert your Postman collections into JMeter scripts.

Export Postman Collection:
  1. Open Postman and navigate to the collection you want to export.
  2. Click on the three-dot menu next to the collection name and select Export.
  3. In the export dialog, choose the format v2.1 or v2.0 (either should work, but v2.1 is preferred for the latest collections).
  4. Click Export to save the collection as a JSON file.

Option 1: Loadium Application (Online)

Loadium, a versatile load testing tool supporting various performance test types, including JMeter, Gatling, and Selenium, offers a convenient online solution for converting Postman collections to JMX projects.

Here's how it works:

1. Visit Loadium's website and sign up for a trial account.
2. Once logged in, navigate to the "More Options" section and select "Convert to JMX."
3. Upload your Postman collection.
4. Follow the on-screen prompts to initiate the conversion process.
5. Download the generated JMX file.
6. Open Apache JMeter and import the downloaded JMX file to begin your performance testing.

Option 2: postman2jmx (Offline)

If you prefer an offline approach and are comfortable with a bit of setup, postman2jmx, available on GitHub, provides an excellent solution for converting Postman collections to JMeter scripts.

Here's how to set it up on macOS:

1. Install Homebrew, Maven, and Java if not already installed.
2. Clone the postman2jmx repository from GitHub.
3. Build postman2jmx using Maven.
4. Navigate to the "target" folder in the terminal.
5. Convert your Postman collection to a JMX file. Typically, you would run a command like java -jar postman-to-jmx-converter.jar -i postman_collection.json -o jmeter_script.jmx, replacing postman_collection.json with the path to your exported Postman collection and jmeter_script.jmx with the desired output file name.
6. Open the converted JMX file in Apache JMeter to proceed with your performance testing.

Happy testing!

Sunday, April 28, 2024

Implementing Kerberos Authentication with LoadRunner | Kerberos Authentication

Kerberos is a network authentication protocol designed to provide strong authentication for client/server applications. It is commonly used in enterprise environments to authenticate users and services over an untrusted network, such as the internet. Kerberos was developed at MIT in the 1980s as part of Project Athena, and it is named after the three-headed dog from Greek mythology.

Key Concepts:

  1. Key Distribution Center (KDC):

    • The KDC is the heart of a Kerberos setup, responsible for authenticating users and services.
    • It consists of two main parts: the Authentication Server (AS) and the Ticket Granting Server (TGS).
    • The AS handles the initial authentication of users, while the TGS issues service tickets for authenticated users to access services.
  2. Tickets:

    • Ticket-Granting Ticket (TGT): When a user authenticates with the AS, they receive a TGT, which allows them to request service tickets from the TGS.
    • Service Ticket: Once a user has a TGT, they can request a service ticket from the TGS for a specific service. The service ticket is presented to the service to gain access.
  3. Authenticator:

    • An authenticator is a piece of encrypted data sent alongside a ticket to verify the user's identity.
  4. Session Keys:

    • Kerberos uses session keys for encrypted communication between the client, server, and KDC.

How Kerberos Works:

  1. User Authentication:

    • A user starts by logging in and requesting authentication from the AS.
    • The AS verifies the user's credentials and, if successful, provides the user with a TGT.
  2. Requesting a Service Ticket:

    • When the user wants to access a service, they present their TGT to the TGS and request a service ticket for the desired service.
    • If the TGS verifies the TGT, it issues a service ticket to the user.
  3. Accessing a Service:

    • The user then presents the service ticket to the desired service.
    • The service verifies the ticket and, if valid, grants the user access.

Advantages of Kerberos:

  • Strong Authentication: Uses cryptographic techniques for secure authentication.
  • Single Sign-On (SSO): Once authenticated, users can access multiple services without re-entering credentials.
  • Secure Communication: Communication between components is encrypted using session keys.

Challenges:

  • Clock Synchronization: Kerberos relies on time-sensitive tickets, so all parties must have synchronized clocks.
  • Complex Configuration: Setting up and maintaining a Kerberos environment can be complex.

Implementing Kerberos Authentication with LoadRunner:

Kerberos authentication provides mutual authentication between users and services over a network. It consists of three key components:
  1. Client
  2. Key Distribution Center (KDC): Contains both the Authentication Server (AS) and the Ticket Granting Server (TGS).
  3. Server

Kerberos Authentication Flow:

1. Client Request: The client sends a request with its username to the Authentication Server (AS) for a ticket-granting ticket (TGT).
2. Authentication Server Response: The AS sends back a session key and TGT based on the client’s information.
3. Ticket Granting Server (TGS): The client sends the TGT to the TGS, which creates a service ticket and sends it back to the client.
4. Client and Server Communication: The client uses the service ticket to authenticate with the server and establish a session.

Configuring the Krb5 Configuration File:
The `krb5.ini` configuration file contains detailed information about the Kerberos realm, KDC/AS addresses, keytab file, and default settings like encryption algorithms.
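
For reference, a `krb5.ini` usually follows the layout sketched below. The realm name, hostnames, and encryption types here are placeholders only; substitute the values supplied by your Kerberos team:

[libdefaults]
    default_realm = EXAMPLE.COM
    default_tkt_enctypes = aes256-cts-hmac-sha1-96
    default_tgs_enctypes = aes256-cts-hmac-sha1-96

[realms]
    EXAMPLE.COM = {
        kdc = kdc.example.com
        admin_server = kdc.example.com
    }

[domain_realm]
    .example.com = EXAMPLE.COM
    example.com = EXAMPLE.COM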

Keytab File: Contains Kerberos principal and encrypted keys derived from the Kerberos password.
Obtain this file from the Kerberos maintenance team or create it yourself.

Configuration Steps:
1. Place krb5.ini: Copy the `krb5.ini` file to the Windows folder (`C:\WINDOWS\` in Windows XP) or specify an environment variable `KRB5_CONFIG` with the file path.
2. Enable Integrated Authentication: In runtime settings, go to Preferences > Options and change "Enable integrated authentication" to "Yes".
3. Set User Credentials: At the start of the script, use the `web_set_user` function to specify the login, password, and web/proxy server (host:port) for authentication.

Adjust these configurations as needed based on your specific application and LoadRunner versions.

Saturday, April 27, 2024

Understanding the `vufd` Option in LoadRunner Enterprise (LRE) | LoadRunner vufd

What Is the `vufd` Option?

The `vufd` option, short for Virtual User Functional Data, is a set of parameters and settings in LoadRunner Enterprise that allows users to control how virtual users interact with an application during performance tests. Virtual users (VUs) simulate real users, and the `vufd` option helps manage their functional data, which includes the input and output data for these simulated users.

Importance of `vufd` in Performance Testing:

1. Realistic User Simulation: By managing the functional data for virtual users, the `vufd` option helps simulate realistic user interactions with an application. This provides a more accurate representation of how the application will perform in real-world scenarios.

2. Test Data Management: Proper management of test data is crucial for accurate performance testing. The `vufd` option allows users to control the data used by virtual users, ensuring consistency and reliability in test results.

3. Enhanced Analysis: With the `vufd` option, testers can collect and analyze data from virtual users more effectively. This data can provide insights into application performance, identify bottlenecks, and guide optimization efforts.

How to Leverage the `vufd` Option:

1. Configure Virtual Users: Start by setting up virtual users with realistic profiles and behaviors. Use the `vufd` option to manage the functional data these users interact with during testing.

2. Use Parameterization: Parameterize the virtual user scripts to simulate different user scenarios and inputs. This allows you to test a variety of use cases and better understand application performance.

3. Monitor Performance: During testing, closely monitor the performance of the application as virtual users interact with it. Look for patterns in the data collected using the `vufd` option.

4. Analyze Results: After the test, analyze the data collected from virtual users. Identify any issues with application performance and use the insights to optimize the application for better user experiences.

5. Iterate and Improve: Based on the analysis, make improvements to the application and the test scripts. Repeat the process to continue refining performance and ensuring a high-quality user experience.

The `vufd` option in LoadRunner Enterprise is a key feature that enables testers to manage Virtual User Functional Data effectively. By leveraging this option, you can create realistic user simulations, manage test data, and gain valuable insights into application performance. Whether you are testing a web application, a mobile app, or an API, the `vufd` option can help you ensure your application performs optimally under varying levels of load.

How to Save a Downloaded File to the Local System in LoadRunner

In certain scenarios, downloading files becomes a requirement. Rather than storing the files on your local machine for later verification, you may simply want to measure the transaction time of the download process. To do this, you can capture the data from the server response and write it to a file using standard file operations. Just be sure you know the type of file you will be downloading.

Here's an example of downloading a `.pdf` file and saving it to your local system:


// Declare variables
long filePointer;
long fileSize;

// Create a file for writing
filePointer = (long)fopen("c:\\test_file.pdf", "wb");

// Start a transaction to measure download time
lr_start_transaction("file_download");

// Increase the parameter size to accommodate the data
web_set_max_html_param_len("100000");

// Use web_reg_save_param with the appropriate boundaries to capture server response data for the download
web_reg_save_param("FILEDATA", "LB=", "RB=", "Search=Body", LAST);

// HTTP request to download the .pdf file
web_url("http://serverURL:port/app/resource/getme.pdf");

// Retrieve the download size
fileSize = web_get_int_property(HTTP_INFO_DOWNLOAD_SIZE);

// Write the captured data to an output file
fwrite(lr_eval_string("{FILEDATA}"), fileSize, 1, filePointer);

// End the transaction
lr_end_transaction("file_download", LR_AUTO);

// Close the file
fclose(filePointer);

This code snippet demonstrates how to use file operations and LoadRunner functions to download a PDF file from a server, capture the download data, save it to a local file, and measure the transaction time. By following this process, you can efficiently analyze the download performance without needing to store the downloaded files on your machine for later verification.

Python Script for Dynamically Generating LoadRunner Transaction Names | Automate Transaction Names in LoadRunner

In performance testing and monitoring, it's crucial to understand how different parts of your application are performing under various conditions. Monitoring transactions around key sections of code, such as API calls and critical functions, helps identify performance bottlenecks and areas for optimization. LoadRunner provides powerful tools for performance testing, including functions like `lr_start_transaction` and `lr_end_transaction` that enable you to measure the execution time of specific code blocks. However, adding these functions manually can be a time-consuming and error-prone process. To streamline this task, I've developed a script that automatically inserts transaction monitoring statements around key function calls in a C file. This script simplifies the process and helps you achieve more efficient and consistent performance testing results. Let's explore how this script works.

import re

# Open the source script
with open('C:\\Users\\easyloadrunner\\Downloads\\FeatureChange-V1\\Action.c', 'r') as file:
    data = file.read()

# Split the data into lines
lines = data.split('\n')

# Holds the most recently captured request name
captured_value = ""

# Open the output file
with open('C:\\Users\\easyloadrunner\\Downloads\\FeatureChange-V1\\ModifiedAction.C', 'w') as output_file:
    # Loop through each line
    for line in lines:
        # Check if the line contains a web_custom_request call
        if "web_custom_request(" in line:
            # Use a regular expression to capture the request name
            match = re.search(r'web_custom_request\("(.+?)"', line)
            if match:
                # Start a transaction named after the captured request
                captured_value = match.group(1)
                new_line = 'lr_start_transaction("' + captured_value + '");\n' + line
            else:
                new_line = line
        elif "LAST);" in line and captured_value:
            # Close the transaction after the request's closing LAST);
            new_line = line + '\n' + 'lr_end_transaction("' + captured_value + '", LR_AUTO);'
            captured_value = ""
        else:
            new_line = line
        # Write the (possibly modified) line to the output file
        output_file.write(new_line + '\n')
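
To illustrate the intended output (assuming, as the regular expression does, that the request name sits on the same line as the opening web_custom_request call): a hypothetical step named "Login" would come out preceded by lr_start_transaction("Login"); and followed, immediately after its closing LAST); line, by lr_end_transaction("Login", LR_AUTO);.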

Batch Job Performance Testing with JMeter

Batch job performance testing involves evaluating the efficiency, scalability, and reliability of batch processes. Start by setting clear goals and objectives for the testing, such as response times and throughput rates. Identify key batch jobs for testing based on business impact and usage patterns. Design test scenarios reflecting normal, peak, and edge cases, and prepare test data sets that resemble production data.

Automate test execution using tools like Apache JMeter and configure the tests to run batch jobs using appropriate samplers. Monitor key performance metrics such as response times, CPU and memory usage, and disk I/O. Conduct load and stress tests to evaluate performance under various conditions, identifying bottlenecks and areas for improvement.

Analyze test results, optimize batch jobs as needed, and re-test to verify improvements. Document findings and recommendations for stakeholders, and incorporate regular performance testing as part of continuous improvement to maintain efficient batch job performance.

Batch jobs are automated tasks that run without direct user involvement. They are vital in many sectors for operations such as data backup, processing, report creation, system upkeep, file management, automated testing, database administration, financial transactions, job scheduling, and data archiving.

Types of Batch Jobs:

1. Data Backup and Recovery Jobs: These tasks involve regularly backing up crucial data and databases to ensure data safety and restore points in case of data loss. For instance, a job might back up a company's customer data every night.

2. Data Processing and ETL Jobs: These jobs are used for data extraction, transformation, and loading tasks in data warehousing scenarios. For example, a job may extract data from various sources, transform it to a standard format, and load it into a data warehouse.

3. Report Generation Jobs: These generate complex reports during off-peak hours to minimize impact on system performance. For instance, a job might generate monthly sales reports for management overnight.

4. System Maintenance Jobs: These handle routine maintenance tasks like clearing temporary files and rotating log files. For example, a job might clean up old log files and free up disk space weekly.

5. File Processing Jobs: These tasks manage large file volumes, such as converting file formats and validating data. For instance, a job might convert image files from one format to another and validate data consistency.

6. Automated Testing Jobs: These run tests on applications automatically, ensuring they work as expected without constant user input. For example, a job might perform end-to-end tests on a web application every night.

7. Database Maintenance Jobs: These perform tasks like reindexing databases and checking data integrity. For instance, a job might run a routine maintenance check on a database to ensure optimal performance.

8. Batch Processing in Financial Systems: These jobs handle financial tasks such as payroll processing or generating financial reports. For example, a job might process employee payroll data and generate paychecks every two weeks.

9. Job Scheduling and Automation: These automate various tasks at specific times or intervals. For example, a job might automatically generate backups every evening at 11 p.m.

10. Data Archiving Jobs: These handle archiving old data to maintain optimal database performance and storage usage. For instance, a job might archive data older than five years to a secondary storage location.

Creating a Basic Batch Job using JMeter:

Example: A simple batch job might copy files from one folder to another using a Python script. This operation might be scheduled to run every day at midnight, ensuring data consistency and security. A minimal sketch of such a job is shown below.
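
Here is a minimal Python sketch of such a copy job. The source and destination folders are placeholders, and in practice a scheduler such as cron or Windows Task Scheduler would launch the script at midnight:

import shutil
from pathlib import Path

# Placeholder folders for the nightly copy job
SOURCE = Path("C:/data/incoming")
DESTINATION = Path("C:/data/archive")

def copy_files():
    # Copy every file from SOURCE to DESTINATION, creating the target folder if needed
    DESTINATION.mkdir(parents=True, exist_ok=True)
    for item in SOURCE.iterdir():
        if item.is_file():
            shutil.copy2(item, DESTINATION / item.name)
            print("Copied", item.name)

if __name__ == "__main__":
    copy_files()
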

Testing the Batch Job with JMeter:
JMeter can be used to simulate and test the performance of batch jobs. By creating a Thread Group and adding an OS Process Sampler, you can run the batch job command (for example, the command that launches the copy script above) within JMeter. Adding listeners like Aggregate Graph and Summary Report allows you to evaluate performance metrics such as average response time and percentiles.




Metrics for Batch Processing Performance:
  •  Throughput: Number of tasks or commands completed in a specified time period.
  •  Latency: Time taken for a task or command to be submitted and completed.
  •  Utilization: Percentage of available resources (e.g. CPU, memory, disk, network) used during the batch process.
  •  Error Rate: Percentage of tasks or commands that fail or produce incorrect results.
  •  Additional Metrics: May include cost, reliability, scalability, or security, depending on your objectives.
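
As a quick worked example: if a nightly run completes 12,000 records in 10 minutes, its throughput is 12,000 / 10 = 1,200 records per minute; if 60 of those records fail, the error rate is 60 / 12,000 = 0.5%. (The figures are purely illustrative.)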

Data Collection and Analysis:

  • Logging: Record events and messages generated by the batch process in a file or database.
  • Monitoring: Observe and measure performance and status of the batch process and resources used in real time or periodically.
  • Profiling: Identify and measure time and resources consumed by each part or function of the batch process.
  • Auditing: Verify and validate output and results of the batch process against expected standards and criteria.
Use collected data to identify bottlenecks, errors, inefficiencies, or anomalies affecting performance and find root causes and solutions.

Optimizing and Improving Batch Processing Performance:
  • Parallelization: Divide the batch process into smaller, independent tasks or commands to run simultaneously on multiple processors or machines.
  • Scheduling: Determine optimal time and frequency to run the batch process based on demand, availability, priority, or dependency of tasks or commands.
  • Tuning: Adjust parameters, settings, or options of the batch process, resources, or environment to enhance performance and quality.
  • Testing: Evaluate and compare performance and results before and after applying optimization techniques to ensure they meet expectations and requirements.

Happy Testing!