Wednesday, February 28, 2024

Handling Failed Text Recognition in LoadRunner RTE protocol | RTE protocol LoadRunner

Imagine you have a LoadRunner script that includes the `TE_wait_text` function to wait for a specific text to appear on the application under test (AUT). You've set a timeout value, but sometimes the desired text may not appear within that time frame. In such cases, the script should gracefully handle the failure, fail the transaction, and continue to the next iteration.

Solution:
To address this scenario, we'll use conditional statements in conjunction with the `TE_wait_text` function to check if the desired text is found within the specified time limit. If the text is not found, we'll explicitly fail the transaction and continue with the script execution.

Example:
Suppose we have a transaction named "Search_Text_Transaction" in our script, and we're waiting for the text "Welcome" to appear on the AUT. Here's how we can handle the scenario:

Code:

int rc, col, row;
char text[128];

lr_start_transaction("Search_Text_Transaction");

// Wait up to 10 seconds for "Welcome" anywhere in the 80x24 screen area
rc = TE_wait_text("Welcome", 10, 1, 1, 80, 24, &col, &row, text);

if (rc != 0) {
    lr_error_message("Desired text not found. Failing transaction.");
    lr_end_transaction("Search_Text_Transaction", LR_FAIL);
    return 0; // continue with the next iteration
}

lr_end_transaction("Search_Text_Transaction", LR_PASS);


1. `TE_wait_text`: This function waits for the specified text ("Welcome" in this case) to appear on the AUT. The arguments are the search pattern, the timeout in seconds (10 here), the screen area to search (columns 1-80, rows 1-24), and output parameters that receive the position and the matched text. It returns 0 when the text is found and a non-zero error code otherwise.
2. Conditional Statement: We check the return value of `TE_wait_text`. If the value is non-zero, the desired text was not found within the specified time limit.
3. `lr_error_message`: This function writes an error message to the VuGen log indicating that the desired text was not found.
4. `lr_end_transaction` with `LR_FAIL`: This call explicitly marks the transaction "Search_Text_Transaction" as failed; returning from the Action then lets the Vuser proceed to the next iteration.

By implementing conditional statements and explicitly failing transactions, we can handle scenarios where text recognition functions fail to find the desired text within the specified time limit in LoadRunner VuGen scripts. This ensures the scripts continue execution gracefully, even in the face of unexpected failures, resulting in more reliable performance testing.

Monday, February 26, 2024

Error: "Unable to create Java VM" when running/recording a Java Vuser protocol script

Error -22994 indicates that LoadRunner is unable to create a Java Virtual Machine (VM). This error typically occurs when LoadRunner tries to initiate Java-based protocols like Java over HTTP or RMI/IIOP.

Here are some steps to troubleshoot and resolve this issue:

1. Check Java Installation:
Ensure that Java Runtime Environment (JRE) or Java Development Kit (JDK) is installed on the machine where LoadRunner is running. Verify that the Java installation path is correctly set in the system environment variables.

2. Check Java Version Compatibility:
Verify that the installed version of Java is compatible with the version of LoadRunner you are using. Some versions of LoadRunner may have specific requirements regarding Java versions.

3. Check LoadRunner Configuration:
In LoadRunner, go to "Tools" > "Options" > "Java VM". Ensure that the correct Java executable path is specified. Update the path if necessary to point to the correct Java installation directory.

4. Check System Environment Variables:
Ensure that the `JAVA_HOME` environment variable is set correctly to the Java installation directory. Additionally, verify that the `PATH` environment variable includes the directory containing the Java executable.

5. Check LoadRunner Runtime Settings:
If you're running Java-based protocols in a LoadRunner scenario, ensure that the runtime settings for the script are configured correctly. Check the Java Runtime Settings in the "Runtime Settings" section of the VuGen script.

6. Check LoadRunner License:
Ensure that your LoadRunner license includes support for Java-based protocols. Some versions of LoadRunner require specific licenses to use Java protocols.

7. Check System Resources:
Insufficient system resources such as memory or CPU may also cause issues when creating a Java VM. Ensure that the system has enough resources available to run both LoadRunner and Java.

8. Check Firewall/Antivirus Settings:
Sometimes, firewall or antivirus software may block LoadRunner from creating a Java VM. Temporarily disable any firewall or antivirus software and try running LoadRunner again to see if the issue persists.

9. Check LoadRunner Logs:
Check the LoadRunner logs for more detailed error messages or any additional information that might help diagnose the problem further.
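As a quick aid for steps 1, 2, and 4, here is a small Python sketch (the check logic and messages are illustrative, not a LoadRunner utility) that flags the most common JAVA_HOME problems on a Load Generator:

```python
import os

def check_java_env(environ):
    """Return a list of likely Java misconfigurations for a Java Vuser host."""
    problems = []
    java_home = environ.get("JAVA_HOME")
    if not java_home:
        problems.append("JAVA_HOME is not set")
        return problems
    if not os.path.isdir(java_home):
        problems.append("JAVA_HOME is not a directory: " + java_home)
        return problems
    # On Windows the executable is java.exe; elsewhere it is java
    exe = "java.exe" if os.name == "nt" else "java"
    if not os.path.isfile(os.path.join(java_home, "bin", exe)):
        problems.append("no java executable under JAVA_HOME/bin")
    return problems

print(check_java_env(dict(os.environ)))
```

Running it with the machine's real environment prints an empty list when the basics are in order, which rules out steps 1 and 4 before you dig into LoadRunner-specific settings.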

Sunday, February 25, 2024

How to call a custom C# dll in VUGen | Calling Custom C# DLLs in VuGen


Step 1: Create a Custom C# DLL

using System;

namespace CustomFunctions
{
    public class CustomFunctions
    {
        public static string HelloWorld()
        {
            return "Hello, World!";
        }
    }
}

Step 2: Prepare VuGen Environment

Place the generated DLL file in a location accessible by VuGen. Note that a managed C# assembly does not expose plain C entry points by default; for `lr_load_dll` to find `HelloWorld`, the method typically has to be exposed as an unmanaged export (for example, with a tool such as UnmanagedExports) or wrapped in a COM/C++ layer.

Step 3: Call the Custom DLL in VuGen

// Prototype for the exported function (VuGen scripts are compiled as C, so
// no extern "C" is needed; lr_load_dll is already declared by VuGen).
// The C# method returns a string, which arrives in C as a char pointer.
char *HelloWorld();

Action()
{
    // Load the DLL; use a full path or copy it into the script directory.
    lr_load_dll("path_to_dll\\CustomFunctions.dll");

    lr_output_message("DLL returned: %s", HelloWorld());

    return 0;
}


Saturday, February 24, 2024

Error -27796: Failed to connect to server "XXXXXXXXXX.XXXX:443": [10060] Connection timed out

Error -27796 in LoadRunner indicates a failure to connect to the server within the specified timeout period. This can happen due to various reasons, including network issues, server unavailability, firewall restrictions, or incorrect server configurations. Here are some steps you can take to troubleshoot and resolve this issue:

// Force TLS 1.2 in case of SSL negotiation issues
web_set_sockets_option("SSL_VERSION", "TLS1.2");

// Raise the HTTP-request connect timeout (in seconds) at the beginning of
// the Action section; use a higher value such as "1600" if the server is
// slow to accept connections (this can also be set in Run-Time Settings)
web_set_timeout(CONNECT, "300");
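If the timeouts above do not help, it is worth confirming outside LoadRunner whether the host is reachable at all. A minimal Python sketch (the host and port are placeholders for your server) that separates raw TCP reachability from the TLS handshake, so a -27796-style timeout can be narrowed down to a network block versus an SSL/TLS mismatch:

```python
import socket
import ssl

def probe(host, port=443, timeout=10):
    """Report whether TCP connect and the TLS handshake succeed."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            ctx = ssl.create_default_context()
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                return "ok: " + tls.version()
    except socket.timeout:
        return "timeout: TCP connect or TLS handshake did not finish"
    except ssl.SSLError as exc:
        return "tls error: " + str(exc)
    except OSError as exc:
        return "connect failed: " + str(exc)
```

A "connect failed" or "timeout" result points at firewalls, DNS, or server availability; a "tls error" points at the SSL version setting above.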

Sunday, February 18, 2024

How to Run different JAVA scripts using different JDK versions on the same LG

Running different Java scripts using different JDK versions on the same Load Generators in LoadRunner can be achieved by configuring the Java Runtime Environment (JRE) for each script individually. Here's a step-by-step guide:

1. Launch LoadRunner Agent as a Process: Start the LoadRunner Agent on the Load Generator machine as a process rather than a service. This can typically be done from the LoadRunner installation directory or through the command line.

  • Syntax: start <path_to_loadrunner_agent_executable>
  • Example: start C:\LoadRunner\bin\magentproc.exe

2. Enable Terminal Services: Before proceeding, ensure that Terminal Services are enabled on the Load Generator machine. This setting allows multiple users to log in and run processes concurrently.

Enable Terminal Services:

  • Path: Start -> Programs -> LoadRunner -> Advanced Settings -> Agent Configuration -> Enable Terminal Services

3. Set Up Different Terminal Sessions: Open a separate terminal session for each JDK environment required by the scripts. Log in as a different user for each session to ensure isolation.

4. Configure JDK Environment: In each terminal session, set up the environment variables to point to the respective JDK version required by the associated script. This ensures that each session uses the correct JDK.

  • Example: set JAVA_HOME=C:\Program Files\Java\jdk1.8.0_291
  • Example: set PATH=%JAVA_HOME%\bin;%PATH%

5. Launch LoadRunner Agent: In each terminal session, launch the LoadRunner Agent process. The Agent will inherit the JDK environment specified in the session's environment variables.

Launch LoadRunner Agent:

  • Syntax: start <path_to_loadrunner_agent_executable>
  • Example: start C:\LoadRunner\bin\magentproc.exe

6. Connect Controller to Each Session: In the LoadRunner Controller, connect to each terminal session on the Load Generator machine using the appropriate connection method (e.g., RDP). Each session will be identified as LG:0, LG:1, LG:2, and so on.

7. Set Sessions to "Ready" State: Once connected, ensure that each session is in a "Ready" state in the Controller. This indicates that the LoadRunner Agent process is running and ready to execute scripts.

8. Run Scripts: Finally, run each script in the corresponding session that has the respective JDK environment configured. The scripts will execute using the JDK specified for each session, allowing concurrent execution of scripts requiring different JDK versions.

By following these steps, you can effectively manage and execute multiple scripts with different JDK requirements concurrently on a single Load Generator using Terminal Services. This approach provides flexibility and isolation for running scripts with diverse JDK dependencies.
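The per-session environment setup in steps 4-5 can be sketched in Python (the JDK and agent paths are illustrative; the `magentproc.exe` location varies by installation). The key idea is that each agent process inherits its own JAVA_HOME and PATH:

```python
import os
import subprocess

def build_agent_env(jdk_home, base_env=None):
    """Return an environment mapping that points JAVA_HOME/PATH at one JDK."""
    env = dict(base_env if base_env is not None else os.environ)
    env["JAVA_HOME"] = jdk_home
    # Prepend this JDK's bin directory so its java wins over any system java
    env["PATH"] = os.path.join(jdk_home, "bin") + os.pathsep + env.get("PATH", "")
    return env

def launch_agent(agent_path, jdk_home):
    """Start one agent process that inherits the per-session JDK environment."""
    return subprocess.Popen([agent_path], env=build_agent_env(jdk_home))
```

Calling `launch_agent(r"C:\LoadRunner\bin\magentproc.exe", r"C:\Program Files\Java\jdk1.8.0_291")` in one session and the same call with a different JDK path in another reproduces the manual steps above.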

"Failed to read file" when loading parameter file in VuGen | LoadRunner parameter errors

The error "Failed to read file" in VuGen typically occurs when there is an issue with the parameter file you are trying to load. Here are several steps you can take to troubleshoot and resolve this issue:

1. Check File Path: Ensure that the parameter file you are trying to load exists in the specified location. Verify the file path and make sure there are no typos or errors in the path.

2. File Format: Confirm that the parameter file is in the correct format and is not corrupted. VuGen supports various parameter file formats such as CSV (comma-separated values), TXT (plain text), DAT (data), etc. Make sure the file is saved in a compatible format.

3. File Permissions: Check the file permissions to ensure that VuGen has the necessary read access to the parameter file. If the file is restricted or located in a protected directory, VuGen may encounter issues while trying to read it.

4. File Encoding: Verify the encoding of the parameter file. VuGen supports different character encodings, such as UTF-8, ANSI, Unicode, etc. Ensure that the encoding of the file matches the expected encoding in VuGen.

5. File Content: Review the content of the parameter file for any irregularities or unexpected characters. Remove any special characters or formatting that may be causing parsing errors.

6. VuGen Version Compatibility: Ensure that the parameter file is compatible with the version of VuGen you are using. Sometimes, parameter files created in newer versions may not be fully compatible with older versions of VuGen.

7. Try a Different File: If the issue persists, try loading a different parameter file to see if the problem is specific to the file you are currently using. This can help determine if the issue lies with the file itself or with VuGen.

8. Restart VuGen: Close VuGen and reopen it to see if the error persists. Sometimes, restarting the application can resolve temporary issues or glitches.

9. Update VuGen: If you are using an older version of VuGen, consider updating to the latest version. Newer versions may include bug fixes and improvements that address issues related to loading parameter files.
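Several of the checks above (file path, format, consistent columns) can be automated before pointing VuGen at the file. A Python sketch, with an illustrative function name and messages, for a CSV-style parameter file:

```python
import csv
import os

def validate_param_file(path, encoding="utf-8"):
    """Sanity-check a delimited parameter file before loading it in VuGen."""
    if not os.path.isfile(path):
        return ["file not found: " + path]
    try:
        with open(path, newline="", encoding=encoding) as f:
            rows = list(csv.reader(f))
    except UnicodeDecodeError:
        return ["file is not valid " + encoding]
    if not rows:
        return ["file is empty"]
    width = len(rows[0])  # the header row defines the expected column count
    return [f"line {i}: expected {width} columns, found {len(row)}"
            for i, row in enumerate(rows[1:], start=2) if len(row) != width]
```

An empty result means the basics are in order; otherwise each message points at a specific line to fix, which is usually faster than trial-and-error inside VuGen.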




Saturday, February 17, 2024

What is Forward proxy and Reverse proxy? What are major differences between them?

Forward Proxy:
Imagine you're at school, and you want to borrow a book from the library. However, instead of going directly to the library yourself, you ask your friend who's already there to get the book for you. Your friend acts as a "forward proxy" for you. They go to the library on your behalf, borrow the book, and give it to you.

In technical terms, a forward proxy is like a middleman server that sits between you (the client) and the internet. When you request a web page or any other resource, the request first goes to the forward proxy server. The proxy server then forwards the request to the internet on your behalf, receives the response, and sends it back to you.


Reverse Proxy:
Now, imagine you're organizing a surprise birthday party for your friend at your house. You want to keep it a secret, so you ask another friend to stand outside your house and greet guests as they arrive. This friend is your "reverse proxy." They make sure that no one enters the house directly but instead directs them to you inside.

In technical terms, a reverse proxy is a server that sits between the internet and a web server. When someone tries to access a website, the request first goes to the reverse proxy server. The reverse proxy then decides which web server should handle the request, forwards the request to that server, receives the response, and sends it back to the requester.


Major Differences:

1. Direction of Communication:
- Forward proxy: Client communicates with the proxy server, which then communicates with the internet.
- Reverse proxy: Client communicates with the reverse proxy server, which then communicates with the web server.

2. Purpose:
- Forward proxy: Used to access the internet indirectly, often for security and privacy reasons.
- Reverse proxy: Used to improve performance, scalability, and security of web servers by distributing incoming requests and hiding server details.

3. Visibility:
- Forward proxy: Client knows they are using a proxy server.
- Reverse proxy: Client may not be aware that they are interacting with a reverse proxy; it appears as though they are communicating directly with the web server.

So, in simpler terms, a forward proxy acts like a friend who helps you access things on the internet indirectly, while a reverse proxy acts like a guard protecting your house from unwanted guests and directing them to the right place.
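To make the reverse proxy's "directing guests to the right place" concrete, here is a minimal Python sketch of the routing decision only (the backend URLs are hypothetical). Real reverse proxies such as Nginx or HAProxy add health checks, retries, and header rewriting on top of this:

```python
import zlib

def pick_backend(path, backends):
    """The reverse proxy's core decision: map each incoming request to one
    upstream server. CRC32 of the path gives a stable, even spread, so the
    same URL consistently lands on the same backend."""
    return backends[zlib.crc32(path.encode("utf-8")) % len(backends)]
```

A forward proxy makes the mirror-image decision: it chooses nothing on the server side, it simply relays the client's request outward and returns the response.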

Friday, February 16, 2024

Communication between VuGen and the TruClient browser could not be established - LoadRunner error | Ajax TruClient protocol

I recently faced the error "Communication between VuGen and the TruClient browser could not be established" while working with the Ajax TruClient protocol. I resolved it using the methods below.

Operating system:- Windows 10 
Mozilla firefox version:- 121.0 
LR Version:- 12.5

1. Run LoadRunner as Administrator: Right-click on the LoadRunner/VuGen shortcut and select "Run as administrator". This ensures that LoadRunner has the necessary permissions to communicate with the TruClient browser.

2. Check Firewall and Antivirus Settings: Sometimes, firewall or antivirus software can block communication between applications. Make sure that LoadRunner/VuGen is added to the exceptions list in your firewall and antivirus settings.

3. Verify TruClient Browser Configuration: Ensure that your TruClient browser (Firefox) is properly configured and updated to the latest version. Check for any browser extensions or plugins that might interfere with the communication between VuGen and the browser.

4. Restart LoadRunner and Browser: Close all instances of LoadRunner/VuGen and the TruClient browser. Then, restart both applications and try again to see if the error persists.

5. Check User Permissions: Ensure that your Windows user account has the necessary permissions to access and modify files and settings related to LoadRunner/VuGen. You may need to contact your system administrator to verify and adjust your user permissions if necessary.

6. Update LoadRunner: Since you're using an older version of LoadRunner (12.5), consider updating to the latest version if possible. Newer versions may have bug fixes and improvements that could resolve compatibility issues with Windows 10 and Firefox.

7. Check LoadRunner/VuGen Logs: Look for any error messages or warnings in the LoadRunner/VuGen logs that might provide more information about the cause of the communication error. The logs can usually be found in the installation directory or in the Windows event viewer.

8. Contact LoadRunner Support: If you're unable to resolve the issue using the above steps, consider reaching out to LoadRunner support for further assistance. Provide them with detailed information about the error message and any troubleshooting steps you've already taken.

Enhancing Product Performance - The Art of Tuning and Optimization

In the world of software development, ensuring optimal product performance is paramount for user satisfaction and competitive advantage. This article explores the intricate process of tuning and optimization across various aspects of software systems to achieve peak performance levels.

1. Code Tuning:
Efficient code is the backbone of a high-performing software product. Utilizing code profilers, developers identify and optimize critical code segments to enhance execution speed. Addressing issues like memory leaks and CPU optimization leads to significant performance improvements. For example, optimizing loops and minimizing redundant function calls can drastically enhance responsiveness.

Sample Scenario: A popular e-commerce platform experiences sluggish response times during checkout, especially when handling a large number of concurrent users during peak shopping seasons.

Solution: Upon code analysis, developers identify that certain sections of the checkout process involve inefficient database queries and redundant data processing. By refactoring these code segments to streamline database interactions and optimize algorithmic complexity, the platform's responsiveness during checkout significantly improves, leading to enhanced user satisfaction and increased conversion rates.
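The "redundant function calls" fix can be illustrated with a small Python sketch; the shipping-rate lookup here is a made-up stand-in for a per-item database query. Caching one expensive lookup per distinct key removes the redundancy without changing behavior:

```python
import time
from functools import lru_cache

@lru_cache(maxsize=None)
def shipping_rate(region):
    """Stand-in for an expensive per-item lookup (e.g., a database query)."""
    time.sleep(0.01)  # simulated query latency
    return {"EU": 4.99, "US": 5.99}.get(region, 9.99)

def checkout_total(items):
    """items: (price, region) pairs. The cache collapses N identical
    lookups into one real lookup per distinct region."""
    return sum(price + shipping_rate(region) for price, region in items)
```

For a cart of 100 items in two regions, only 2 slow lookups actually run instead of 100, which is exactly the kind of win the checkout refactoring above describes.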

2. Configuration Tuning:
Fine-tuning system configurations is essential for optimal performance. Adjusting settings such as thread pools and database connections ensures smooth operation under varying workloads. Optimizing these configurations to handle peak demands prevents downtime and maintains stability, providing uninterrupted service for users.

Sample Scenario: A cloud-based application faces intermittent downtime and performance degradation, particularly during high-traffic periods such as marketing campaigns or product launches.

Solution: After conducting performance diagnostics, developers realize that the application's server configurations, including thread pool sizes and connection limits, are insufficient to handle sudden spikes in traffic. By adjusting these configurations to dynamically allocate resources based on demand and implementing load balancing strategies, the application maintains stability and availability, ensuring uninterrupted service for users during peak demand periods.
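One way to make such limits tunable per deployment is to read them from the environment instead of hard-coding them. A Python sketch of the idea (the `APP_WORKERS` variable name and fallback formula are illustrative):

```python
import os
from concurrent.futures import ThreadPoolExecutor

def make_pool(workers=None):
    """Build a worker pool whose size can be tuned without a rebuild:
    explicit argument > APP_WORKERS env var > CPU-based default."""
    if workers is None:
        default = min(32, (os.cpu_count() or 1) * 4)
        workers = int(os.environ.get("APP_WORKERS", default))
    return ThreadPoolExecutor(max_workers=workers)
```

The same pattern applies to connection-pool sizes and queue depths: the code ships once, and operations teams adjust the numbers per environment as load tests dictate.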

3. Database Tuning:
Optimizing database performance involves refining SQL queries, indexing strategies, and connection pooling. By addressing inefficiencies in queries and implementing proper indexing, developers can expedite data retrieval processes. This ensures timely access to critical information, enhancing overall system responsiveness.

Sample Scenario: A healthcare management system experiences slow response times when retrieving patient records, affecting medical professionals' ability to provide timely care.

Solution: Database performance analysis reveals poorly optimized SQL queries and inadequate indexing, leading to inefficient data retrieval processes. By optimizing SQL queries, creating appropriate indexes, and implementing caching mechanisms, developers significantly reduce query execution times. This ensures prompt access to patient information, enabling healthcare providers to deliver expedited care without delays.
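The effect of proper indexing can be demonstrated with SQLite in a few lines of Python (the `patients` table is a toy stand-in for the scenario above): the query plan switches from a full table scan to an index lookup once the index exists.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE patients (id INTEGER PRIMARY KEY, mrn TEXT, name TEXT)")
con.executemany("INSERT INTO patients (mrn, name) VALUES (?, ?)",
                [("MRN%05d" % i, "patient-%d" % i) for i in range(1000)])

def plan(sql):
    # EXPLAIN QUERY PLAN rows carry the access strategy in column 3
    return " ".join(row[3] for row in con.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT name FROM patients WHERE mrn = 'MRN00042'"
before = plan(query)
con.execute("CREATE INDEX idx_mrn ON patients (mrn)")
after = plan(query)
print(before)
print(after)
```

On a real database the same `EXPLAIN`-style check is the first step of query tuning: if the plan still shows a scan after you add the index, the query is written in a way the optimizer cannot use it.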

4. Infrastructure Tuning:
Benchmarking hardware and adjusting instance sizes are vital for resource optimization. Scaling infrastructure dynamically based on demand prevents performance degradation during peak usage periods. By fine-tuning resource allocation, developers ensure seamless scalability and reliable service for users.

Sample Scenario: A streaming service encounters buffering issues and playback interruptions during peak viewing hours, frustrating users.

Solution: Performance testing reveals that the service's infrastructure, including server instances and network bandwidth, is insufficient to handle the surge in concurrent viewers. By scaling up server instances, increasing network bandwidth, and implementing content delivery networks (CDNs) to cache and distribute content, developers ensure seamless streaming experiences for users, eliminating buffering delays and interruptions.

5. Architecture Tuning:
Evaluating system architecture reveals opportunities for performance enhancement. Integrating solutions like content delivery networks (CDNs) and queuing systems improves response times and scalability. Adopting architectures like microservices facilitates better resource allocation and fault isolation, enhancing overall system performance.

Sample Scenario: A global news website experiences slow page load times and high latency for users accessing content from different geographical regions.

Solution: Architecture assessment identifies that the website's centralized server infrastructure leads to increased latency for distant users. By implementing a distributed architecture with CDN integration, content caching, and edge computing capabilities, developers reduce latency by serving content from edge servers closer to end-users. This results in faster page load times and improved user experiences across geographical regions.

6. Network Tuning:
Optimizing network performance is crucial in distributed systems. Addressing DNS configurations and protocol optimizations reduces latency and enhances throughput. Implementing CDN solutions reduces latency by serving content closer to end-users, ensuring smooth data transmission even in geographically dispersed environments.

Sample Scenario: A collaborative workspace platform encounters network congestion and latency issues, hindering real-time collaboration and file sharing among remote teams.

Solution: Network diagnostics uncover inefficient network configurations and suboptimal routing, contributing to network congestion and latency. By optimizing DNS settings, implementing traffic prioritization, and utilizing quality-of-service (QoS) mechanisms, developers mitigate network congestion and reduce latency, ensuring smooth and responsive communication among distributed teams.

Tuning and optimizing software systems are essential for meeting performance expectations and delivering superior user experiences. By focusing on code, configuration, database, infrastructure, architecture, and network tuning, developers ensure their products operate at peak performance levels. Embracing a culture of continuous optimization is key to staying competitive in today's dynamic technology landscape.

Wednesday, February 14, 2024

Capture Server stats during the load test using python

# Python code to capture server stats during a load test
import time

import matplotlib.pyplot as plt
import psutil

# Function to capture and plot server stats
def capture_server_stats(duration, interval):
    timestamps = []
    cpu_percentages = []
    memory_percentages = []

    start_time = time.time()
    end_time = start_time + duration

    while time.time() < end_time:
        # psutil.cpu_percent blocks for `interval` seconds while measuring,
        # so no extra sleep is needed (sleeping again would double the
        # sampling period)
        cpu_percent = psutil.cpu_percent(interval=interval)
        memory_percent = psutil.virtual_memory().percent

        # Record elapsed time and stats
        timestamps.append(time.time() - start_time)
        cpu_percentages.append(cpu_percent)
        memory_percentages.append(memory_percent)

        # Print stats (optional)
        print(f"CPU Usage: {cpu_percent}% | Memory Usage: {memory_percent}%")

    # Plot the data
    plt.figure(figsize=(10, 5))
    plt.plot(timestamps, cpu_percentages, label='CPU Usage (%)', color='blue')
    plt.plot(timestamps, memory_percentages, label='Memory Usage (%)', color='green')
    plt.xlabel('Time (seconds)')
    plt.ylabel('Usage (%)')
    plt.title('Server Stats During Load Test')
    plt.legend()
    plt.grid(True)
    plt.show()

# Example usage: capture server stats for 60 seconds at 1-second intervals
capture_server_stats(duration=60, interval=1)

The `psutil` library is used to capture CPU and memory usage statistics.

The `capture_server_stats` function samples CPU and memory usage at regular intervals for the specified duration.

The captured data is stored in lists (`timestamps`, `cpu_percentages`, `memory_percentages`).

The data is then plotted using `matplotlib`.

Tuesday, February 13, 2024

Recording a Script in Silk Performer-Creating Test Script in Silk Performer

Silk Performer is a powerful performance testing tool that allows developers and testers to simulate real-world user interactions with web applications, APIs, and other software systems. One of the key features of Silk Performer is its ability to record user interactions and automatically generate test scripts based on those recordings. In this blog post, we'll explore the process of recording a script in Silk Performer and creating test scripts for performance testing purposes.


Step 1: Launch Silk Performer
First, launch the Silk Performer application on your computer. Ensure that you have the necessary permissions and access rights to record scripts and perform performance tests.

Step 2: Start Recording
Once Silk Performer is running, navigate to the "Recording" section or toolbar. Here, you'll find options to start recording a new script. Click on the "Start Recording" button to initiate the recording process.

Step 3: Configure Recording Settings
Before recording your script, you may need to configure some recording settings based on your testing requirements. For example, you can specify the target application URL, choose the browser type, and configure proxy settings if needed.

Step 4: Perform User Interactions
With the recording started, perform the user interactions that you want to include in your test script. This may involve navigating through web pages, filling out forms, clicking on links or buttons, and interacting with various elements of the application.

Step 5: Stop Recording
Once you've completed the desired user interactions, stop the recording process in Silk Performer. This will finalize the recording and generate a test script based on the actions you performed during the recording session.

Step 6: Review and Customize Script

After recording, Silk Performer will generate a test script based on the recorded interactions. Take some time to review the generated script and make any necessary customizations or adjustments. You can modify script parameters, add validation checks, or include additional logic as needed.

Step 7: Save Test Script
Once you're satisfied with the test script, save it in a suitable location on your computer. It's a good practice to organize your test scripts in a logical folder structure for easy access and management.

Step 8: Execute Test Script
With the test script saved, you can now execute it in Silk Performer to perform performance testing on the target application. Silk Performer provides options to run tests with different load profiles, monitor performance metrics, and analyze test results.

Unable to view Analysis summary page properly in Load Runner

Problem: When attempting to open results in Analysis, users may encounter difficulties with the page not displaying correctly, often manifesting as black areas on the Analysis results page. This issue can be frustrating and hinder productivity.




Solution:
To rectify this problem, follow these detailed steps:

1. Open Properties: Right-click on the Analysis icon and select "Properties".

2. Navigate to Compatibility: Within the Properties dialogue box, locate and click on the "Compatibility" tab. This tab contains settings that allow you to adjust how programs run on your system.

3. Disable Display Scaling on High DPI Settings: In the Compatibility tab, find the option labeled "Disable display scaling on high DPI settings." Check this option to enable it. This setting ensures that display scaling does not interfere with the proper rendering of the Analysis summary page.

4. Apply Changes: After checking the "Disable display scaling on high DPI settings" option, click on the "Apply" button located at the bottom right corner of the Properties window. This action applies the changes you've made.

5. Confirm and Exit: Once you've applied the changes, click on the "OK" button to confirm and exit the Properties window.

6. Additional Steps for Certain Operating Systems: Depending on your operating system version, you may need to take further action. If you see an option labeled "Change high DPI settings," proceed as follows:

   - Click on "Change high DPI settings" to access the relevant settings.
   - Check the box labeled "Override high DPI scaling behavior" within the High DPI settings window.
   - Apply the changes by clicking on the "Apply" button, followed by "OK."

7. Check Screen Resolution Settings: It's essential to ensure that your screen resolution settings are optimised for proper display. Set the scale layout to 100% to avoid any scaling issues that may affect the Analysis summary page.

Sunday, February 11, 2024

How to reset the password for diagnostics server in LRE

1. Access Server Configuration Files: Locate the directory where the diagnostics server is installed. This could vary depending on the software you're using. Common locations include installation directories like "C:\Program Files\Diagnostics" or "/opt/diagnostics".

2. Find Password Configuration: Look for configuration files that store user credentials or password-related settings. These files might be named something like "config.ini", "settings.conf", or similar. Sometimes, password-related configurations are stored in a database.

3. Reset Password: Once you've located the password configuration file or settings, you'll need to find the entry for the admin user or the user whose password you want to reset. Depending on the format used in the configuration file, you may need to delete the existing password or replace it with a new one.

4. Save Changes: After making the necessary changes to reset the password, save the modifications to the configuration file.

5. Restart Diagnostics Server: If required, restart the diagnostics server software to apply the changes. This step might not be necessary for all configurations, but it's a good practice to ensure that the new password takes effect.

6. Verify Password Reset: Once the server is restarted (if necessary), try logging in with the newly reset password to ensure that it's working as expected.


If you're unsure about the specific steps or configuration files for your diagnostics server software, I'd recommend consulting the software's documentation or contacting the vendor's support for detailed instructions tailored to your setup.

Performance Testing Metrics - Client side and Server Side Metrics


Client-side Metrics in Performance Testing

Client-side metrics in performance testing focus on measuring various aspects of the client environment and user experience during the execution of a performance test. These metrics help assess the performance and responsiveness of the client-side components of an application.

Common Client-side Metrics:

  • Page Load Time: The time taken for web pages to load in the client's browser, including all page resources such as images, scripts, and stylesheets.
  • Render Time: The time taken by the browser to render the HTML content and display it on the screen.
  • Network Latency: The delay experienced by data packets as they travel between the client and the server over the network.
  • Client-side CPU and Memory Usage: The utilization of CPU and memory resources on the client device during the test.
  • Client-side Errors: The number of errors encountered by the client, such as JavaScript errors, rendering issues, or resource loading failures.
  • User Interactions: Metrics related to user interactions with the application, such as click events, form submissions, and navigation paths.
  • Client-side Cache Hits: The percentage of requests served from the client-side cache instead of fetching resources from the server.
  • Client-side API Calls: The number of API calls made by client-side scripts to fetch data or perform actions asynchronously.
  • Client-side Performance Events: Monitoring performance events triggered by the browser, such as DOMContentLoaded, load, and scroll events.
  • Client-side Resource Utilization: Tracking the usage of client-side resources, including local storage, session storage, and cookies.
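Several of these metrics, such as page load time and render time, are captured as raw timing samples and then aggregated across users. As an illustrative sketch (the sample values and the nearest-rank percentile method are assumptions, not tied to any specific monitoring tool), a set of page-load-time samples might be summarized like this in Python:

```python
import math

def summarize_load_times(samples_ms):
    """Return (average, 90th-percentile) page load time in milliseconds."""
    ordered = sorted(samples_ms)
    avg = sum(ordered) / len(ordered)
    # Nearest-rank percentile: smallest sample with at least 90% of values at or below it.
    idx = math.ceil(0.9 * len(ordered)) - 1
    return avg, ordered[idx]

# Hypothetical samples (ms) from five page loads
avg_ms, p90_ms = summarize_load_times([820, 940, 1010, 1200, 2300])
print(avg_ms, p90_ms)  # 1254.0 2300
```

Reporting a high percentile alongside the average matters because one slow outlier (2300 ms here) can hide behind a reasonable-looking mean.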

Server-side Metrics in Performance Testing

Server-side metrics in performance testing focus on measuring various aspects of the server's performance and resource utilization during the execution of a performance test. These metrics help assess the server's ability to handle different loads and identify potential bottlenecks that may impact application performance.

Common Server-side Metrics:

  • Processor Usage: The percentage of CPU utilization by the server during the test.
  • Memory Utilization: The amount of physical memory consumed by the server's processes.
  • Disk I/O: The rate of read and write operations performed by the server's disk subsystem.
  • Network Throughput: The rate at which data is transferred over the network interface.
  • Server Response Time: The time taken by the server to process and respond to client requests.
  • Database Performance: Metrics related to database operations, including query execution time, transaction throughput, and database lock contention.
  • Server Errors: The number of errors encountered by the server during the test, such as HTTP errors, database connection errors, or server timeouts.
  • Connection Pooling: The efficiency of connection pooling mechanisms in managing database connections and minimizing connection overhead.
  • Server-side Caching: The effectiveness of caching mechanisms in reducing the server's workload by serving cached content instead of generating dynamic responses.
  • Thread Pool Utilization: The utilization of server-side thread pools for handling concurrent requests.
  • Server Log Analysis: Analysis of server logs to identify performance issues, error patterns, and abnormal behavior.
  • Service Availability: The percentage of time that server services are available and responsive to client requests.
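Service availability, the last metric above, is typically derived from periodic health checks. A minimal sketch (the check results below are hypothetical):

```python
def availability_pct(check_results):
    """Percentage of health checks that succeeded (True = service responded)."""
    return 100.0 * sum(check_results) / len(check_results)

# Hypothetical results of ten periodic health checks: one failure
checks = [True] * 9 + [False]
print(availability_pct(checks))  # 90.0
```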

Saturday, February 10, 2024

Troubleshooting LDAP Protocol Issues in LoadRunner 12.6

LDAP Security Workarounds


In LoadRunner 12.6, an LDAP-protocol script may start failing after a new layer of security is added to the LDAP server, even after you have imported the necessary certificates into your Windows machine via the MMC console. If that happens, there are a few potential workarounds you can try:

  1. Update LoadRunner's Certificate Store: LoadRunner uses its own certificate store, separate from the Windows certificate store. You may need to import the certificates directly into LoadRunner's certificate store. To do this, you can use the LoadRunner Certificate Utility (certUtil), which allows you to manage certificates used by LoadRunner protocols. The utility is typically located in the bin directory of your LoadRunner installation. You can run the utility from the command line and follow the prompts to import the necessary certificates.
  2. Specify Certificate Path in Script: If importing certificates into LoadRunner's certificate store doesn't resolve the issue, you can try specifying the path to the certificate directly in your script. In your VuGen script, you can use the ldap_option function to set the LDAP_OPT_X509_CERT option, providing the path to the certificate file. This approach allows you to explicitly specify which certificate to use for establishing the LDAP connection.
  3. Verify Certificate Compatibility: Ensure that the certificates you've imported are compatible with the LDAP server's security requirements. Some LDAP servers may require specific types of certificates or certificate formats. Double-check with your Dev team or LDAP server administrator to confirm that the certificates you've imported meet the server's expectations.
  4. Check Certificate Trust: Even if the certificates are imported correctly, the LDAP server may not trust them if they're not issued by a trusted Certificate Authority (CA). Verify that the certificates are issued by a trusted CA and that the LDAP server trusts certificates from that CA. You may need to import the CA's root certificate into LoadRunner's certificate store or specify it in your script.
  5. Debug LDAP Traffic: Enable verbose logging or debug mode in your LoadRunner script to capture detailed information about the LDAP traffic. This can help identify any specific errors or issues encountered during the SSL/TLS handshake process. Analyzing the debug logs can provide insights into why the LDAP connection is failing despite importing the certificates.

By trying these workarounds and troubleshooting steps, you can hopefully resolve the issue with connecting to the LDAP server in LoadRunner version 12.6 despite the new security layer. If the problem persists, consider reaching out to Micro Focus support for further assistance.

Generative AI in Performance Testing

Generative AI can be applied in performance testing in various ways to simulate realistic user behavior, generate test data, and automate test case creation. Here are some specific areas where generative AI can be utilized in performance testing:

1. Load Testing Scenarios Generation: Generative AI algorithms can analyze production traffic patterns and automatically generate realistic load testing scenarios. This helps in simulating various user behaviors, such as concurrent user logins, transactions, searches, and other interactions, to assess system performance under different conditions.

2. Test Data Generation: Generative AI can create synthetic test data that closely resembles real-world data, including user profiles, transactions, and content. This enables performance testers to conduct tests with a diverse set of data without relying solely on production data, thus ensuring data privacy and security.

3. Scriptless Test Automation: Generative AI can assist in automating the creation of performance test scripts without the need for manual scripting. By analyzing application interfaces and user interactions, AI algorithms can generate test scripts automatically, reducing the time and effort required for test script development.

4. Dynamic Test Case Generation: Generative AI algorithms can dynamically generate test cases based on evolving system requirements, user behaviors, and performance metrics. This ensures that performance tests remain relevant and effective in identifying performance bottlenecks and scalability issues throughout the software development lifecycle.

5. Anomaly Detection and Root Cause Analysis: Generative AI can analyze performance metrics and system logs to detect anomalies and identify potential root causes of performance issues. By leveraging machine learning algorithms, AI-powered tools can learn normal system behavior and flag deviations that may indicate performance degradation or failures.

6. Predictive Performance Analytics: Generative AI can predict future performance trends and capacity requirements based on historical data and current system metrics. This helps organizations proactively optimize system resources, scale infrastructure, and mitigate performance risks before they impact end users.
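The test-data generation idea in point 2 above can be made concrete with a deliberately simple sketch using only the Python standard library. A real generative-AI approach would learn field distributions from production data; here the names, email domain, and age range are invented placeholders:

```python
import random
import string

def make_user_profile(rng):
    """Generate one synthetic user profile (all field values are made up)."""
    name = "".join(rng.choices(string.ascii_lowercase, k=8))
    return {
        "username": name,
        "email": f"{name}@example.com",
        "age": rng.randint(18, 80),
    }

rng = random.Random(42)  # fixed seed so the generated data is reproducible
profiles = [make_user_profile(rng) for _ in range(100)]
print(profiles[0])
```

Even this naive generator gives each virtual user distinct credentials and attributes, avoiding the cache-friendly bias of replaying one production record thousands of times.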


Thursday, February 8, 2024

Ignore HTTP errors in LoadRunner - Ignore errors in vugen script

When a Virtual User (Vuser) encounters an HTTP error during script execution, it typically results in the termination of the script and is reported as a failure. This behavior can be problematic, especially in scenarios where encountering certain HTTP errors is expected behavior or where the primary focus is on other aspects of the test.

LoadRunner provides a straightforward mechanism to configure HTTP errors as warnings, thereby allowing scripts to continue execution even when encountering such errors. This is achieved using the web_set_rts_key function to set the HttpErrorsAsWarnings runtime setting key to a value of 1. By enabling this setting, HTTP errors will be marked as warnings instead of causing the script to fail.


Here is the Syntax:

web_set_rts_key("key=HttpErrorsAsWarnings", "Value=1", LAST);
  • The web_set_rts_key function sets a runtime setting key from within the script. "key=HttpErrorsAsWarnings" identifies the setting being configured: marking HTTP errors as warnings.
  • "Value=1" enables the setting, so HTTP errors are logged as warnings instead of failing the script; setting "Value=0" restores the default behavior.
  • LAST marks the end of the argument list for this function call.

Consider a scenario where a LoadRunner script needs to access a URL that occasionally returns a 404 HTTP status code. Instead of failing the script when this error occurs, we can configure it to be treated as a warning.

Example:

web_set_rts_key("key=HttpErrorsAsWarnings", "Value=1", LAST);

web_url("jquery.unobRE-ajax.js", 
        "URL=https://easyloadrunner.blogspot.com/", 
        "Resource=1", 
        "RecContentType=application/javascript", 
        "Referer=http://easyloadrunner.com/", 
        "Snapshot=t2.inf", 
        LAST);
web_set_rts_key("key=HttpErrorsAsWarnings", "Value=0", LAST); 

In addition to treating all HTTP errors as warnings, LoadRunner allows for more granular control by specifying block lists or allow lists for specific HTTP error codes. This enables testers to customize the behavior based on project requirements. For instance, you may want to treat certain error codes as warnings while still failing the script on others.

Example:

// Block List Example
web_set_rts_key("key=HttpErrorsAsWarnings", "Value=1", LAST);
web_set_rts_key("key=BlockList", "Value=404,400,501", LAST);


// Allow List Example
web_set_rts_key("key=HttpErrorsAsWarnings", "Value=1", LAST);
web_set_rts_key("key=AllowList", "Value=1**,3**,401", LAST);
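The exact wildcard semantics LoadRunner applies to these lists are not spelled out above, but a pattern like "3**" is commonly read as "any status code starting with 3". Under that assumption, the matching logic can be sketched as:

```python
def status_matches(code, pattern):
    """True if a 3-digit HTTP status code matches a pattern like '3**' or '401'.

    Assumes '*' stands for any single digit (an assumption, not LoadRunner's
    documented behavior).
    """
    code_str = str(code)
    if len(code_str) != len(pattern):
        return False
    return all(p == "*" or p == c for p, c in zip(pattern, code_str))

def in_list(code, patterns):
    """Check a status code against a comma-separated pattern list."""
    return any(status_matches(code, p.strip()) for p in patterns.split(","))

print(in_list(302, "1**,3**,401"))  # True  (matches 3**)
print(in_list(404, "1**,3**,401"))  # False (no pattern matches)
```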

Pacing Calculator | Pacing Calculator Loadrunner


Additional Information:

Since the residual time (R), the portion of the test window left over after subtracting the total time needed for all iterations, is negative, the test window is insufficient to complete all iterations within the provided duration. The iterations therefore cannot be paced evenly within the given time frame, and the pacing calculation does not yield a meaningful result.

However, if you were to interpret the result literally, it would imply a pacing interval of approximately -122.928 seconds per iteration, which is not practically feasible. Adjust the test duration or the number of iterations to ensure that the test can be completed within a realistic time frame.
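The calculator's inputs are not reproduced above, but the usual residual-time calculation behind such tools can be sketched as follows (the duration, iteration count, and iteration time below are made-up numbers, not the ones that produced -122.928):

```python
def pacing_gap_seconds(duration_s, iterations_per_vuser, iteration_time_s):
    """Residual time per iteration once each vuser's iterations are accounted for.

    R = duration - iterations * iteration_time; the gap to insert between
    iterations is R / iterations. A negative gap means the window is too short.
    """
    residual = duration_s - iterations_per_vuser * iteration_time_s
    return residual / iterations_per_vuser

# Hypothetical numbers: a 1-hour window, 20 iterations per vuser,
# each iteration taking 200 s of pure script time.
gap = pacing_gap_seconds(3600, 20, 200)
print(gap)  # -20.0  (the window is 400 s too short overall)
```

With 10 iterations instead of 20, the same window leaves a positive gap of 160 s per iteration, which is a feasible pacing value.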

Wednesday, February 7, 2024

Six Sigma White Belt Certification for Free ($0)


Are you looking to enhance your skills and credentials in the field of Six Sigma? Look no further! The Council for Six Sigma Certification (CSSC) offers a comprehensive White Belt Certification program designed to validate your understanding of the fundamental principles of Six Sigma methodology. In this blog post, we'll explore the certification process, its benefits, and how you can get started on your journey towards becoming a certified Six Sigma professional.


Why Choose CSSC for Your Six Sigma White Belt Certification?
CSSC is the official industry standard for Six Sigma accreditation, providing individuals with a reputable and recognized certification. Our White Belt Certification program is ideal for beginners or professionals looking to refresh their knowledge of Six Sigma basics.


Certification Paths
CSSC offers two different paths to earning your Six Sigma White Belt Certification, allowing you to choose the option that best fits your learning style and preferences:

1. Standard Exam Path: This path is suitable for individuals who are already familiar with the "Body of Knowledge" covered in the White Belt Certification. The standard exam consists of 30 questions and is open-book format, with a duration of 30 minutes. Best suited for those seeking a single level of certification and comfortable completing their exam within the allotted time.

2. Self-Paced Exam Path: For candidates who prefer a more flexible approach, the self-paced exam option allows you to progress through several short exams as you cover different sections of the curriculum. This option is ideal for those utilizing free self-study guides provided by CSSC and looking to earn various levels of "Belts" without incurring additional examination fees.

Certification Requirements:
To obtain your CSSC Certified Six Sigma White Belt (CSSC-CSSWB) designation, you must successfully complete the certification exam and achieve a minimum score of 56 points out of 80. There are no prerequisites for taking the exam, and no project requirement for this level of certification.

Benefits of Certification:
Upon meeting the certification requirements, you'll receive an official CSSC Six Sigma White Belt Certification, recognized globally in the industry. Our certifications have no expiration date, allowing you to showcase your expertise indefinitely. Plus, you'll be added to the Council for Six Sigma Certification Official Register, providing verifiable proof of your certification.

Getting Started:
Ready to embark on your Six Sigma certification journey? Register for the Standard Exam or explore the Self-Paced Exam option on the Council for Six Sigma Certification website. Don't forget to review the free self-study guides and the corresponding "Body of Knowledge" to prepare for the exam effectively.

Conclusion:
Earning your Six Sigma White Belt Certification with CSSC is a valuable investment in your professional development. Whether you're new to Six Sigma or seeking to advance your career, our certification program offers the knowledge and recognition you need to succeed in the industry.


Take the first step towards becoming a certified Six Sigma professional today with CSSC!

Register for the Standard Exam

Tracking File Processing Time on a Remote Server with JMeter | Remote machine file processing times by JMeter

In many applications, files are processed on remote servers as part of their workflow. Monitoring how long files take to be processed is crucial for performance evaluation and optimization. In this blog post, we'll explore how to track the processing time of files on a remote server using Apache JMeter.


Prerequisites:

Before getting started, make sure you have the following:
- Apache JMeter installed
- SSH Protocol Support plugin installed in JMeter

Step 1: Setting Up the Test Plan
  • Adding SSH Command Samplers: We'll use the SSH Command Sampler in JMeter to execute commands on the remote server. Add SSH Command Samplers to the test plan for capturing start time, executing the processing command, and capturing end time.
  • Configuring SSH Connection: Configure the SSH connection details such as hostname, port, username, and password or SSH key.
  • Defining Processing Command: Define the command to process the file and move it to another folder on the remote server within the SSH Command Sampler.

Step 2: Capturing Start Time

Pre-processing: Before executing the processing command, capture the start time using a pre-processor or by executing a command to get the current timestamp.

Step 3: Executing Processing Command

Executing Command: Execute the command to process the file and move it to another folder on the remote server using the SSH Command Sampler.

Step 4: Capturing End Time

Post-processing: After executing the processing command, capture the end time using a post-processor or by executing a command to get the current timestamp.

Step 5: Calculating Processing Time

Calculate the processing time by subtracting the start time from the end time.

Here's how you can modify the JMeter test plan:

Test Plan
  Thread Group
    SSH Command Sampler (Pre-processor)
      Command: date +%s > /path/to/start_time.txt
    SSH Command Sampler
      Server Name: your_server_hostname
      Port: your_ssh_port
      Username: your_ssh_username
      Password: your_ssh_password_or_key
      Command: cd /path/to/source_folder && your_processing_command && mv processed_file /path/to/destination_folder
    SSH Command Sampler (Post-processor)
      Command: date +%s > /path/to/end_time.txt
    View Results Tree (or other listeners)


Replace placeholders like `your_server_hostname`, `your_ssh_port`, `your_ssh_username`, `your_ssh_password_or_key`, `/path/to/source_folder`, `your_processing_command`, and `/path/to/destination_folder` with your actual server details and commands.

After running the test plan, you can read the start time and end time from the files `/path/to/start_time.txt` and `/path/to/end_time.txt`, respectively, and calculate the processing time. You can use the `SSH Command Sampler` to read the contents of these files and extract the timestamps. Then, use a post-processor or a custom script to calculate the processing time.
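Once `/path/to/start_time.txt` and `/path/to/end_time.txt` have been fetched from the remote server (for example with another SSH Command Sampler or scp), the delta is simple arithmetic. A sketch in Python, using temporary local files to stand in for the remote paths:

```python
import os
import tempfile

def processing_time_seconds(start_file, end_file):
    """Read epoch-second timestamps written by `date +%s` and return end - start."""
    with open(start_file) as f:
        start = int(f.read().strip())
    with open(end_file) as f:
        end = int(f.read().strip())
    return end - start

# Local stand-ins for the files the samplers wrote on the remote server
tmp = tempfile.mkdtemp()
start_path = os.path.join(tmp, "start_time.txt")
end_path = os.path.join(tmp, "end_time.txt")
with open(start_path, "w") as f:
    f.write("1700000000\n")
with open(end_path, "w") as f:
    f.write("1700000042\n")
print(processing_time_seconds(start_path, end_path))  # 42
```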

Happy Testing !

Monday, February 5, 2024

How to pass token value from one thread group to another thread group in Jmeter?

In JMeter, Thread Groups execute independently, so variables defined in one Thread Group are not visible in another. You can work around this with JMeter properties, which are shared across the entire test (JVM-wide).
One way is to set a property in the first Thread Group (with props.put in a script, or the __setProperty function) and then read it in the second (with props.get, or the __P function).

Example:

1. In Thread Group 1, use a BeanShell Sampler or JSR223 Sampler with the following script to set a property:

```java
props.put("myToken", "yourTokenValue");
```
Replace "yourTokenValue" with the actual token value.

2. In Thread Group 2, use a BeanShell Sampler or JSR223 Sampler with the following script to retrieve the property:

```java
String tokenValue = (String) props.get("myToken"); // cast needed: props.get returns Object
```

Now, `tokenValue` will contain the token value you set in Thread Group 1.


Remember to set the language of the script accordingly (e.g., choose BeanShell or Groovy) based on your preference. Also ensure the Thread Groups run in the right order, for example by enabling "Run Thread Groups consecutively" on the Test Plan, so the property is set before it is read.

Thursday, February 1, 2024

Database Performance Testing - Non functional Requirements gathering (NFR gathering)

When conducting Non-Functional Requirements (NFR) checks for database performance testing, consider the following questions:

1. Response Time: What is the expected response time for different types of queries and transactions?

2. Throughput: How many transactions or queries should the database handle per unit of time?

3. Concurrency: What level of concurrent users or transactions should the database support without degradation in performance?

4. Scalability: How well does the database scale with an increase in data volume, users, or transactions?

5. Resource Utilization: What are the acceptable levels of CPU, memory, and disk utilization for the database under different load scenarios?

6. Isolation Levels: How are isolation levels configured, and do they meet the application's requirements for consistency and performance?

7. Indexing Strategy: Is the indexing strategy optimized to improve query performance?

8. Data Partitioning: Are there effective data partitioning strategies in place to distribute data evenly and enhance performance?

9. Caching Mechanisms: Are caching mechanisms employed, and do they align with the performance requirements?

10. Backup and Recovery: How does the database perform during backup and recovery operations, and are they within acceptable time frames?

11. Data Archiving: Is there a strategy for archiving historical data, and how does it impact database performance?

12. Query Optimization: Have queries been optimized, and are there mechanisms in place to identify and address poorly performing queries?

13. Network Latency: How does network latency affect database performance, and what measures are in place to mitigate its impact?

14. Security Measures: How do security features, such as encryption and access controls, impact database performance?

15. Failover and Redundancy: How quickly can the database failover in case of a primary node failure, and what impact does this have on performance?

16. Data Purging and Retention: Is there a strategy for data purging, and how does it affect overall database performance?

17. Compliance Requirements: Does the database meet any regulatory or compliance requirements affecting performance?

18. Benchmarking: Have you benchmarked the database performance under realistic conditions to identify potential bottlenecks?

By addressing these questions, you can establish a comprehensive set of Non-Functional Requirements for your database performance testing.
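The first three questions (response time, throughput, concurrency) are not independent: they are linked by Little's Law, concurrency = throughput × (response time + think time). A sketch for turning business volumes into test targets (the NFR numbers below are hypothetical, not from any real requirement):

```python
def target_tps(transactions_per_hour):
    """Convert a business volume per hour into transactions per second."""
    return transactions_per_hour / 3600.0

def required_vusers(tps, response_time_s, think_time_s=0.0):
    """Little's Law: concurrent users needed to sustain a target throughput."""
    return tps * (response_time_s + think_time_s)

# Hypothetical NFR: 18,000 transactions/hour, 2 s response time, 8 s think time
tps = target_tps(18_000)
print(tps)                         # 5.0
print(required_vusers(tps, 2, 8))  # 50.0
```

Working the numbers this way early in NFR gathering exposes contradictions, for instance a throughput target that the stated concurrency could never sustain at the agreed response time.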