Saturday 27 April 2024
How to Save the Downloaded File to the Local System in LoadRunner
Python Script for Dynamically Generating LoadRunner Transaction Names | Automate Transaction Names in LoadRunner
Batch Job Performance Testing | Batch Job Performance Testing with JMeter
Key metrics to measure:
- Throughput: Number of tasks or commands completed in a specified time period.
- Latency: Time from when a task or command is submitted until it completes.
- Utilization: Percentage of available resources (e.g. CPU, memory, disk, network) used during the batch process.
- Error Rate: Percentage of tasks or commands that fail or produce incorrect results.
- Additional Metrics: May include cost, reliability, scalability, or security, depending on your objectives.
Techniques for observing the batch process:
- Logging: Record events and messages generated by the batch process in a file or database.
- Monitoring: Observe and measure performance and status of the batch process and resources used in real time or periodically.
- Profiling: Identify and measure time and resources consumed by each part or function of the batch process.
- Auditing: Verify and validate output and results of the batch process against expected standards and criteria.
Optimization techniques:
- Parallelization: Divide the batch process into smaller, independent tasks or commands that run simultaneously on multiple processors or machines.
- Scheduling: Determine optimal time and frequency to run the batch process based on demand, availability, priority, or dependency of tasks or commands.
- Tuning: Adjust parameters, settings, or options of the batch process, resources, or environment to enhance performance and quality.
- Testing: Evaluate and compare performance and results before and after applying optimization techniques to ensure they meet expectations and requirements.
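The core metrics above can be computed from simple per-task records. Here is a minimal Python sketch (the data model and function name are illustrative, not part of any tool):

```python
def batch_metrics(tasks, window_seconds):
    """Compute throughput, average latency, and error rate for a batch run.

    tasks: non-empty list of dicts with 'duration' (seconds) and 'ok' (bool).
    window_seconds: length of the measurement window.
    """
    completed = [t for t in tasks if t["ok"]]
    throughput = len(completed) / window_seconds                  # tasks per second
    avg_latency = sum(t["duration"] for t in tasks) / len(tasks)  # seconds
    error_rate = 100.0 * (len(tasks) - len(completed)) / len(tasks)
    return throughput, avg_latency, error_rate

# Example: two tasks observed over a 10-second window, one of which failed.
tp, lat, err = batch_metrics(
    [{"duration": 2.0, "ok": True}, {"duration": 4.0, "ok": False}], 10.0)
```

In practice these records would come from the batch scheduler's logs or a monitoring agent rather than an in-memory list.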
Thursday 25 April 2024
How to execute a Python script in JMeter? | Python scripts with Apache JMeter
You can execute Python scripts in JMeter using the JSR223 Sampler, which supports scripting languages such as Groovy, JavaScript, and Python. The sampler itself is part of JMeter's core functionality, but JMeter bundles only Groovy by default; to script in Python you need the Jython standalone JAR on JMeter's classpath (typically in its lib directory). Here's how you can execute a Python script in JMeter using the JSR223 Sampler:
1. Add a Thread Group: First, add a Thread Group to your test plan by right-clicking on the test plan and selecting *Add > Threads (Users) > Thread Group*. Configure the Thread Group settings as per your requirements.
2. Add a JSR223 Sampler: Within the Thread Group, add a JSR223 Sampler by right-clicking on the Thread Group and selecting *Add > Sampler > JSR223 Sampler*.
3. Configure the JSR223 Sampler:
- Name: Give the sampler a descriptive name.
- Language: Select the "python" (Jython) entry from the language dropdown.
- Script: In the script box, enter the Python code you want to execute. You can either write your script directly in the script box or reference an external Python file.
4. Use External Python Files (Optional):
- If you want to use an external Python file, you can import it in the script box using the `import` statement.
- Ensure the Python file is reachable from the script; with Jython this typically means appending the file's directory to `sys.path` before importing.
5. Add Listeners (Optional): You can add listeners like View Results Tree or Summary Report to view the output of the sampler.
6. Run the Test Plan: Execute the test plan by clicking on the green play button at the top of the JMeter interface. The Python script will be executed as part of the test plan.
Here is an example of how you might use the JSR223 Sampler to execute a Python script:
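A minimal script along these lines might look like the following (illustrative; inside JMeter the sampler also exposes bindings such as `log`, `vars`, and `SampleResult`, for which plain `print` stands in here):

```python
# Sketch of a script you might paste into the JSR223 Sampler's script box
# with the Python (Jython) language selected.
def simple_calculation(a, b):
    """Return the sum and product of two numbers."""
    return a + b, a * b

total, product = simple_calculation(7, 6)
print("Hello from the JSR223 Python sampler")   # in JMeter: log.info(...)
print("sum=%d, product=%d" % (total, product))
```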
This script prints a message and performs a simple calculation. You can expand on this script to include more complex operations or integrate it with other parts of your JMeter test plan.
Alternatively, you can run a Python script as an external process using the OS Process Sampler:
- Add a Thread Group: First, add a Thread Group to your test plan by right-clicking on the test plan and selecting Add > Threads (Users) > Thread Group. Configure the Thread Group settings according to your needs.
- Add an OS Process Sampler: Within the Thread Group, add an OS Process Sampler by right-clicking on the Thread Group and selecting Add > Sampler > OS Process Sampler.
- Configure the OS Process Sampler:
- Name: Give the sampler a descriptive name.
- Command: Specify the command to execute the Python script. You will need to provide the path to the Python executable followed by the path to your Python script.
- Arguments: If your Python script requires any command-line arguments, provide them here.
- Working Directory: Set the working directory (optional), which is the directory where the script will execute.
- For example, if you want to run a Python script called my_script.py located at /path/to/script/ using the Python interpreter located at /usr/bin/python3, your configuration would look like this:
- Command: /usr/bin/python3
- Arguments: /path/to/script/my_script.py
- Working Directory: /path/to/script/ (optional)
- Add Listeners (Optional): You can add listeners such as View Results Tree or Summary Report to view the output of the OS Process Sampler.
- Run the Test Plan: Execute the test plan by clicking on the green play button at the top of the JMeter interface. The Python script will be executed as part of the test plan.
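For reference, here is a hypothetical my_script.py that the OS Process Sampler configuration above could invoke (the greeting logic is purely illustrative):

```python
# my_script.py - illustrative script launched by the OS Process Sampler.
# Whatever it writes to stdout becomes the sampler's response data in JMeter.
import sys

def greet(name):
    """Build the message printed to stdout."""
    return "Hello, %s" % name

if __name__ == "__main__":
    # An optional first command-line argument (set in the sampler's
    # Arguments field) overrides the default name.
    target = sys.argv[1] if len(sys.argv) > 1 else "JMeter"
    print(greet(target))
```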
Token Management in JMeter for Repeated Requests
Monday 8 April 2024
Export the comments for the transaction controller in JMeter to a CSV file
Wednesday 27 March 2024
Integrating Tosca with Neoload - Step-by-Step Guide
Wednesday 20 March 2024
JMeter Integration with Jenkins - CI/CD | JMeter Continuous Integration with Jenkins
Performance Plugin - Installation:
JMeter Installation:
To execute JMeter from Jenkins and integrate the test results, follow these steps:
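The heart of the integration is a Jenkins build step that runs JMeter in non-GUI mode. A sketch of that invocation, built in Python for illustration (the test-plan and results file names are placeholders):

```python
# Build the non-GUI JMeter command a Jenkins "Execute shell" step would run.
# -n = non-GUI mode, -t = test plan, -l = results (JTL) file.
import subprocess

def jmeter_command(test_plan, results_file):
    """Return the JMeter CLI invocation as an argument list."""
    return ["jmeter", "-n", "-t", test_plan, "-l", results_file]

cmd = jmeter_command("testplan.jmx", "results.jtl")
# Equivalent shell step in Jenkins:
#   jmeter -n -t testplan.jmx -l results.jtl
# subprocess.run(cmd, check=True)  # uncomment on an agent with JMeter on PATH
```

The resulting results.jtl file is what the Performance Plugin picks up to publish trend reports in Jenkins.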
Monday 11 March 2024
Performance Testing Job description and Performance tester Roles and Responsibilities
- Performance Testing or Performance Tuning
- Tools: NeoLoad, JProfiler, JMeter, YourKit, LoadRunner, Load Balancer, Gatling
- Monitoring Tools: Splunk, Dynatrace, APM
- Databases: Oracle, SQL
- Robotics RPA
- AI/ML
- Gen AI
- Programming: Java, C
- Cloud: AWS, GCP, Azure
- Wireshark: A popular network protocol analyzer used for capturing and analyzing network traffic in real-time. It helps in diagnosing network issues and understanding communication patterns between systems.
- Fiddler: An HTTP debugging proxy tool that captures HTTP traffic between a computer and the internet. It allows performance testers to inspect and modify HTTP requests and responses, analyze network traffic, and diagnose performance issues.
- iperf: A command-line tool used for measuring network bandwidth, throughput, and latency. It can generate TCP and UDP traffic to assess network performance between servers or devices.
- Netcat (nc): A versatile networking utility that can read and write data across network connections using TCP or UDP protocols. It's commonly used for port scanning, network troubleshooting, and performance testing.
- Ping and Traceroute: Built-in command-line utilities available in most operating systems for testing network connectivity and diagnosing network-related issues. Ping measures round-trip time and packet loss to a destination host, while Traceroute identifies the path packets take to reach the destination.
- Nmap: A powerful network scanning tool used for discovering hosts and services on a network, identifying open ports, and detecting potential security vulnerabilities. It's useful for assessing network performance and identifying potential bottlenecks.
- TCPdump: A command-line packet analyzer that captures and displays TCP/IP packets transmitted or received over a network interface. It's commonly used for troubleshooting network issues and analyzing network traffic patterns during performance testing.
- Introduction to Performance Testing
- Understanding Performance/Non-functional Requirements
- Building High-Performing and Resilient Products
- Designing Workload Models and Performance Scenarios
- Defining KPIs and SLAs
- Profiling and Optimization Techniques
- Influencing Architectural Decisions for Performance
- Benchmarking and Sizing
- End-to-End Load Testing and Resiliency Testing
- Creating and Maintaining Performance Test Scripts
- Generating Performance Reports and Analytics
- Mitigating Performance Issues
- Strategies for In-Sprint Performance Testing
- Qualifications and Skills Required
- Additional Responsibilities and Skills
- Conclusion: Elevating Performance Testing in Your Organisation
- Familiarity with CI/CD pipelines and integration of performance tests into the development process.
- Knowledge of microservices architecture and performance testing strategies specific to it.
- Experience with containerization technologies such as Docker and orchestration tools like Kubernetes for performance testing in containerized environments.
- Proficiency in scripting languages like Python for automation and customization of performance tests.
- Understanding of web technologies such as HTML, CSS, JavaScript, and frameworks like React or Angular for frontend performance testing.
- Experience with backend technologies such as Node.js, .NET, or PHP for backend performance testing.
- Knowledge of API testing and performance testing of RESTful APIs using tools like Postman or SoapUI.
- Familiarity with database performance testing tools such as SQL Profiler, MySQL Performance Tuning Advisor, or PostgreSQL EXPLAIN.
- Experience with visualization tools like Grafana, Kibana, or Tableau for analyzing performance metrics and trends.
- Understanding of security testing concepts and tools for assessing application security during performance testing.
- Knowledge of networking fundamentals, protocols (HTTP, TCP/IP), and network performance testing tools like Wireshark or Fiddler.
- Experience with mobile performance testing tools and platforms for assessing the performance of mobile applications on different devices and networks.
- Proficiency in version control systems like Git for managing test scripts and configurations.
- Apache JMeter: While primarily known as a client-side performance testing tool, JMeter can also be used to simulate server-side behavior by creating HTTP requests to backend services and APIs. It's versatile and can handle various server-side protocols like HTTP, HTTPS, SOAP, REST, JDBC, LDAP, JMS, and more.
- Apache Bench (ab): A command-line tool for benchmarking web servers by generating a high volume of HTTP requests to a target server. It's useful for measuring server performance under load and stress conditions, assessing response times, throughput, and concurrency levels.
- Siege: Another command-line benchmarking tool similar to Apache Bench, Siege allows testers to stress test web servers and measure their performance under different load scenarios. It supports HTTP and HTTPS protocols and provides detailed performance metrics and reports.
- wrk: A modern HTTP benchmarking tool designed for measuring web server performance and scalability. It supports multithreading and asynchronous requests, making it efficient for generating high concurrency loads and assessing server performance under heavy traffic.
- Gatling: While primarily a client-side load testing tool, Gatling can also be used for server-side performance testing by simulating backend services and APIs. It's highly scalable, supports scripting in Scala, and provides detailed performance metrics and real-time reporting.
- Apache HTTP Server (Apache): An open-source web server widely used for hosting websites and web applications. Performance testers can analyze Apache server logs, monitor server resource usage, and optimize server configuration settings for better performance.
- NGINX: Another popular open-source web server and reverse proxy server known for its high performance, scalability, and efficiency in handling concurrent connections and serving static and dynamic content. Performance testers can assess NGINX server performance using access logs, monitoring tools, and performance tuning techniques.
- Tomcat: An open-source Java servlet container used for deploying Java-based web applications. Performance testers can analyze Tomcat server logs, monitor JVM metrics, and optimize server settings to improve application performance and scalability.
- Node.js: A runtime environment for executing JavaScript code on the server-side, commonly used for building scalable and high-performance web applications. Performance testers can analyze Node.js application logs, monitor event loop performance, and optimize code for better throughput and response times.
- WebPageTest: A free online tool that performs website performance testing from multiple locations around the world using real browsers (such as Chrome, Firefox, and Internet Explorer). It provides detailed performance metrics, waterfall charts, and filmstrip views to analyze page load times, resource loading behavior, and rendering performance.
- Google Lighthouse: An open-source tool for auditing and improving the quality and performance of web pages. It provides performance scores based on metrics like First Contentful Paint (FCP), Largest Contentful Paint (LCP), Total Blocking Time (TBT), and Cumulative Layout Shift (CLS). Lighthouse also offers recommendations for optimizing performance, accessibility, SEO, and best practices.
- GTmetrix: A web-based performance testing tool that analyzes website speed and provides actionable recommendations for optimization. It measures key performance indicators like page load time, total page size, and the number of requests. GTmetrix also offers features for monitoring performance trends over time and comparing performance against competitors.
- YSlow: A browser extension developed by Yahoo! that analyzes web pages and suggests ways to improve their performance based on Yahoo's performance rules. It provides grades (A to F) for various performance metrics and recommendations for optimizing factors like caching, minification, and image optimization.
- PageSpeed Insights: A tool by Google that analyzes the performance of web pages on both mobile and desktop devices. It provides performance scores and specific recommendations for improving page speed, such as optimizing images, leveraging browser caching, and minimizing render-blocking resources.
- Chrome DevTools: A set of web development and debugging tools built into the Google Chrome browser. DevTools includes features like Network panel for monitoring network activity, Performance panel for analyzing page load performance, and Lighthouse integration for auditing web page quality and performance.
- Selenium WebDriver: A popular automation tool for testing web applications by simulating user interactions in web browsers. Performance testers can use Selenium to automate repetitive tasks, measure page load times, and assess the responsiveness of web pages under different load conditions.
- Puppeteer: A Node.js library for controlling headless Chrome and Chromium browsers. It allows performance testers to automate browser tasks, capture performance metrics, and generate screenshots or videos of web page interactions. Puppeteer is useful for testing Single Page Applications (SPAs) and performing scripted user journeys.
- Web Vitals: A set of essential performance metrics introduced by Google to measure and improve the user experience of web pages. Web Vitals include metrics like Largest Contentful Paint (LCP), First Input Delay (FID), and Cumulative Layout Shift (CLS), which focus on loading performance, interactivity, and visual stability.
- Hands-on experience in LoadRunner/Performance Center (PC) is a MUST. Experience with open-source tools like JMeter, Gatling, etc., is a plus.
- Hands-on experience with monitoring tools such as Dynatrace, Splunk, AppDynamics, etc.
- Experience in Performance Bottleneck Analysis, especially using Splunk.
- Strong experience in creating performance test strategy, design, planning, workload modeling, and eliciting non-functional requirements for testing.
- Hands-on experience in Cloud technologies like AWS, GCP, and Azure.
- Proficiency in profiling and performance monitoring.
- Experience with microservices architecture, Docker, Kubernetes, Jenkins, Jira, and Confluence is a must.
- Good experience in providing Performance Tuning Recommendations.
- Hands-on experience in at least one Object-Oriented Programming Language (Java, Python, C++, etc.) is beneficial.
- Expertise in Performance Test Management and Estimations.
- Hands-on experience in shift-left Agile testing.
- Experience in the Airline domain is advantageous.
- Prior Customer-Facing Experience is required.
- Experience in Mobile Performance Testing is beneficial.
- Ability to recommend test methodologies, approaches for testing, and improving application performance.
- Experience with SQL and application performance tools.
- Good Communication and interpersonal skills are required.
- Technical Performance Tester with 5+ years of experience.
- Excellent written and verbal communication skills.
- Strong consulting and stakeholder management skills.
- Expertise in the LoadRunner suite (Professional/Cloud/Enterprise) and open-source tools for load testing, performance testing, and stress testing.
- Ability to explain the value and processes of performance testing to projects.
- Proficiency in providing expert performance testing advice, guidance, and suggestions.
- Experience in application and database performance tuning/engineering.
- Knowledge of performance testing with CI/CD.
- Experience in performance testing with cloud infrastructure.
- Expert knowledge of various performance testing tools like NeoLoad, JProfiler, JMeter, YourKit, and LoadRunner.
- Expertise in Web Browser Profiling and Web Browser Rendering Optimizations.
- Experience in End-to-End Performance Engineering across all tiers.
- Experience with Backend technologies like Oracle, SQL.
- Experience analyzing and interpreting large volumes of data using tools like Splunk, APM, Dynatrace.
- Additional skills: expertise in benchmark testing, performance analysis, and tuning; experience building 3-tier applications using modern technology; troubleshooting and resolution of performance issues in production.
- Strong analytical and problem-solving skills, creative thinking.
- Enthusiasm, influence, and innovation.
- Ability to lead technical accomplishments and collaborative project team efforts.
- Customer focus and quality mindset.
- Ability to work in an agile environment, in cross-functional teams, delivering user-oriented products in a fast-paced environment.
- Experience conducting performance testing for applications deployed on cloud platforms.
- Understanding of performance testing strategies specific to microservices architecture.
- Familiarity with integrating performance testing into CI/CD pipelines for automated and continuous testing.
- 4-8 years of experience in performance testing with a focus on LoadRunner.
- Strong expertise in SNOW (ServiceNow) testing, including ATF (Automated Test Framework).
- Experience in designing, implementing, and maintaining automated performance tests.
- Knowledge of CI/CD pipelines and integration of performance tests into the development process.
- Strong analytical and problem-solving skills.
- Excellent communication and collaboration skills.
- Proven ability to work effectively in a team environment.
- 7-10 years of experience in automation testing.
- Passionate about QA and breaking software systems.
- Good knowledge of Software Testing concepts and methodology.
- Good understanding of Web fundamentals (MVC, HTML, JavaScript, CSS, Server-Side Programming, and Databases).
- Good programming skills (Java/Ruby/Python/Node.js/JavaScript).
- Good knowledge of Object-Oriented Programming concepts and Design Patterns.
- Hands-on knowledge of Test Automation approaches, Frameworks, and Tools.
- Hands-on experience in UI, API, and Cross-browser Testing Tools and Automation.
- Hands-on knowledge of Continuous Integration & Continuous Deployment and CI tools (Jenkins/Travis/TeamCity).
- Good Analytical and Problem-Solving Skills.
- Familiarity with other performance testing tools and technologies.
- Experience in performance testing of cloud-based applications and services.
- Experience with other monitoring and visualization tools for performance testing.
Roles and Responsibilities:
- Understand performance/non-functional requirements closely with Engineering teams, focusing on application and infrastructure areas.
- Assist in building high-performing and resilient products by utilizing instrumentation, automation frameworks, and software tools for complex simulations in an agile, fast-paced environment.
- Design workload models, define performance scenarios and test conditions, and establish KPIs and SLAs for the application under test.
- Use profiling tools to identify hotspots for resource usage and develop optimizations to enhance performance and scalability.
- Influence architectural decisions to deliver high-performing software, identify performance bottlenecks, and make optimization recommendations.
- Take responsibility for benchmarking and sizing, participating in feature and system design throughout the software development lifecycle to ensure high performance.
- Perform end-to-end load testing and resiliency testing.
- Create and maintain performance test scripts using tools like NeoLoad, LoadRunner, JProfiler, JMeter, and others.
- Generate performance status reports and analytics trends for current releases and production.
- Partner with key stakeholders to mitigate performance issues and ensure timely delivery of projects.
- Develop strategies with delivery teams for in-sprint performance testing and engineering.
- Lead and execute performance testing initiatives across web, backend, mobile platforms, databases, and ETL processes.
- Design, develop, execute, and maintain comprehensive performance test scripts using JMeter.
- Collaborate closely with cross-functional teams to analyze system requirements and translate them into effective performance test strategies.
- Conduct load and stress tests to identify and address performance bottlenecks.
- Analyze performance metrics and collaborate with teams to pinpoint performance issues and areas for improvement.
- Evaluate application scalability and recommend enhancements.
- Work closely with development and operations teams to implement performance enhancements.
- Maintain comprehensive documentation of performance test plans, scripts, results, and recommendations.
- Bachelor's degree in computer science, engineering, or a related field.
- A minimum of 10 years of experience in performance testing and quality assurance.
- Strong analytical and problem-solving skills.
- Excellent communication and collaboration skills.
- Proactive and results-driven Performance Testing Lead with a strong background in performance testing and a keen interest in the banking domain.
- Proven expertise in performance testing tools like LoadRunner and JMeter.
- Proficiency in monitoring tools including AppDynamics, Grafana, and Kibana.
- Preferably, experience in the banking domain with a strong understanding of its unique performance testing requirements.
- Hands-on experience in microservices and API performance testing.
- Proficiency in Java scripting (preferred).
- Strong analytical and problem-solving skills.
- Excellent communication skills.
- Detail-oriented and organized.
- 4-8 years of experience in performance testing and engineering.
- Expertise in JMeter (primary) and other performance tools like NeoLoad, BlazeMeter, Gatling, LoadRunner, etc.
- Ability to understand requirement gathering and create performance test strategies and plans.
- In-depth understanding of web technologies, backend systems, mobile application development, relational databases, and ETL processes.
- Proficiency in scripting and programming languages relevant to performance testing (e.g., Java, JavaScript, Python, SQL).
- Proficiency in HTTP/web protocol script development.
- Experience with E2E page rendering performance testing using performance tools.
- Ability to write custom requests and codes in JMeter or other performance tools.
- Knowledge of API and Database Performance testing.
- Familiarity with capturing performance metrics using Grafana and Dynatrace.
- Clear understanding of Workload Modeling.
- Experience in capturing heap dumps and thread dumps.
- Knowledge of Performance Baseline and Benchmarking, along with different types of performance testing.
- Proactive approach towards the job at hand, with the ability to predict future requirements and prepare accordingly.
- Excellent communication skills in English, both verbal and written.
- Strong customer relationship-building skills.
- Detail-oriented with strong organizational and time management skills.
- Proficiency in agile project management and collaboration tools.
Friday 8 March 2024
Running Python Scripts with BigTable Queries in JMeter: A Step-by-Step Guide
Thursday 7 March 2024
How to fetch a file from Azure Blob Storage using an API
Sunday 3 March 2024
LoadRunner Block Size for Parameterization | Addressing Allocation Challenges and Failures in LoadRunner Scripting
Imagine you're organizing a party where you want to distribute candies equally among your friends. Initially, you have a large bag of candies representing all the unique values available for the parameter in your LoadRunner script. Each friend (virtual user) needs a certain number of candies (block size) to perform their actions during the party (test execution).
Here's how the Dynamic Allocation Algorithm works:
- Calculating Ideal Distribution: You calculate how many candies each friend should ideally get based on the total number of candies in the bag and the number of friends. This ensures a fair distribution.
- Adjusting for Optimal Sharing: Sometimes, the ideal distribution might result in too many candies for each friend, which could be wasteful. So, you adjust the distribution to make it more optimal, ensuring everyone gets enough candies without excess.
- Updating Remaining Candies: After each round of distribution, you keep track of how many candies are left in the bag. This helps you manage the distribution better in subsequent rounds.
- Simulating Party Rounds: You repeat this distribution process for each round of the party (test execution), ensuring that each friend gets their fair share of candies according to the dynamically adjusted block size.
- The allocateBlockSize() function dynamically calculates the ideal block size per vuser based on the remaining unique values and the total number of virtual users. It adjusts the block size to ensure optimal distribution while considering thresholds.
- The simulateTestExecution() function represents the execution of the LoadRunner script actions for each virtual user, utilizing parameter values within the allocated block size.
- The runTestScenario() function orchestrates the test scenario by repeatedly allocating block sizes and simulating test execution across multiple iterations.
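The real implementation would live in the LoadRunner script itself (in C); the following Python sketch mirrors the three functions described above, with the pool size, Vuser count, and maximum block size chosen purely for illustration:

```python
# Python sketch of the dynamic block-size allocation algorithm described above.
# All numbers (unique-value pool, Vusers, max block) are illustrative.

def allocateBlockSize(remaining_values, vuser_count, max_block=50):
    """Ideal block size per Vuser, clamped to avoid wasteful over-allocation."""
    if remaining_values <= 0 or vuser_count <= 0:
        return 0
    ideal = remaining_values // vuser_count      # fair share per Vuser
    return max(1, min(ideal, max_block))         # at least 1, at most max_block

def simulateTestExecution(block_size, vuser_count, remaining_values):
    """One round of 'execution': return how many parameter values were consumed."""
    return min(block_size * vuser_count, remaining_values)

def runTestScenario(total_values, vuser_count, iterations):
    """Allocate block sizes and simulate execution round by round."""
    remaining = total_values
    for _ in range(iterations):
        block = allocateBlockSize(remaining, vuser_count)
        if block == 0:
            break                                # parameter pool exhausted
        remaining -= simulateTestExecution(block, vuser_count, remaining)
    return remaining                             # unused unique values
```

In the candy analogy: 1000 candies shared among 10 friends, capped at 50 per round, are fully handed out in two rounds, after which allocation stops rather than over-distributing.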