Monday, May 13, 2024

Strategies to Reduce Latency and Improve Application Performance

 Reducing latency is a critical aspect of optimizing application performance and ensuring a seamless user experience. Latency refers to the delay between a user action and the corresponding system response. High latency can lead to frustration and decreased satisfaction, while low latency can make applications feel smooth and responsive.

In this blog post, we will explore 16 proven strategies that can help you reduce latency in your applications and improve performance across various areas, from databases and networking to web protocols and data processing. Each strategy is accompanied by detailed explanations and practical examples to give you a better understanding of how these approaches work and how they can benefit your projects.

Whether you're working on a web application, a mobile app, or an enterprise system, these strategies can be applied in various scenarios to enhance the speed and efficiency of your software. Implementing these strategies can lead to better user experiences and higher satisfaction, ultimately contributing to the success of your applications. Let's dive in and explore each of these strategies in detail!

1. Database Indexing:

Database indexing involves creating an index for columns in a database that are frequently used in queries. Indexes serve as shortcuts, allowing the database engine to quickly locate and retrieve data without having to perform a full table scan. However, creating too many indexes can slow down write operations and take up additional storage space.

Example: In an e-commerce application, a database table contains a `product_id` column that is frequently queried to retrieve product details. By creating an index on the `product_id` column, the database can quickly locate and retrieve product information, significantly speeding up queries related to products.

2. Caching:

Caching stores frequently accessed data in a fast-access storage layer such as memory or a local storage cache. When a request is made, the system first checks the cache to see if the requested data is available. If the data is found in the cache, it is returned immediately, eliminating the need to retrieve it from the original source.

Example: On a news website, the latest articles are cached in memory. When a user visits the site, the cached articles are served immediately, significantly reducing the time it takes to load the homepage compared to fetching the articles from a database.

3. Load Balancing:

Load balancing distributes incoming network traffic across multiple servers or instances to ensure that no single server becomes a bottleneck. This approach improves response time, reliability, and fault tolerance.

Example: A popular online gaming platform has millions of users accessing the server simultaneously. Load balancing distributes the requests across multiple servers, ensuring that the gaming experience remains smooth and uninterrupted for all users.

4. Content Delivery Network (CDN):

CDNs store copies of static content (e.g., images, videos, scripts, stylesheets) in multiple data centers across the globe. When a user requests content, the CDN serves it from the data center closest to the user's location, reducing the distance the data must travel and improving speed.

Example: A video streaming service uses a CDN to cache and serve video content from various locations around the world. When a user requests a video, the service delivers the content from the nearest CDN node, minimizing buffering and improving the viewing experience.

5. Async Processing:

Asynchronous processing allows tasks to be performed concurrently, without blocking the user interface or other tasks. This technique is useful for handling long-running operations, allowing the main application to remain responsive.

Example: An online photo editing app allows users to apply filters and effects to their images. Instead of waiting for the entire editing process to complete, the app performs the editing asynchronously, allowing users to continue using the app while the processing happens in the background.

6. Data Compression:

Data compression reduces the size of data being transferred over the network or stored on disk, leading to faster transmission times and reduced bandwidth usage. Compression can be applied to various types of data, including text, images, and videos.

Example: When transferring large image files, compressing them using a format such as JPEG can reduce the file size significantly, resulting in faster upload and download times.

7. Query Optimization:

Query optimization involves rewriting database queries to make them more efficient. This can include using specific indexes, minimizing the number of joins, and selecting only necessary columns. Optimized queries reduce data retrieval time and improve overall performance.

Example: An analytics dashboard retrieves large amounts of data from a database. By optimizing queries to select only the necessary columns and using indexes to speed up data retrieval, the dashboard's performance can be significantly improved.

8. Connection Pooling:

Connection pooling manages a pool of reusable connections to the database or other services. This reduces the overhead of establishing new connections for each request, leading to faster response times and more efficient resource usage.

Example: In a web application that frequently queries a database, using a connection pool allows the application to reuse existing connections rather than opening new ones for each request. This results in faster query execution and reduced latency.

9. Service Sharding:

Service sharding involves breaking a monolithic service into smaller, more manageable pieces (microservices) and using sharding strategies to distribute data and workload across multiple instances. This can improve performance and reduce latency.

Example: A social media platform shards its user data across multiple databases, where each shard contains data for a subset of users. This approach reduces the load on each database and improves response times for queries related to user data.

10. Prefetching and Preloading:

Prefetching and preloading involve anticipating data or content that a user might need and loading it ahead of time. This technique reduces wait times when the user actually requests the data.

Example: A video streaming platform prefetches the next episode in a series based on the user's current viewing history. When the user finishes watching an episode, the next one is already loaded and ready to play.

11. HTTP/2:

HTTP/2 is an improved version of the HTTP protocol that offers features such as multiplexing, header compression, and server push. These features enable more efficient network usage and can improve performance by reducing latency.

Example: A web application serving multiple resources like images, scripts, and stylesheets can benefit from HTTP/2's multiplexing feature. This allows the server to send multiple resources simultaneously over a single connection, reducing load times.

12. Edge Computing:

Edge computing involves processing data closer to the user by placing computing resources at the edge of the network. This reduces round-trip time and improves responsiveness.

Example: A smart home device processes voice commands locally instead of sending them to a remote server. This reduces latency and improves the user experience.

13. Optimize Network Paths:

Optimizing network paths involves using route optimization, traffic prioritization, and dedicated or private networks to reduce network latency. These techniques ensure that data travels the shortest and fastest route possible.

Example: A financial trading platform uses optimized network paths to ensure that trade execution occurs as quickly as possible. This can involve using dedicated networks for lower latency and more reliable connections.

14. API Gateway:

An API gateway manages and optimizes API traffic, handling routing, load balancing, and caching. This leads to reduced latency and improved performance for applications that rely on APIs.

Example: A microservices-based application uses an API gateway to handle requests from clients. The gateway routes the requests to the appropriate microservice, balancing the load and caching frequent requests for improved performance.

15. WebSockets:

WebSockets enable real-time, bidirectional communication between a client and server, providing lower latency compared to traditional polling or long-polling methods.

Example: A real-time chat application uses WebSockets to maintain a persistent connection between the client and server. This allows messages to be sent and received instantly, providing a smooth and responsive chat experience.

16. Hardware Acceleration:

Hardware acceleration utilizes specialized hardware such as GPUs or FPGAs to perform specific tasks more efficiently. This can improve performance and reduce latency for certain workloads.

Example: A machine learning model training process can use a GPU to accelerate computations, reducing training time and improving overall performance.
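Several of these strategies can be sketched in a few lines of code. As a concrete illustration of strategy 2 (Caching), here is a minimal in-memory LRU cache built on Java's `LinkedHashMap`; the class name, capacity, and cached page names are illustrative, not taken from any particular framework:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal in-memory LRU cache sketch: once capacity is exceeded, the
// least-recently-used entry is evicted automatically by LinkedHashMap.
public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public LruCache(int capacity) {
        super(16, 0.75f, true); // accessOrder=true gives LRU iteration order
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity; // evict the eldest entry once over capacity
    }

    public static void main(String[] args) {
        LruCache<String, String> cache = new LruCache<>(2);
        cache.put("home", "rendered-home-page");
        cache.put("news", "rendered-news-page");
        cache.get("home");                            // "home" is now most recent
        cache.put("sports", "rendered-sports-page"); // evicts "news"
        System.out.println(cache.keySet());
    }
}
```

In production you would more likely reach for a caching library or an external cache such as Redis, but the eviction idea is the same.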

Sunday, May 12, 2024

HTTP Status 500 - Request processing failed; nested exception is org.springframework.web.multipart.MultipartException: Could not parse multipart servlet request; nested exception is java.lang.IllegalStateException: org.apache.tomcat.util.http.fileupload.FileUploadBase$FileSizeLimitExceededException: The field filestream exceeds its maximum permitted size of 1048576 bytes.

The error you are encountering is a `HTTP Status 500` error indicating that a request processing failure occurred due to a `MultipartException` during the handling of a multipart servlet request. The specific cause is an `IllegalStateException` thrown because the uploaded file (`filestream`) exceeds the maximum permitted file size limit.

Here are the key points to understand about the error and how to resolve it:

Cause of the Error: The error arises because the uploaded file exceeds the maximum permitted file size set in your application's configuration. The exception specifically points to the file size limit set for file uploads, which is currently set at 1,048,576 bytes (1 MB).

Configuration for File Upload: In Spring applications, the file upload size limit is typically controlled by settings in the `multipart` configuration. The `FileSizeLimitExceededException` is thrown when an uploaded file exceeds the configured size limit.

Increasing File Size Limit: You can increase the maximum permitted file size for file uploads by adjusting the relevant configuration in your application. Here are a few common places where you can set the file size limit:

    - Spring Boot: If you are using Spring Boot, you can increase the file size limit by configuring properties in `application.properties` or `application.yml`:

   - `spring.servlet.multipart.max-file-size=10MB` (sets the maximum file size to 10 MB)
   - `spring.servlet.multipart.max-request-size=10MB` (sets the maximum request size to 10 MB)

    - Servlet Configuration: If you are not using Spring Boot, you may need to adjust the file size limit in your `web.xml` configuration file, or in your servlet configuration:

        - For Apache Tomcat, you might configure the `maxFileSize` attribute in the `MultipartConfig` annotation of the servlet:

```java
@MultipartConfig(maxFileSize = 10 * 1024 * 1024)  // 10 MB
```

        - Alternatively, you can use `web.xml` to configure the file size limits for your servlets.

  •  Locate the `web.xml` file for your Tomcat server. It is typically found in the `WEB-INF` directory of your web application.
  •  Open the `web.xml` file in a text editor.
  •  Find the `<multipart-config>` element in the `web.xml` file. It might look something like this:

```xml
<multipart-config>
    <max-file-size>1048576</max-file-size>
    <max-request-size>1048576</max-request-size>
    <file-size-threshold>0</file-size-threshold>
</multipart-config>
```

  • Modify the `<max-file-size>` and `<max-request-size>` values to your desired maximum file size. For example, to allow a maximum file size of 10 MB, you can set them to 10485760:


```xml
<multipart-config>
    <max-file-size>10485760</max-file-size>
    <max-request-size>10485760</max-request-size>
    <file-size-threshold>0</file-size-threshold>
</multipart-config>
```

  • Save the `web.xml` file.

Restart Your Application: After making the changes to your application's configuration, be sure to restart your application for the new settings to take effect.

By increasing the file size limit as per your application's requirements, you should be able to resolve the `FileSizeLimitExceededException` and allow larger file uploads in your application.
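As an aside, if your Spring Boot project uses YAML configuration, the same limits shown earlier in `application.properties` form can be written in `application.yml`:

```yaml
spring:
  servlet:
    multipart:
      max-file-size: 10MB
      max-request-size: 10MB
```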

Wednesday, May 8, 2024

Uncovering Hidden Memory Usage - A Dynatrace Investigation Story

In the world of cloud computing, the challenge of optimizing costs while ensuring optimal performance is ever-present. A cost optimization investigation was conducted for a valued customer, which involved a deep dive into the complexities of memory management. Unbeknownst to the investigator, this journey would lead to the discovery of a hidden aspect of memory usage that had escaped the monitoring tools.

The scenario involved the task of reducing cloud costs for a customer by rightsizing their servers. To gain insights into resource utilization, Dynatrace, a robust monitoring tool, was utilized. However, while examining memory usage metrics, a perplexing observation was made—substantial memory consumption was occurring without any specific processes accounting for the usage.

In one instance, nearly 5GiB of memory was being utilized on a web server, but upon closer examination, the top processes accounted for only around 200MiB of memory usage. This discrepancy left the investigator puzzled as to where the rest of the memory was being utilized.

Further investigation revealed that Dynatrace's memory metrics did not provide a breakdown of the memory allocated to shared memory, concealing crucial details about actual memory usage on the system. To gain a clearer understanding, the 'free' command was run, revealing a different story.

The 'free' command output showed that a significant portion of the memory, amounting to 4.4GB in this case, was allocated to shared buffers. This shared memory allocation was attributed to the use of ramdisks, a technique for improving performance by allocating RAM as disk drives.

The scenario underscored the importance of understanding the intricacies of memory usage, particularly regarding shared memory allocations. By uncovering this hidden aspect of memory usage, valuable insights were provided to the customer, enabling informed decisions about rightsizing server instances.

In conclusion, while monitoring tools like Dynatrace are invaluable for tracking resource utilization, it is crucial to complement them with other tools and commands such as the 'free' command to gain a comprehensive understanding of system performance. This approach allows organizations to optimize costs while ensuring optimal resource utilization in their cloud environments.

This insight was inspired by a friend's post on LinkedIn, where they shared their experience with similar challenges and solutions in the realm of cloud computing.

Tuesday, May 7, 2024

How to Monitor a Linux Server During a Load Test | Monitor a Linux Server Without Any Monitoring Tool

Recently I came across a situation where my client didn't have any monitoring tool, but we needed to monitor the servers during the load test. If the server is Windows, we can directly use PERFMON, the default Windows monitoring tool, but Linux doesn't come with such a feature, so I captured the statistics to a CSV file with a simple command on the Linux machine, which I triggered for the duration of the test:

vmstat -n [delay [count]] | awk '{now=strftime("%Y-%m-%d %T "); print now $0}' > {file path}

Example: vmstat -n 1 10 | awk '{now=strftime("%Y-%m-%d %T "); print now $0}' > /tmp/vmstat.csv

The given command line combines the output of the vmstat command with awk to create a log file that records the current timestamp along with the system statistics. Here's how it works:

  • vmstat -n [delay [count]]: The vmstat command is used to monitor virtual memory statistics and system performance. The -n flag disables the header output, providing only data. The command accepts optional arguments:

    • delay: The time interval, in seconds, between data samples.
    • count: The number of data samples to collect.
  • awk '{now=strftime("%Y-%m-%d %T "); print now $0}': This part of the command processes each line of the vmstat output:

    • now=strftime("%Y-%m-%d %T "): This awk code creates a timestamp in the format Year-Month-Day Hour:Minute:Second.
    • print now $0: This prints the current timestamp followed by the original line from the vmstat output.
  • >{file path}: The output of the command, which is the timestamped vmstat data, is redirected to the specified file path. For example, /tmp/vmstat.csv.

Sunday, May 5, 2024

Unlocking Java Application Performance with JProfiler | JProfiler Tool Real-Time Examples

When optimizing Java application performance, having the right tools is key. JProfiler stands out as a powerful and versatile tool for exploring Java applications. From understanding memory usage to refining database interactions, JProfiler equips performance engineers with techniques to optimize their applications.

1. Memory Profiling: Unveiling Memory Leaks

Memory leaks can silently degrade Java applications' performance over time. JProfiler's Memory Profiling feature provides insights into memory allocation issues, helping identify excessive object retention and potential leaks.

Real-Time Scenario: In a financial management application, users reported slow response times when accessing their accounts. JProfiler's Memory Profiling revealed excessive object retention in the cache due to unoptimized data structures. After identifying the memory leak, the team optimized data storage and retrieval processes, leading to improved application speed and stability.
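To make the retention pattern concrete, here is a small, hypothetical sketch of the kind of unbounded cache that shows up in a heap snapshot: entries are added on every request but never evicted, so the collection retains everything it has ever seen. The class and payload sizes are illustrative, not taken from the scenario above.

```java
import java.util.HashMap;
import java.util.Map;

// Illustration of unbounded retention: a static cache that only grows.
// A heap profiler would show this map dominating the retained-size view.
public class RetentionDemo {
    // Leak pattern: static, never-evicted cache.
    static final Map<String, byte[]> CACHE = new HashMap<>();

    static void handleRequest(String key) {
        // Every distinct key permanently retains a 1 KiB payload.
        CACHE.computeIfAbsent(key, k -> new byte[1024]);
    }

    public static void main(String[] args) {
        for (int i = 0; i < 10_000; i++) {
            handleRequest("user-" + i); // 10,000 entries, none ever released
        }
        System.out.println("Retained entries: " + CACHE.size());
    }
}
```

Once the profiler has pointed at the retaining structure, the usual fix is to bound the cache with a size limit or TTL-based eviction.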

2. Thread Profiling: Navigating Thread Dynamics

Threads are essential in concurrent Java applications, but their interactions can lead to deadlocks or contention issues. JProfiler's Thread Profiling monitors thread states in real time, detecting deadlocks and contention hotspots.

Real-Time Scenario: In a multi-threaded chat application, users experienced delays in receiving messages. JProfiler's Thread Profiling revealed a deadlock caused by competing threads trying to access the same resource simultaneously. By refining the code to handle synchronization more efficiently, the team resolved the issue, resulting in a smooth, real-time messaging experience.
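The usual fix for this class of deadlock is to make every thread acquire the contended locks in the same global order, so a circular wait can never form. A minimal sketch of the idea (the lock and method names are illustrative, not from the chat application above):

```java
// Deadlock avoidance via consistent lock ordering: every thread always takes
// LOCK_A before LOCK_B, so no cycle of waiting threads can form.
public class LockOrderingDemo {
    static final Object LOCK_A = new Object();
    static final Object LOCK_B = new Object();
    static int counter = 0;

    static void transfer() {
        synchronized (LOCK_A) {        // always acquired first
            synchronized (LOCK_B) {    // always acquired second
                counter++;
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> { for (int i = 0; i < 1000; i++) transfer(); });
        Thread t2 = new Thread(() -> { for (int i = 0; i < 1000; i++) transfer(); });
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println("counter = " + counter); // completes without deadlock
    }
}
```

If one code path took LOCK_B first while another took LOCK_A first, the two threads could each hold one lock while waiting for the other, which is exactly the state a thread profiler reports as a deadlock.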

3. Database Profiling: Optimizing SQL Interactions

For Java applications with heavy database reliance, optimizing SQL queries and transactional behavior is crucial. JProfiler's JDBC and JPA/Hibernate probes allow scrutiny of database activity.

Real-Time Scenario: In an online shopping platform, customers complained of slow checkout times during peak hours. JProfiler's Database Profiling uncovered inefficient SQL queries causing excessive database load and longer response times. After optimizing queries and fine-tuning indices, the team significantly improved checkout speed and overall user satisfaction.

4. Telemetry Views: Real-Time Performance Monitoring

Real-time insights are indispensable in performance optimization. JProfiler's Telemetry Views provide real-time data on CPU usage, heap memory, and thread counts, helping you identify performance anomalies.

Real-Time Scenario: In a cloud-based data analytics platform, sudden CPU spikes were causing performance issues. JProfiler's Telemetry Views allowed the team to monitor resource usage and identify the root cause: a misconfigured data processing pipeline. After making necessary adjustments and reallocating resources, the platform returned to stable, high-performance operations.

5. Method Profiling: Optimizing Code Execution

JProfiler's Method Profiling offers detailed insights into method-level execution, helping to identify bottlenecks and optimize critical code paths.

Real-Time Scenario: In a game development environment, players reported lag during complex in-game calculations. JProfiler's Method Profiling helped identify the most time-consuming methods, such as those related to physics simulations. By optimizing these methods and implementing parallel processing, the team significantly enhanced game performance and responsiveness.
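JProfiler collects method-level timings automatically through instrumentation or sampling, but the underlying idea can be illustrated with a crude manual timing harness. This is only a sketch of the concept, not JProfiler's API; the method being measured is a made-up stand-in for a hot spot:

```java
import java.util.function.LongSupplier;

// Crude manual method timing: measure elapsed nanoseconds around a candidate
// hot method. Profilers automate this at scale and with far lower overhead.
public class MethodTimingDemo {
    // A deliberately expensive method standing in for a physics-style hot spot.
    static long sumOfSquares(int n) {
        long total = 0;
        for (int i = 1; i <= n; i++) total += (long) i * i;
        return total;
    }

    static long timeNanos(LongSupplier work) {
        long start = System.nanoTime();
        work.getAsLong();              // invoke the method under measurement
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        long elapsed = timeNanos(() -> sumOfSquares(1_000_000));
        System.out.println("sumOfSquares took " + elapsed + " ns");
    }
}
```

Hand-rolled timing like this only works for methods you already suspect; the value of method profiling is that it ranks all methods by self time so the suspects emerge on their own.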

Friday, May 3, 2024

Setting Up IBM MQ with JMeter for JMS Testing | Testing IBM MQ with JMeter

Here is a Groovy script that demonstrates how to establish a secure connection to an IBM MQ queue and send a text message using JMS (Java Message Service). It configures a JMS connection factory with connection details such as the hostname, port, channel, and queue manager name, and sets up SSL/TLS security using a keystore and trust store. Once the secure connection is established, the script creates a JMS session and message producer to send a text message to a specified MQ queue. This script is commonly used in enterprise environments where secure communication with message queues is necessary for reliable, asynchronous messaging between different systems or applications. By ensuring data confidentiality and integrity through SSL/TLS, the script facilitates secure message exchange between client and server applications.

import com.ibm.msg.client.jms.JmsConnectionFactory
import com.ibm.msg.client.jms.JmsFactoryFactory
import com.ibm.msg.client.wmq.WMQConstants
import javax.jms.Session
import javax.jms.TextMessage
import javax.jms.MessageProducer
import javax.net.ssl.*
import java.security.KeyStore
import java.io.FileInputStream

// Start of script execution
println("### Script execution started")

// Define connection details
def hostName = "ABC-XYZ88.ind.thefirm.com" // Hostname of the MQ server
def hostPort = 1414 // Port number for the MQ server
def channelName = "IND.QWD.RWDWWE" // MQ channel name
def queueManagerName = "INDSER34" // Queue manager name
def queueName = "IND.RED.REQUEST.FRER" // Queue name
def cipherSuite = "ECDHE_RSA_AES_256_GCM_SHA384" // SSL/TLS cipher suite

// Create a JMS connection factory
def factoryFactoryInstance = JmsFactoryFactory.getInstance(WMQConstants.WMQ_PROVIDER)
def connectionFactory = factoryFactoryInstance.createConnectionFactory()

try {
    // Set connection properties for the factory
    connectionFactory.setStringProperty(WMQConstants.WMQ_HOST_NAME, hostName)
    connectionFactory.setIntProperty(WMQConstants.WMQ_PORT, hostPort)
    connectionFactory.setStringProperty(WMQConstants.WMQ_CHANNEL, channelName)
    connectionFactory.setIntProperty(WMQConstants.WMQ_CONNECTION_MODE, WMQConstants.WMQ_CM_CLIENT)
    connectionFactory.setStringProperty(WMQConstants.WMQ_QUEUE_MANAGER, queueManagerName)
    
    // Load the keystore from a file
    String keystoreType = "PKCS12" // Keystore format (e.g., PKCS12)
    String keystorePath = "mq-keystore.12" // Path to the keystore file
    String keystorePassword = "keystorePassword" // Keystore password
    
    // Load the keystore
    KeyStore keyStore = KeyStore.getInstance(keystoreType)
    FileInputStream fis = new FileInputStream(keystorePath)
    keyStore.load(fis, keystorePassword.toCharArray())
    
    // Initialize the KeyManagerFactory with the keystore
    KeyManagerFactory keyManagerFactory = KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm())
    keyManagerFactory.init(keyStore, keystorePassword.toCharArray())
    
    // Initialize the TrustManagerFactory with the keystore
    TrustManagerFactory trustManagerFactory = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm())
    trustManagerFactory.init(keyStore)
    
    // Initialize the SSLContext with key and trust managers
    SSLContext sslContext = SSLContext.getInstance("TLS")
    sslContext.init(keyManagerFactory.getKeyManagers(), trustManagerFactory.getTrustManagers(), new java.security.SecureRandom())
    
    // Get the SSLSocketFactory and set it in the connection factory
    SSLSocketFactory sslSocketFactory = sslContext.getSocketFactory()
    connectionFactory.setStringProperty(WMQConstants.WMQ_SSL_CIPHER_SUITE, cipherSuite)
    connectionFactory.setObjectProperty(WMQConstants.WMQ_SSL_SOCKET_FACTORY, sslSocketFactory)
    
    // Create a JMS connection and session; assigned without 'def' so they are
    // script binding variables, visible to the finally block below
    jmsConnection = connectionFactory.createConnection()
    jmsSession = jmsConnection.createSession(false, Session.AUTO_ACKNOWLEDGE)
    
    // Create a destination queue
    def destinationQueue = jmsSession.createQueue(queueName)
    
    // Log that the setup is complete
    println("### MQ setup completed")
    
    // Create a message producer and send a text message
    messageProducer = jmsSession.createProducer(destinationQueue)
    def textMessage = jmsSession.createTextMessage("Your message content goes here")
    messageProducer.send(textMessage)
    
    // Log that the message was sent successfully
    println("### Message sent successfully")

} catch (Exception e) {
    // Handle any exceptions that occur
    println("Exception: " + e.toString())
    e.printStackTrace()
} finally {
    // Ensure resources are closed; hasVariable() guards against failures
    // that occurred before a resource was created
    try {
        if (binding.hasVariable('messageProducer')) messageProducer?.close()
        if (binding.hasVariable('jmsSession')) jmsSession?.close()
        if (binding.hasVariable('jmsConnection')) jmsConnection?.close()
    } catch (Exception e) {
        // Handle exceptions during resource closure
        println("Exception during resource closure: " + e.toString())
        e.printStackTrace()
    }
}

Thursday, May 2, 2024

Calling an HTTP Sampler from a JSR223 Sampler in JMeter: A Step-by-Step Guide

 We can call one sampler (HTTP) from another sampler (JSR223) in JMeter. JMeter provides flexibility to customize and control the flow of your test plan using various components, including JSR223 samplers.

To call an HTTP sampler from a JSR223 sampler, you can use the JMeter API within the JSR223 sampler code. Here's how you can achieve this:

1. Add a JSR223 Sampler to your test plan.

2. Choose the appropriate language (e.g., Groovy) for the JSR223 sampler.

3. Write your custom code in the script area of the JSR223 sampler to call the HTTP sampler using the JMeter API. Here's an example:

```groovy
import org.apache.jmeter.protocol.http.sampler.HTTPSampleResult
import org.apache.jmeter.protocol.http.sampler.HTTPSamplerProxy

// Get the HTTP sampler object previously stored in the JMeter variables
def httpSampler = ctx.getCurrentSampler().getThreadContext().getVariables().getObject("HTTPSamplerProxy")

// Make sure the HTTP sampler is not null
if (httpSampler != null) {
    // Execute the HTTP sampler
    def httpResult = httpSampler.sample()

    // You can access the response code, response message, and other details
    // from the HTTP result (note: getResponseCode() returns a String)
    String responseCode = httpResult.getResponseCode()
    String responseMessage = httpResult.getResponseMessage()

    // Process the HTTP result as needed
    // ...
}
```

In the above example, we obtain the HTTP sampler using the `getObject()` method from the JMeter `ThreadContext` and then execute the HTTP sampler using the `sample()` method. You can access and process the response details according to your requirements.

Note: Make sure you have properly configured and added the HTTP sampler to your test plan, and stored a reference to it where the script can find it, before using it within the JSR223 sampler. Remember to replace the language-specific code (Groovy in this example) if you choose a different language for your JSR223 sampler.

It's worth noting that calling one sampler from another can affect the concurrency and throughput of your test. Ensure you understand the implications and adjust your test plan accordingly.

Wednesday, May 1, 2024

Extracting OTP from SOAP Requests in TrueClient Protocol: A JavaScript Approach

In the TrueClient protocol, you can leverage JavaScript to extract specific values from a SOAP request, such as a one-time password (OTP). To achieve this, you can use regular expressions or string manipulation techniques. Here's a general approach:

1. Capture the SOAP Request: Start by capturing the SOAP request as an object in TrueClient.

2. Extract the OTP with JavaScript: Use a JavaScript step to work with the captured SOAP request object. Utilize regular expressions or string manipulation methods like `indexOf()` and `substring()` to extract the OTP.

3. Assign OTP to a Parameter: Once the OTP is extracted, assign it to a parameter in TrueClient so you can use it elsewhere in your script.

Here's an example of how you can extract the OTP using JavaScript:

// Assume soapRequest contains the captured SOAP request
var soapRequest = lr.evalC("YourSoapRequestObjectName.body");

// Define a regular expression pattern to match the OTP
var otpPattern = /OTP: (\d+)/;

// Match the OTP pattern in the SOAP request
var match = soapRequest.match(otpPattern);

if (match && match.length > 1) {
    // Extract the OTP value
    var otp = match[1];
    
    // Assign the OTP value to a parameter in TrueClient
    lr.set("OTP_Parameter", otp);
} else {
    // Handle the case when OTP is not found
    lr.log("OTP not found in SOAP request");
}

Replace `"YourSoapRequestObjectName"` with the name of the object containing the SOAP request in your TrueClient script. Adjust the regular expression pattern (`otpPattern`) according to the format of the OTP in your SOAP request.