Thursday 1 August 2013

Interesting things in LoadRunner: Duplicating Files Prior to a Load Test

I came across a Business Process (BP) that requires uploading multiple files. Specifically, the BP requires that 300 files be uploaded into SharePoint, so the 300 files need to be created before the load test.

I handled the data preparation by creating the files before the load test. Searching the Web, I got hold of the batch script below and amended it to create the 300 files.

@echo off
for /L %%i in (1,1,300) do (
    copy C:\test_folder\OriginalFile.txt C:\test_folder\NewFile-%%i.txt
)

Save it as a .bat file and rename "OriginalFile.txt" and "NewFile-" to suit your requirements (if you find it useful). This is not the only way to handle the data preparation, I believe; another alternative is to create the files during the load test itself, which requires scripting against the system API and which I hope to discuss in future.
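If you'd rather do the duplication in C (for instance as a small utility compiled and run before the test), here is a minimal sketch of the same copy loop. The paths and the count of 300 mirror the batch script above; error handling is kept to a minimum.

#include <stdio.h>

int main(void)
{
    const char *src_path = "C:\\test_folder\\OriginalFile.txt";
    char dst_path[128];
    char buf[4096];
    size_t n;
    int i;

    for (i = 1; i <= 300; i++) {
        FILE *src = fopen(src_path, "rb");
        FILE *dst;

        if (src == NULL) {
            perror("open source");
            return 1;
        }
        sprintf(dst_path, "C:\\test_folder\\NewFile-%d.txt", i);
        dst = fopen(dst_path, "wb");
        if (dst == NULL) {
            perror("open destination");
            fclose(src);
            return 1;
        }
        /* Copy the source file to the new file in 4 KB chunks. */
        while ((n = fread(buf, 1, sizeof buf, src)) > 0)
            fwrite(buf, 1, n, dst);
        fclose(src);
        fclose(dst);
    }
    return 0;
}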

Monitors: What metrics/counters to monitor for Windows system resources?

Counters for monitoring Windows system resources:
  • Processor(_Total)\% Processor Time
  • Processor(_Total)\% Privileged Time
  • Processor(_Total)\% User Time
  • Processor(_Total)\Interrupts/sec
  • Process(instance)\% Processor Time
  • Process(instance)\Working Set
  • PhysicalDisk(instance)\Disk Transfers/sec
  • System\Processor Queue Length
  • System\Context Switches/sec
  • Memory\Pages/sec
  • Memory\Available Bytes
  • Memory\Cache Bytes
  • Memory\Transition Faults/sec

As you've noticed, some of these counters are already selected by LoadRunner by default. When designing the scenario for the load test, set the counters above for the Windows Resources monitor. They are useful in the following areas: Server/Service Availability, Processor, Hardware Functionality, RAM and Disks.

Server/Service Availability

System\System Up Time

This counter reports the elapsed time, in seconds, since the server was last rebooted.

Process(instance)\Elapsed Time

This counter reports how long a given process/instance has been running. You can also use Process(instance)\Elapsed Time to monitor the processes associated with specific applications and services, and thereby monitor the availability of those applications and services.

Processor

Processor(_Total)\% Processor Time

Processor(_Total)\% Processor Time is useful for measuring the total utilization of your processor by all running processes. Note that on a multiprocessor machine, Processor(_Total)\% Processor Time actually measures the average processor utilization of your machine (i.e. utilization averaged over all processors).

Process(instance)\% Processor Time

When you know which specific process/instance the application uses, it is recommended to drill down to monitoring that process/instance. This can also be used to detect a suspicious process/instance that is hogging the processor.

Processor(_Total)\% Privileged Time and Processor(_Total)\% User Time

Of these two counters, Processor(_Total)\% Privileged Time represents time spent on kernel-mode operations such as OS housekeeping, while Processor(_Total)\% User Time represents time spent running user-mode (application) code. If Privileged Time is consistently high, consider reducing the amount of OS housekeeping work; if User Time is consistently high, the server may be running too many roles, so consider adding hardware or spreading the roles across servers.

System\Processor Queue Length

If this counter is consistently higher than about 5 while processor utilization approaches 100%, it is a good indication that there is more work (threads ready for execution) than the machine's processors can handle. Note, however, that it is not the best indicator of processor contention caused by overloaded threads; counters such as ASP\Requests Queued or ASP.NET\Requests Queued are better suited for that.

Hardware Functionality

System\Context Switches/sec

This counter measures how frequently the processor switches from one thread to another. The heavier the workload running on your machine, the higher this counter will generally be, but over the long term its value should remain fairly constant. If this counter suddenly starts increasing, however, it may be an indication of a malfunctioning device, especially if you see a similar jump in the Processor(_Total)\Interrupts/sec counter on your machine.

You may also want to check the Processor(_Total)\% Privileged Time counter and see whether it shows a similar unexplained increase, as this may indicate a problem with a device driver that is causing an additional hit on kernel-mode processor utilization.

Processor(_Total)\Interrupts/sec

This counter measures the number of hardware interrupts the processor has to handle per second.

RAM

Memory\Pages/sec

This counter indicates the number of paging operations to disk per second, and it is the primary counter for detecting possible insufficient RAM to meet your server's needs. A recommended benchmark to watch for is the number of pages per second exceeding 50 per paging disk on your system.

Memory\Available Bytes

If this counter is greater than 10% of the actual RAM in your machine, then you probably have more than enough RAM and don't need to worry. If it drops below 2% of the installed RAM, you might want to dig a little deeper with the next counter.

Process(instance)\Working Set

This counter measures the size of the working set for each process, which indicates the number of allocated pages the process can address without generating a page fault. It helps you determine which process is consuming the largest amount of RAM when available memory develops a downward trend.

Memory\Cache Bytes

On the other hand, this counter measures the working set for the system, i.e. the number of allocated pages that kernel threads can address without generating a page fault.

Memory\Transition Faults/sec

This counter measures how often recently trimmed pages on the standby list are re-referenced. If this counter slowly rises over time, it could also indicate that you're reaching the point where you no longer have enough RAM for your server to function well.


Disks

PhysicalDisk(instance)\Disk Transfers/sec

To monitor disk activity, use this counter. When the measurement goes above 25 disk I/Os per second, you've got poor response time for your disk (which may well translate into a bottleneck). To uncover the root cause, use the next counter.

PhysicalDisk(instance)\% Idle Time

This counter measures the percentage of time that your hard disk is idle during the measurement interval. If you see this counter fall below 20%, you've likely got read/write requests queuing up for a disk that cannot service them in a timely fashion. In this case it's time to upgrade your hardware to use faster disks or to scale out your application to better handle the load.
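As a side note, all of the counters above can also be sampled programmatically. Below is a minimal C sketch using the Windows PDH (Performance Data Helper) API to read Processor(_Total)\% Processor Time once. LoadRunner's Windows Resources monitor collects these for you, so this is purely to illustrate what is gathered under the hood; it assumes a Windows build environment, linking against pdh.lib, and omits most error handling.

#include <windows.h>
#include <pdh.h>
#include <stdio.h>

int main(void)
{
    PDH_HQUERY query;
    PDH_HCOUNTER counter;
    PDH_FMT_COUNTERVALUE value;

    if (PdhOpenQuery(NULL, 0, &query) != ERROR_SUCCESS)
        return 1;

    /* The English counter path works regardless of the OS display language. */
    PdhAddEnglishCounterA(query, "\\Processor(_Total)\\% Processor Time", 0, &counter);

    /* Rate counters need two samples to compute a value. */
    PdhCollectQueryData(query);
    Sleep(1000);
    PdhCollectQueryData(query);

    PdhGetFormattedCounterValue(counter, PDH_FMT_DOUBLE, NULL, &value);
    printf("%% Processor Time: %.2f\n", value.doubleValue);

    PdhCloseQuery(query);
    return 0;
}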

Protocols in LoadRunner: Working with RMI-Java

This document is intended for an audience unfamiliar with the RMI-Java protocol or Java technology. Follow the procedures below in their step-by-step sequence to ensure that the Java application can be invoked and recorded properly. If you fail at a particular step, apply the appropriate solution before proceeding to the next one, as the sequence is designed to ensure that all requirements are met for the recording to succeed.

This document also complements KB 11878, which describes the requirements for LoadRunner to record and replay Java-recorded scripts.

STEPS

[1] Launch the application outside VuGen.
PURPOSE: To test whether the application functions properly outside VuGen before the recording process.

[2] Apply the Java environment settings using KB 11878, "LoadRunner Java Environmental Requirements".

[3] Create an empty RMI-Java Vuser and replay it.
PURPOSE: To test whether the basic Java environment is properly set up.

[4] Proceed to record the RMI-Java Vuser.

[5] If the application does not launch during recording
DIAGNOSIS: It could be that some classes are not loaded.

SOLUTION:
5.1 Ensure the CLASSPATH is set properly.

5.2 Use javap.exe on the command line to check for the missing method or class. Run javap.exe with the fully qualified class name (without the ".class" suffix). If the class is not in the CLASSPATH, you will get the error message 'Class not found'; otherwise you will see all of its fields and methods.
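For example, assuming a hypothetical application class com.mycompany.orders.OrderClient:

javap com.mycompany.orders.OrderClient

If this prints the class's fields and methods, the class is resolvable from the current CLASSPATH; if it prints 'Class not found', it is not.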

5.3 From here, look for missing classes. If there are any, obtain those classes from the application's developers and place them into the CLASSPATH.

5.4 Use the Java plug-in to identify the error message, and troubleshoot accordingly. The plug-in throws exceptions into its console; you will have to turn on the Java Console (Control Panel > Java > "Advanced" tab > select "Show Console") in order to capture the exception or other information. If exceptions are captured in the console, look at the error message and troubleshoot from there. As the exceptions vary, you will have to see what is thrown in order to resolve the problem, and we won't be able to advise on it generically. (NOTE: The steps for the Java Console are based on JRE 1.5.0_06 and may vary with other versions of the Java plug-in.)

[6] If the application launches but is not recording (the green recording box does not appear)

DIAGNOSIS: It could be that the default hooking mechanism is not loaded.

SOLUTION:
6.1 Ensure the hook files are properly set.

6.2 If custom hooking is involved, verify the hooks are properly configured. Use the link below to access the relevant article.
http://kb-index.mercury.com/web/queries/KBAview.asp?Conceptid=11287&Product=LR

6.3 Verify that -Xrunjdkhook exists in the arguments passed to invoke the Java application. Below is a sample command line.

java -Xrunjdkhook -Xbootclasspath/p:\classes;\classes\srv -jar Kurnia-24AUG2006_1.jar

6.4 Ensure the file user.hooks is placed in \classes.

6.5 Verify the content of cjhook.ini.

6.6 In Recording Options > Recording Properties > Debug Options, enable all checkboxes.

6.7 Set "Hooking Options" > "Printout Redirections" to "File" instead of "Console". After doing so, useful debugging information will be saved during recording in the files "/data/console.log" and "/data/console2.log".

6.8 Verify the hooks profile in \data\console.log after performing 6.6 and 6.7.

[7] Try running the command with -Xrunjdkhook outside VuGen via a .bat file.
PURPOSE: To verify that the command statement with -Xrunjdkhook works when invoked outside VuGen.

[8] If the application runs fine with the hooking mechanism (-Xrunjdkhook) in place (the green box is loaded), proceed to the next step.
DIAGNOSIS: If the green box is not loaded, the hooking mechanism is not working.

SOLUTION:
Troubleshoot getting the hooking mechanism running, following step 6.2 onwards.

[9] If the green recording box is invoked but no events are recorded, or recording hangs after a few events
DIAGNOSIS: Some class files may not be correctly configured in the CLASSPATH.

SOLUTION:
Apply the solutions in step 5.x.
Ensure that all class files for the application are placed correctly in the CLASSPATH (if any).

The recording should work properly if all steps are applied in sequence. If the steps above did not work, you can also try installing the same version of the JDK used to develop the application onto the recording machine.

ADDITIONAL NOTES

1. What is the version of the JDK? A full JDK must be installed rather than just a JRE.
2. Is QuickTest Professional (QTP) installed on the machine?
3. Try recording from another machine and see if the problem persists.

FINAL NOTE

RMI-Java recording depends on the required class files being present. The fundamentals, therefore, are to

(1) ensure that the CLASSPATH is correctly configured, and
(2) ensure that the class files that launch the application and that are needed by the business process being recorded EXIST in the CLASSPATH.

Minimize your script debug time by using "on demand" log functions in LoadRunner

More often than not, I see people using the log settings available in Run-Time Settings and completely ignoring the function lr_set_debug_message(), which I think Mercury has thoughtfully created, because it is genuinely useful on many occasions. Say you're dealing with a script of 5,000 lines and you have trouble correlating a certain dynamic string which you know for sure appears after a certain request on the 2000th line is fired at the AUT. If you were to use just the Extended Log option from LoadRunner's Run-Time Settings, it would take a huge amount of time during replay to arrive at the 2000th line of the script, because it has to parse and display all the data returned from the server onto the screen, right from the beginning of the script.

Here is where lr_set_debug_message() comes in handy. All you have to do is place this function above the request which generates the response from which you intend to debug your script. Make sure that you turn off the Extended Log option in the Run-Time Settings when you use this function. This ensures that extended logging is triggered only at the places where you want it, and hence saves you a huge amount of time while debugging.

In the example below, I have created three functions, which I generally place inside the globals.h section of my script. During debug cycles, I just call the logs_on() function when I need an extended log (with data returned from server and parameter substitution details enabled) for a certain server response, and I use the logs_off() function at the line in the script from which I don't want any more logs to be displayed. I use the logclear() function when LoadRunner doesn't respond to logs_off() (a known defect).

void logs_on(void)
{
    /* Turn on extended logging: data returned by the server and
       parameter substitution details. */
    lr_set_debug_message(LR_MSG_CLASS_EXTENDED_LOG | LR_MSG_CLASS_RESULT_DATA | LR_MSG_CLASS_PARAMETERS, LR_SWITCH_ON);
}

void logs_off(void)
{
    /* Turn the same message classes back off. */
    lr_set_debug_message(LR_MSG_CLASS_EXTENDED_LOG | LR_MSG_CLASS_RESULT_DATA | LR_MSG_CLASS_PARAMETERS, LR_SWITCH_OFF);
}

void logclear(void)
{
    /* Fallback for when logs_off() is ignored (a known defect):
       read whatever log options are currently active and switch them all off. */
    unsigned int log_options = lr_get_debug_message();

    lr_set_debug_message(log_options, LR_SWITCH_OFF);
    return;
}
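A typical usage sketch is below. The web_url() step name and URL are hypothetical placeholders; the point is that logs_on() goes immediately before the request whose server response you want to inspect, and logs_off() right after it.

Action()
{
    /* ... earlier steps replay with extended logging off ... */

    logs_on();    /* verbose logging starts here */

    web_url("SuspectRequest",                    /* hypothetical step name */
        "URL=http://myserver/app/suspect_page",  /* hypothetical URL */
        LAST);

    logs_off();   /* back to silence for the rest of the script */

    return 0;
}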

What is the difference between SIT and UAT in testing?

What is SIT (System Integration Testing)?

It is testing performed when two systems, generally presumed stable themselves, are integrated with one another. For example, this could be when an inventory management system is integrated with a sales accounting system. Each system feeds into the other.

SIT is System Integration Testing. It is one of the phases of testing where you perform system testing and integration testing combined on a regular basis. It is a distinct process, and specific tools exist for it.

The goal of system integration testing is to ensure that the data crossing the boundary between systems is received, stored and used appropriately by the receiving system. Until integration begins, the isolated systems are tested on mocked or replayed data rather than on "live" data. Integration testing is the final step before customer acceptance testing.

What is UAT (User Acceptance Testing)?

User Acceptance Testing is often the final step before rolling out the application.

Usually the end users who will be using the application test it before 'accepting' it.

This type of testing gives the end users the confidence that the application being delivered to them meets their requirements.

This testing also helps nail down bugs related to the usability of the application.

User Acceptance Testing – Prerequisites:

Before User Acceptance Testing can be done, the application must be fully developed.
Various levels of testing (unit, integration and system) are completed before User Acceptance Testing is done. As the various levels of testing have been completed, most of the technical bugs have already been fixed before UAT.

User Acceptance Testing – What to Test?

To ensure effective User Acceptance Testing, test cases are created.
These test cases can be created using the various use cases identified during the requirements-definition stage.
The test cases ensure proper coverage of all the scenarios during testing.

During this type of testing, the specific focus is the exact real-world usage of the application. The testing is done in an environment that simulates the production environment.
The test cases are written using real-world scenarios for the application.

User Acceptance Testing – How to Test?

User acceptance testing is usually black-box testing. In other words, the focus is on the functionality and usability of the application rather than its technical aspects. It is generally assumed that the application has already undergone unit, integration and system-level testing.

However, it is useful if User Acceptance Testing is carried out in an environment that closely resembles the real-world or production environment.

The steps taken for User Acceptance Testing typically involve one or more of the following:
1) User Acceptance Test (UAT) Planning
2) Designing UAT Test Cases
3) Selecting a Team to Execute the UAT Test Cases
4) Executing Test Cases
5) Documenting the Defects Found During UAT
6) Resolving the Issues / Bug Fixing
7) Sign-Off

What is the SAP R/3 system? What is the meaning of R/3?

SAP R/3 is the former name of the main enterprise resource planning software produced by SAP AG.

The first version of SAP's flagship enterprise software was a financial accounting system named R/1 (the "R" stood for "realtime data processing"). The name is often mispronounced as "sap", as in tree sap; the correct pronunciation is the individual letters S-A-P. R/1 was replaced by R/2 at the end of the 1970s. SAP R/2 was a mainframe-based business application software suite that was very successful in the 1980s and early 1990s. It was particularly popular with large multinational European companies that required soft-real-time business applications with multi-currency and multi-language capabilities built in.

With the advent of distributed client-server computing, SAP AG brought out a client-server version of the software called SAP R/3, which ran on multiple platforms and operating systems such as Microsoft Windows and UNIX, opening SAP up to a whole new customer base. SAP R/3 was officially launched on 6 July 1992. SAP came to dominate the large business applications market over the next 10 years.

SAP R/3 is arranged into distinct functional modules, covering the typical functions in place in an organization. The most widely used modules are Financials and Controlling (FICO), Human Resources (HR), Materials Management (MM), Sales & Distribution (SD), and Production Planning (PP). Those modules, as well as the additional components of SAP R/3, are detailed in the next section.

Each module handles specific business tasks on its own, but is linked to the others where applicable. For instance, an invoice from the Billing transaction of Sales & Distribution will pass through to accounting, where it will appear in accounts receivable and cost of goods sold.

SAP has typically focused on best practice methodologies for driving its software processes, but has more recently expanded into vertical markets. In these situations, SAP produces specialized modules (referred to as IS or Industry Specific) geared toward a particular market segment, such as utilities or retail.

Using SAP often requires the payment of hefty license fees, as the customers have effectively outsourced various business software development tasks to SAP. By specializing in software development, SAP hopes to provide a better value to corporations than they could if they attempted to develop and maintain their own applications.

SAP R/3 is a client/server based application, utilizing a 3-tiered model. A presentation layer, or client, interfaces with the user. The application layer houses all the business-specific logic, and the database layer records and stores all the information about the system, including transactional and configuration data.

SAP R/3 functionality is structured using SAP's own proprietary language, ABAP (Advanced Business Application Programming). ABAP, or ABAP/4, is a fourth-generation language (4GL) geared towards the creation of simple yet powerful programs. R/3 also offers a complete development environment where developers can either modify existing SAP code to change existing functionality or develop their own functions, whether reports or complete transactional systems, within the SAP framework.

ABAP's main interaction with the database system is via Open SQL statements. These statements allow a developer to query, update, or delete information from the database. Advanced topics include GUI development and advanced integration with other systems. With the introduction of ABAP Objects, ABAP provides the opportunity to develop applications with object-oriented programming.

The most difficult part of SAP R/3 is its implementation, simply because SAP R/3 is never used the same way in any two places. For instance, Atlas Copco can have a different implementation of SAP R/3 from Procter & Gamble, and so forth. Two primary issues are at the root of the complexity and of the differences:

  • Customization configuration - Within R/3, there are tens of thousands of database tables that may be used to control how the application behaves. For instance, each company will have its own accounting "Chart of Accounts" which reflects how its transactions flow together to represent its activity. That will be specific to a given company. In general, the behavior (and appearance) of virtually every screen and transaction is controlled by configuration tables. This gives the implementor great power to make the application behave differently for different environments; with that power comes considerable complexity.
  • Extensions, bolt-ons - In any company, there will be a need to develop interface programs that communicate with other corporate information systems. This generally involves developing ABAP/4 code and considerable systems-integration effort, either to determine what data is to be drawn out of R/3 or to interface into R/3 to load data into the system.

Due to the complexity of implementation, companies recruit highly skilled SAP consultants to do the job. The implementation must consider the company's needs and resources. Some companies implement only a few modules of SAP, while others may want numerous modules.

Why should QA have their own QA Environment? What are the pros and cons?

If QA is using a dev environment, that environment is likely changing. It is hard to control the state of the environment with multiple people working against it.

Typically, QA environments mimic production environments more closely than development environments do. This helps ensure that functionality is validated in an environment that is more production-like.

Developers tend to have lots of tools and things running in their environment that could affect QA validation.

Why are there so many environments in software development?

From an IT (information technology) standpoint, having these disparate systems for making changes, testing changes, and verifying that the changes are correct is an attempt to make sure that your PRODUCTION system remains protected from human and mechanical error.

DEVelopment is usually used by the developers (i.e. programmers, configurators, etc.) as they work on individual programs, apply program patches or updates, or turn on (configure) new functionality.
Most, if not ALL, changes are done here. Dev will usually contain a much smaller set of data than production.

The completed changes are then "migrated" to QA.

QA (Quality Assurance) will usually have a larger amount of data, and generally NO changes or updates are made directly in this environment. It is used by both the developers (IT) and selected "users" to test the migrated changes in a pseudo-production environment, to see if the changes actually work correctly.

QA and the STAGE environments may, in some sites, be one and the same. Or they may be separate platforms.

A STAGE environment can have different roles depending on the shop.
1) It can be the QA/STAGE environment that has an exact copy of production, used for both QA and system testing (testing of the system when a lot of updates/changes or an upgrade is going to go into production).

2) It may not have much data at all but is a place to keep track of the final changes that are going to go into PRODUCTION.

3) It may be an exact copy of PRODUCTION in both hardware and software, which may have all the changes that are happening in production applied to it in a delayed fashion. Should something happen in production (e.g. hardware failure, software failure, person failure (Oops - just deleted all of the product pricing)), the STAGE environment can be brought into PRODUCTION mode.

Generally, only after the changes in the QA/Stage system have been "signed off" by the programmer, user (business), supervisor authority, etc., will the changes be migrated to PRODUCTION.

PRODUCTION is the actual environment used by the company, institution, organization, etc. for its processing needs.

You may also see a few other environments as well...

SANDBOX - Another copy of Development, QA, or Production used to test major upgrades of hardware or software before applying them to the DEV, QA/STAGE, and PRODUCTION platforms.

DISASTER RECOVERY - This may be an exact copy of PRODUCTION (like stage role #3 above) but located in another area/region from the primary data center, probably on another power and communication grid. It is the go-to site in case the primary site goes down, and it may even have the current changes happening in PRODUCTION applied to it in a synchronous or delayed fashion.

This is an overview answer, since many "shops" may have a plethora of different environments and architecture schemes based on their requirements, but DEV, QA/STAGE, PRODUCTION is the most basic form.

All in all, these environments are used to try to keep the PRODUCTION environment as protected and stable as possible.

P.S. Doesn’t always work.