Technical Factors:
- Meet customer requirements in terms of Functionality
- Meet customer expectations in terms of Performance, Usability, Security, etc…
Non-Technical Factors:
- Reasonable cost to purchase.
- Time to release.
1.1 Software Quality Assurance (SQA):
Monitoring and measuring the strength of the development process is called Software Quality Assurance. Ex: Life Cycle Testing.
1.2 Software Quality Control (SQC):
The validation of the final product before releasing it to the customer is called SQC.
In the above software development process, testing is conducted as a single stage by the development team. To improve quality in the software development process, project management should concentrate on multiple stages of development with multiple stages of testing.
Fish model software development: Upper angle is life cycle development and lower angle is life cycle testing.
Information gathering defines what and Analysis defines how.
BRS: Business Requirement Specification defines the requirements of the customer to be developed as software. This document is also known as Customer Requirement Specification (CRS) or User Requirement Specification (URS).
SRS: Software requirement specification defines functional requirements to be developed and system requirement to be used (Hardware and Software).
Example: BRS defines addition (Customer requirement). SRS defines how to solve customer requirement.
Review: It is a static testing technique. In this review responsible people will estimate completeness and correctness of corresponding documents.
HLD: High Level Design document defines the overall architecture of the system from root functionalities to leaf functionalities. This HLD is also known as Architectural Design or External Design.
LLD: Low Level Design document defines the internal logic of corresponding module (or) functionality. The LLD is also known as Internal Logic Design document.
Prototype: A sample model of an application without functionality is called prototype.
Program : A set of executable statements is called a Program.
Module : A set of programs is called as a Module or Unit.
Build : The set of modules is called as Software Build or Product.
White Box Testing: It is a coding level testing technique to verify the completeness and correctness of program structure. Programmers will follow this technique. It is also known as Glass Box Testing (or) Clear Box Testing (or) Open Box Testing.
Black Box Testing: It is a build level testing technique. In this testing test engineers will validate every feature depending on external interface.
Software Testing: The Verification and Validation of a software application is called software testing.
Verification: Are we building the product right?
Validation: Are we building the right product?
3 V-Model
V stands for Verification and Validation. This model defines mapping between development process and testing process.
3.1 Refinement form of V-Model
The real V-model is expensive to follow for small and medium scale organizations. Due to this reason, small and medium scale organizations maintain a separate testing team only for the System Testing phase.
3.2 Reviews during Analysis
In general the software development process starts with requirements gathering and analysis. In this phase, business analyst category people develop the BRS and SRS. After developing these documents, the same business analysts conduct review meetings to estimate their completeness and correctness, concentrating on the below checklist (BRS → SRS).
1. Are the requirements correct?
2. Are the requirements complete?
3. Are they achievable (w.r.t Technology)?
4. Are they reasonable (w.r.t Time)?
5. Are they testable?
3.3 Reviews during Design
After completion of analysis and its reviews, the design category people develop HLDs and LLDs. The same design category people conduct review meetings to estimate completeness and correctness of the design documents, concentrating on the below checklist (HLD → LLD).
1. Is the design understandable?
2. Are the correct requirements met?
3. Is the design complete?
4. Is the design followable (w.r.t coding)?
5. Does the design handle errors?
3.4 Unit Testing
After completion of design and their reviews, programmers will concentrate on coding to construct software physically. In this phase programmers will test every program through a set of white box testing techniques w.r.t LLD.
1. Basis paths testing.
2. Control structure testing.
3. Program technique testing (Time).
4. Mutation Testing
3.4.1 Basis Path Testing
In this coverage programmers will verify the execution of program without any syntax and run time errors. In this coverage programmers will execute a program more than one time to cover all areas of that program coding while running.
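As a minimal illustration (hypothetical program, not from any specific project), the Python sketch below runs one test per independent path of a small function, so every area of the program code is executed at least once without runtime errors:

```python
# Minimal sketch (hypothetical function): basis path testing exercises every
# independent path through the code at least once.

def grade(marks):                  # hypothetical program under test
    if marks < 0 or marks > 100:   # path 1: invalid input
        raise ValueError("marks must be 0-100")
    if marks >= 50:                # path 2: pass
        return "pass"
    return "fail"                  # path 3: fail

def test_basis_paths():
    # one test per independent path
    assert grade(75) == "pass"
    assert grade(20) == "fail"
    try:
        grade(150)
        assert False, "expected ValueError"
    except ValueError:
        pass

if __name__ == "__main__":
    test_basis_paths()
    print("all basis paths executed without runtime errors")
```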
3.4.2 Control Structure Testing
In this coverage programmers will concentrate on correctness of the program functionality. In this coverage programmers will check statements in the program including variables declaration, IF conditions, Loops, etc….
3.4.3 Program Technique Coverage
In this coverage programmers will verify the execution time of program to improve speed in processing. If the execution time is not reasonable then the programmers will change the structure of the program without disturbing functionality.
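A minimal sketch of this idea (hypothetical functions) is shown below: two structures of the same functionality are timed, and the faster structure can replace the slower one without disturbing the result:

```python
# Minimal sketch (hypothetical example): program technique coverage compares the
# execution time of two structures of the same functionality; if the first is too
# slow, the programmer restructures it without changing the result.
import timeit

def total_slow(n):
    s = 0
    for i in range(1, n + 1):    # original structure
        s += i
    return s

def total_fast(n):
    return n * (n + 1) // 2      # restructured, same functionality

assert total_slow(1000) == total_fast(1000)   # functionality undisturbed

print("slow:", timeit.timeit(lambda: total_slow(1000), number=1000))
print("fast:", timeit.timeit(lambda: total_fast(1000), number=1000))
```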
3.4.4 Mutation Testing
After completion of a program's testing, the corresponding programmers review the completeness and correctness of that testing. Mutation means a change in the coding of the program. In mutation testing, programmers make changes in various areas of the program and repeat the previously completed tests. If all the tests still pass on the changed program, the testing of that program is treated as incomplete and the programmer improves the tests. If at least one test fails on the changed program, the testing of that program is treated as complete and the programmer concentrates on further coding.
Note: In white box testing, the first three techniques test the program code, while mutation testing estimates the completeness and correctness of the testing done on the program.
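The sketch below (hypothetical program and test data) illustrates the mutation idea: a small change is made to the code and the existing tests are re-run; a surviving mutant shows that the test data is incomplete:

```python
# Minimal sketch (hypothetical example): mutation testing changes the code slightly
# and re-runs the existing tests; if no test fails, the test set is incomplete.

def is_adult(age):
    return age >= 18        # original program

def is_adult_mutant(age):
    return age > 18         # mutant: >= changed to >

tests = [(25, True), (10, False)]   # original test data misses the boundary

def run(fn):
    return all(fn(age) == expected for age, expected in tests)

print(run(is_adult))          # True  -> original passes
print(run(is_adult_mutant))   # True  -> mutant survives: tests are incomplete
tests.append((18, True))      # add boundary test data
print(run(is_adult_mutant))   # False -> mutant killed: testing is adequate
```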
3.5 Integration Testing
After completion of dependent programs' development and unit testing, programmers interconnect the programs to construct a complete software build. In this stage programmers verify the integration of programs using four types of approaches.
a. Top Down Approach.
b. Bottom Up Approach.
c. Hybrid Approach.
d. System Approach.
3.5.1 Top Down Approach
In this approach the programmers interconnect the main module with some of the sub modules. In the place of the remaining sub modules, programmers use temporary programs called Stubs.
3.5.2 Bottom up Approach
In this approach the programmers interconnect sub modules without connecting them to the main module. In place of the main module, programmers use a temporary program called a Driver.
3.5.3 Hybrid Approach
It is a combined approach of Top Down and Bottom Up approaches. This approach is also known as Sandwich approach.
3.5.4 System Approach
It is also known as Final Integration (or) Big Bang Approach. In this integration programmers interconnect the programs only after completion of total development.
Note: In general the programmers interconnect programs through any one of the above methods depending on circumstances.
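As an illustration of stubs and drivers (hypothetical modules, not from any specific project), the sketch below shows a stub standing in for an unfinished sub module during top-down integration and a driver standing in for the main module during bottom-up integration:

```python
# Minimal sketch (hypothetical modules): in top-down integration a stub stands in
# for an unfinished sub module; in bottom-up integration a driver stands in for
# the unfinished main module.

def discount_stub(amount):          # stub: temporary program replacing the real sub module
    return 0.0                      # returns a fixed, simplified value

def billing_main(amount, discount_fn):   # main module under top-down integration
    return amount - discount_fn(amount)

print(billing_main(100.0, discount_stub))   # main module tested before the sub module exists

def real_discount(amount):          # finished sub module, tested bottom-up
    return amount * 0.10

def driver():                       # driver: temporary program replacing the main module
    assert real_discount(200.0) == 20.0
    print("sub module verified through driver")

driver()
```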
3.6 System Testing
After completion of Integration Testing and receiving the build from the development team, the testing team concentrates on system testing, conducted using black box testing techniques.
System Testing is divided into 3 sub stages.
1. Usability Testing.
2. Functional Testing
3. Non-Functional Testing.
3.6.1 Usability Testing
After receiving the software build from the development team, the testing team conducts usability testing. In this test the testing team estimates the user-friendliness of all screens in the software build. There are two sub tests.
3.6.1.1 User Interface Testing or UI Testing
In this test, the testing team will apply below 3 factors on every screen of the software build.
· Ease of use: To estimate understandability of screen.
· Look and Feel: To estimate attractiveness of screen.
· Speed in Interface: To verify that the navigation needed to complete a task is short.
3.6.1.2 Manual Support Testing
During this test the testing team will validate the correctness and completeness of help documents. These help documents are also known as User Manuals.
Case Study:
3.6.2 Functional Testing
It is a mandatory testing level in testing team responsibilities. During this test, testing team will concentrate on “Meet customer Requirements” through below sub tests.
a. Requirement Testing.
b. Sanitation Testing.
3.6.2.1 Requirements Testing
It is also known as Functionality Testing. During this test the responsible testing team will apply different coverage techniques as discussed below on the functionalities of software build.
· GUI Coverage / Behavioral Coverage: Changes in properties of objects in screens while operating.
· Error Handling Coverage: To prevent wrong operation on screens.
· Input Domain Coverage: Testing correct type and size of input values
· Manipulations Coverage: Returning correct output values.
· Back End Coverage: Valid impact of screens operations on back end data base tables.
· Functionalities Order Coverage: The arrangements of screens in the software build with respect to order of functionalities.
3.6.2.2 Sanitation Testing
During this test the testing team will concentrate on extra functionalities with respect to requirements of the customer. This testing is also known as garbage testing.
3.6.3 Non-Functional Testing
After completion of user interface and functional testing, the testing team will concentrate on Non-Functional Testing to validate quality characteristics of software build Like Security and Performance.
3.6.3.1 Recovery Testing
This testing is also known as Reliability Testing. During this test, the testing team validates whether the software build recovers from an abnormal state to the normal state.
3.6.3.2 Compatibility Testing
It is also known as Portability Testing. During this test, the testing team validates whether the software build runs on the customer expected platforms or not.
3.6.3.3 Configuration Testing
It is also known as hardware compatibility testing. During this test the testing team validates whether the software build supports different technology hardware devices or not.
Example: Different technology printers.
Different topology networks, etc….
3.6.3.4 Inter Systems Testing
It is also known as End-to-End Testing. During this test the testing team validates whether the software build co-exists with other software applications to share common resources.
Example: Sharing data, sharing hardware devices, printers, speakers, sharing memory, etc….
3.6.3.5 Installation Testing
During this test, the testing team sets up an environment configured like the customer site and practices installation of the software build in that environment.
3.6.3.6 Load Testing
The execution of the software build under the customer expected configuration and customer expected load to estimate speed of processing is called load testing. Here, load means the number of concurrent users working on the software. This is also known as scalability testing.
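A minimal sketch of the load testing idea is given below (the URL and user count are hypothetical; real projects normally use tools such as LoadRunner or JMeter):

```python
# Minimal sketch (hypothetical target): simulate the customer-expected number of
# concurrent users against a URL and measure response time.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8080/login"   # hypothetical application under test
CONCURRENT_USERS = 25                 # customer-expected load

def one_user():
    start = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=10) as resp:
        resp.read()
    return time.perf_counter() - start

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
        timings = list(pool.map(lambda _: one_user(), range(CONCURRENT_USERS)))
    print("average response time: %.3f s" % (sum(timings) / len(timings)))
```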
3.6.3.7 Stress Testing
The execution of the software build under customer expected configuration and various load levels from low to peak is called stress testing. In this testing, testing team will concentrate on load handling by the software build.
3.6.3.8 Storage Testing
Testing whether the system meets its specified storage objectives.
Testing the data of different formats and in different devices. Verifying the efficiency of data storage in devices and proper retrieval of the data.
3.6.3.9 Data Volume Testing
Volume testing refers to testing a software application with a certain amount of data. This amount can, in generic terms, be the database size or it could also be the size of an interface file that is the subject of volume testing. For example, if you want to volume test your application with a specific database size, you will expand your database to that size and then test the application’s performance on it.
Example: MS Access supports a maximum database size of 2 GB.
3.6.3.10 Parallel Testing
It is also known as Comparative Testing. During this test, the testing team compares the software build with competitive software in the market or with an old version of the same software build to estimate completeness. This is applicable only for software products, not for software applications.
3.7 User Acceptance Testing (UAT)
After completion of all possible Functional and Non-Functional tests, the project manager concentrates on user acceptance testing to collect feedback from customer site people. There are two approaches to conduct UAT: Alpha (α) testing and Beta (β) testing.
Alpha (α) Testing:
1. Conducted on software applications.
2. At the development site.
3. By real customers.

Beta (β) Testing:
1. Conducted on software products.
2. In a customer-site-like environment.
3. By customer-site-like people.

In both approaches the goal is to collect feedback.
3.8 Testing during maintenance
After completion of the user acceptance test and the resulting modifications, project management concentrates on release team formation with a few developers, a few testers and a few hardware engineers. This release team goes to the customer site and conducts port testing.
During this port testing the release team concentrates on the below factors at the customer site.
· Compact installation
· Overall functionality
· Input devices handling
· Output devices handling
· Secondary storage devices handling
· Co-existence with other software to share common resources
· Operating system error handling
After completion of port testing, release team provides training sessions to customer site people.
During utilization of the software, customer site people send change requests to our organization. There are two types of change requests to be solved.
4 Testing Terminology
Monkey Testing
A Test Engineer conducting a test on application build through the coverage of main activities only is called monkey testing or chimpanzee testing.
Exploratory Testing
A tester conducting testing on an application build through the coverage of activities level by level is called exploratory testing.
Ad-Hoc Testing
A tester conducting a test on an application build with respect to predetermined ideas is called ad-hoc testing.
Bigbang Testing
An organization conducting a single stage of testing after completion of entire modules development is called big bang testing or informal testing.
Incremental Testing
An organization following multiple stages of testing from document level to system level is called incremental testing or formal testing.
Example: LCT (life cycle testing).
Sanity Testing
Whether the build released by the development team is stable for complete testing to be applied or not?
This observation is called sanity testing or tester acceptance testing (TAT) or build verification testing (BVT).
Smoke Testing
An extra shake-up over sanity testing is called smoke testing. In smoke testing the test engineer tries to find the reason why the build is not working, before starting testing.
Static versus Dynamic Testing
A tester conducting a test on an application build without executing it is called static testing.
Example: Usability, Alignment, Font, Style …..Etc.
A tester conducting a test through the execution of the application build is called dynamic testing.
Example: Functional, Performance and Security Testing.
Manual Vs Automation Testing
A Test Engineer conducts a test on application build without using any third party testing tool is called Manual Testing.
A tester conducting a test on an application build with the help of a testing tool is called automation testing.
Test impact indicates test repetition with multiple test data.
Example: functionality testing.
Test criticality indicates the complexity of executing the test manually.
Example: load testing.
Re-Testing
The re-execution of a test on the same application build with multiple test data is called re-testing.
Ex: multiplication functionality
Expected: Result = Input 1 * Input 2
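For illustration, the sketch below re-executes the same multiplication check with multiple test data using pytest parametrization (the function name is hypothetical):

```python
# Minimal sketch: re-testing the same functionality with multiple test data sets
# (Expected: Result = Input 1 * Input 2). The function under test is hypothetical.
import pytest

def multiply(a, b):          # functionality under test (hypothetical)
    return a * b

@pytest.mark.parametrize("input1, input2, expected", [
    (2, 3, 6),
    (0, 9, 0),
    (-4, 5, -20),
    (10, 10, 100),
])
def test_multiply_retest(input1, input2, expected):
    assert multiply(input1, input2) == expected
```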
Regression Testing
The re-execution of selected tests on a modified build, to ensure the bug fix works and to check for side effects, is called regression testing.
Error : A mistake in coding is called an error.
Defect: A mismatch found by a test engineer during testing, due to mistakes in coding, is called a defect or issue.
Bug : A defect accepted by developers to be solved is called a bug.
5.1 Test Policy
It is a company level document and is developed by quality control people (almost at the management level). This document defines the "testing objective" to be achieved.
Address and location of company

Testing Definition   : Verification + Validation
Testing Process      : Proper planning before starting testing
Testing Standard     : 1 defect per 250 LOC / 1 defect per 10 FP
Testing Measurements : QAM, TMM, PCM

Signature of C.E.O.
LOC – Lines of code.
FP – Functional point.
Example: No of screens / No of forms / No. of reports / No. of inputs / No. of outputs / No. of queries.
Using functional points we can estimate the size of a project.
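For example (illustrative figures only), under the above standard a build of 25,000 lines of code would be expected to contain no more than about 25,000 / 250 = 100 defects, and a project of about 500 functional points no more than about 500 / 10 = 50 defects.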
QAM - Quality Assessment Measurements.
TMM - Test Management Measurements.
PCM - Process Capability Measurements.
5.2 Test Strategy
It is also a company level document and developed by quality analyst people (project manager level). This strategy document defines testing approach to be followed by testing team.
Scope and objective:
The purpose of testing in our organization.
Business issues
Budget control for testing
Ex:
Testing approach
mapping between development stages and testing issues. Ex: v-model
Testing Issues | Information Gathering & Analysis | Design | Coding | System Testing | Maintenance
Ease of Use    | X | X | √ | √ | Depends on change request
Authorization  | √ | √ | √ | √ | Depends on change request
...            |   |   |   |   |
Test Responsibility Matrix (TRM) / Test Matrix (TM)
Test deliverables
names of testing documents to be prepared by testing team during every project testing.
Roles and responsibilities
names of jobs in testing team and responsibilities of every job during testing.
Communication and status reporting
required negotiation between every two consecutive jobs in testing team.
Test automation and testing tool
purpose of automation and availability of testing tools in your organization.
Defect reporting and tracking
required negotiation between testing team and development team when testers got mismatches in testing.
Testing measurements and metrics
QAM, TMM, PCM
Risks and mitigations
expected failures raised during testing and solutions to overcome.
Change and configuration management
how to handle sudden changes in customer requirements during testing.
Training plan
required number of sessions to understand customer requirements by testing team.
TEST FACTORS OR TESTING ISSUES
To define quality software, software engineering people use 15 factors/issues.
Authorization
whether a user is valid or not to connect to the application.
Access control
whether a valid user has permission to use a specific service or not.
Audit trail
maintains metadata about operations.
Continuity of processing
integration of modules.
Correctness
Meet customer requirements in terms of functionality.
Coupling
Co-existence with other software applications to share common resources.
Ease of use
user friendliness of screens.
Ease of operate
installation, uninstallation, dumping, downloading, uploading, etc.
File integrity
creation of backups during operations.
Reliability
recover from abnormal situations.
Portable
run on different platforms.
Performance
speed of processing.
Service levels
order of functionalities.
Maintainable
whether our application build is long time serviceable to customer site people or not?
Methodology
whether our testing team is following standards or not? ( during testing)
Case study
TEST FACTORS VS TESTING TECHNIQUES
Authorization:
Security testing (separate testing team)
Functionality/ requirement testing (common testing team)
Access control:
Security testing (separate testing team)
Functionality/ requirement testing (common testing team)
Audit trail
functionality/ requirement testing
Continuity of processing
integration testing (top down/ bottom up/ hybrid)
Correctness
functionality testing
Coupling
intersystem testing
Ease of use
user interface testing , manual support testing
Ease of operate
installation testing
File integrity
functionality/ requirement testing
Recovery testing
Reliability
recovery testing (1 user level)
Stress testing (peak load)
Portable
compatibility testing
Configuration testing
Performance
load testing, stress testing, storage testing, data volume testing.
Service levels
functionality/ requirements testing (1 user level)
Stress testing (peak load level)
Maintainable
compliance testing
methodology
compliance testing
Compliance testing
Whether the testing team is following standards or not during testing is called compliance testing. Compliance means adherence to the complete plan.
5.3 Test Methodology
It is a project level document. This document defines the required testing approach for the corresponding project's testing. Project manager level people develop the test methodology depending on the company level test strategy. Due to this reason, test methodology is also known as a refinement form of the test strategy.
To develop the test methodology, the project manager / quality analyst follows the below approach before starting every project's testing.
Step 1 :- collect test strategy.
Step 2 :- identify current project type.
Project Type | Information Gathering & Analysis | Design | Coding | System Testing | Maintenance
Traditional  | √ | √ | √ | √ | √
Outsourcing  | √ | √ | √ | √ | √
Maintenance  | √ | √ | √ | √ | √
Note: Depending on the project type, the project manager deletes some of the columns from the TRM (test responsibility matrix) for this project's testing.
Step 3: study project requirements.
Note: Depending on the requirements in the project, the PM deletes unwanted factors (rows) from the TRM for this project's testing.
Step 4: determine the scope of project requirements.
Note: Depending on expected future enhancements, the PM adds back some of the previously deleted factors to the TRM for this project's testing.
Step 5: identify tactical risks.
Note: Depending on the analyzed risks, the PM deletes some of the factors from the selected TRM for this project's testing.
CASE STUDY:
  15  ← Test factors
  -3  ← Requirements
  ----
  12
  +1  ← Scope of requirements
  ----
  13
  -4  ← Risks
  ----
   9  ← Finalized to be applied on this project
Step 6: finalize the TRM for current project testing.
Step 7: prepare the system test plan.
Step 8: prepare module test plans.
TESTING PROCESS:
PET PROCESS
This process was developed by HCL, Chennai. It is also a refinement form of the V-model and defines the mapping between the development process and the testing process. In this process model, organizations maintain a separate testing team for functional and system testing; the remaining stages of testing are done by developers.
PET stands for Process, Experts, Tools and Technology.
5.4 Test Plan
After completion of test initiation and testing process finalization, test lead category people concentrate on test plan document preparation covering "what to test?", "how to test?", "when to test?" and "who will test?". In this test plan document preparation the test lead follows the below approach.
Testing team formation: in general test planning process starts with testing team formation. In this stage test lead is depending on below factors.
→ Availability of test engineers
→ Test duration
→ Availability of test environment resources
(These three factors are interdependent.)
Case study:
Test Duration: C/S, Web, ERP → 3 to 5 months of system testing
System s/w → 7 to 9 months of system testing
Machine critical → 12 to 15 months of system testing
Team Size: Developers : Testers = 3 : 1
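For example, with the above 3 : 1 ratio, a project staffed with 12 developers would typically be supported by about 4 test engineers.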
Identify Tactical Risks: After formation of testing team, test lead is analyzing selected team level risks. This risk analysis is also known as Root Cause Analysis.
Ex: Risk 1 : lack of knowledge of testing team on that domain.
Risk 2 : lack of budget (time).
Risk 3 : lack of resources (testing tools not available)
Risk 4 : lack of test data (improper documents)
Risk 5 : delays in delivery
Risk 6 : lack of development process rigor
Risk 7 : lack of communication (between the testing team and the development team)
Prepare test plan: After completion of testing team formation and risk analysis, the test lead concentrates on test plan document preparation in IEEE format (Institute of Electrical and Electronics Engineers).
Format:
Test plan ID: unique number / name.
Introduction: about project
Test items: names of all modules in that project ex: website.
Features to be tested: new module names for test design. (What to test)
Features not to be tested: which ones and why not? (Copy test cases from server)
Approach: selected list of testing techniques by project manager to be applied on above modules,(finalized TRM)
Testing tasks: necessary operations to do before start every module testing.
Suspension criteria: possible raised problems during above modules testing.
Ex: exception handling.
Feature pass/fail criteria: when a module is pass and when a module is fail.
Test environment: required hardware and software to conduct testing on the above modules. Ex: WinRunner
Test deliverables: names of testing documents to be prepared during the above modules' testing.
EX: test cases, test procedures, test scripts, test log, defect reports for every module.
Staff and training needs: names of selected test engineers for this project testing
Responsibilities: mapping between names of test engineers and names of modules.
(Work allocation)
Schedule: dates and times.
Risks and mitigations: raised problems during testing and solutions to overcome.
Approvers: signatures of project manager and test lead.
Review test plan
After completion of first copy of test plan document development, test lead conducts a review on that document for completeness and correctness. In this review meeting test lead concentrate on coverage analysis.
Coverage analysis:
→ Business requirement based coverage (what to test?)
→ TRM based coverage (how to test?)
→ Risks based coverage (when & who to test?)
After finalizations of test plan, test lead is providing some training sessions to selected testing team on project requirements.
5.5 Test Design
After finalization of test plan and after completion of training sessions, test engineers are concentrating on test cases development for responsible modules. There are three methods to prepare test cases such as:
· Business logic based test case design (depending on the SRS)
· Input domain based test case design (depending on design documents)
· User interface based test case design
Business logic based test case design
In general, test engineers prepare the maximum number of test cases depending on the use cases in the SRS. Every use case describes functionality in terms of inputs, process and outputs.
From the above model test engineers are preparing test cases depending on that use case. Every use case is also known as functional specification. Every test case describes a testable condition to be applied on build.
To study use cases, test engineers are following below approach.
Step 1: collect the required use cases for the responsible modules.
Step 2: select a use case and its dependencies from the above collected list of use cases.
Step 2.1: identify the entry condition (base state)
Step 2.2: identify the inputs required (test data)
Step 2.3: identify the output and outcome (expected)
Step 2.4: study the normal flow (navigation)
Step 2.5: study the end condition (end state)
Step 2.6: study alternative flows and exceptions
Step 3: prepare test cases depending on the above study of the use case
Step 4: review the test cases for completeness and correctness
Step 5: go to Step 2 until the study of all use cases is complete
Test case format
During test design, test engineers prepare test cases in IEEE format. Through this format test engineers document every test case.
Format:
Test case id: unique number/name
Test case name: the name of test conditions.
Feature to be tested: corresponding module or function name.
Test suite ID: the corresponding batch ID; this test case is a member of that batch.
Priority: the importance of test case in terms of functionality.
EX: P0 — basic functionality (requirements)
P1 — general functionality (recovery, compatibility, inter-systems, load, etc.)
P2 — cosmetic functionality (user interface)
Test environment: required hardware and software, including the testing tool, to execute this test case.
Test efforts: (person/hour) time to execute this test case .
EX: 20 min average time
Test duration: Date and time
Test setup: necessary tasks to do before start this case execution
Test procedure: this step-by-step procedure from base state to end state
Test case pass/fail criteria: when this case is pass/ when this case is fail
NOTE: In general, test engineers do not maintain the complete format for every test case, but they try to maintain the test procedure as mandatory for every test case.
Input domain based test case design
In general, test engineers prepare test cases depending on use cases or functional specifications in the SRS. Sometimes they also depend on design documents, because use cases do not provide complete information about the size and type of input objects. Due to this reason, test engineers study the data models in design documents.
EX: ER diagrams (entity relationship diagrams)
In this study, test engineers are following below approach
Step1: collect data models of responsible modules from design documents
Ex: ER-diagrams
Step2: study every input attribute in terms of size and type with constraints
Step3: prepare BVA and ECP for every input attribute in below format
I/P Attribute | ECP (Valid) | ECP (Invalid) | BVA (Size/Range): Min | BVA (Size/Range): Max
This table is called a DATA MATRIX. It provides information about every input object.
Step 4: identify critical and non-critical inputs in the above list
Ex:
Critical inputs are involved in internal manipulations; non-critical inputs are used only for printing purposes.
NOTE: If a test case covers an operation, then test engineers prepare a step-by-step procedure from base state to end state. If a test case covers an object, then test engineers prepare a data matrix.
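For illustration, the sketch below turns a data matrix row for a hypothetical "age" attribute (assumed valid range 18 to 60) into BVA and ECP test data using pytest:

```python
# Minimal sketch (hypothetical "age" attribute, assumed valid range 18-60): the data
# matrix row becomes BVA values (around Min and Max) and ECP classes (valid vs invalid).
import pytest

def validate_age(value):                 # hypothetical validation under test
    if not isinstance(value, int):
        return False
    return 18 <= value <= 60

# BVA: boundaries of the size/range
@pytest.mark.parametrize("value, expected", [
    (17, False), (18, True), (19, True),   # around Min
    (59, True), (60, True), (61, False),   # around Max
])
def test_age_bva(value, expected):
    assert validate_age(value) == expected

# ECP: one representative per valid / invalid equivalence class
@pytest.mark.parametrize("value, expected", [
    (35, True),        # valid class
    ("abc", False),    # invalid class: wrong type
    (-5, False),       # invalid class: out of range
])
def test_age_ecp(value, expected):
    assert validate_age(value) == expected
```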
User interface based test case design
To conduct usability testing, test engineers prepare test cases depending on global user interface conventions, the organization's rules and the interests of customer site people.
Example test cases:
Test case 1: spelling check
Test case 2: graphics check (alignment, font, style, color and other Microsoft six rules)
Test case 3: meaningful error messages
Test case 4: accuracy of data display
Test case 5: accuracy of data in the database as a result of user input
Test case 6: accuracy of data in the database as a result of external factors
Ex: file attachments, export files, import files, etc.
Test case 7: meaningful help messages
NOTE: Test case 1 to test case 6 indicate user interface testing and test case 7 indicates manual support testing.
Test design review
Before receiving build from development team to start test execution, test lead is analyzing the completeness and correctness of prepared test cases by test engineers through a review meeting. In this review test lead is depending on coverage analysis.
—Business requirement based coverage
—Use cases based coverage
—Data model based coverage
—User interface based coverage
—Test responsibility matrix based coverage
At the end of this review, the test lead prepares a requirements traceability matrix (RTM). This matrix defines the mapping between customer requirements and the prepared test cases. It is also known as a requirements validation matrix (RVM).
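A small illustrative RTM (hypothetical requirement and test case IDs) could look like the following:

Requirement ID | Requirement Description  | Test Case IDs           | Status
REQ_001        | User login               | TC_001, TC_002, TC_003  | Covered
REQ_002        | Funds transfer           | TC_010, TC_011          | Covered
REQ_003        | Monthly statement report | TC_020                  | Partially covered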
5.6 Test Execution
After completion of test design and its reviews, the testing team receives the initial build from the development team to start test execution.
Levels of test execution:
Level –0 testing on initial build
Level –1 testing on stable build
Level –2 testing on modified stable build
Level –3 testing on master build (ready to release)
Levels of Test Execution Vs Test Cases:
Level-0 → Initial build → All P0 test cases (basic functionality).
Level-1 → Stable build → All P0, P1 and P2 test cases as test batches.
Level-2 → Modified build → Selected P0, P1 and P2 test cases w.r.t. modifications.
Level-3 → Master build → Selected P0, P1 and P2 test cases w.r.t. bug density.
Build version control
In general, the testing team receives builds from the development team with the help of existing network protocols.
From the above model, the testing team receives builds from the development team through FTP, accessing the softbase on the network server. The softbase on the server consists of old builds and modified builds; development people assign a unique version number to each build, and this version numbering system is understandable to the testing team. For build version control, development people use version control tools.
Ex: VSS (Visual SourceSafe)
Test harness: test harness means readiness to start test execution.
Level-0 (sanity testing): After receiving the initial build from the development team, the testing team concentrates on sanity testing to estimate the stability of that build before complete testing is applied. In this preliminary testing, the testing team concentrates on the basic functionality of the build, covering the below factors.
→ Understandability
→ Operability
→ Observability
→ Controllability
→ Consistency
→ Simplicity
→ Maintainability
→ Automatability
Because of the above 8 factors, sanity testing is also known as testability testing or octangle testing, tester acceptance testing and build verification testing.
Test automation if possible: After receiving a stable build from the development team, test engineers create automated test scripts with the required checkpoints. Not all test cases are automatable; due to this reason, test engineers create automation test scripts for repeatable and complex test cases.
Case study: In general, testing teams are following selective automation only. In this selective automation test engineers are creating test scripts using testing tools for functionality or requirements test cases and load/stress test cases.
Test Execution Type | Testing Techniques | Testing Tools | Comments
Manual | UI testing, manual support testing | – | Easy to conduct
Manual / Automation | Functionality testing | WinRunner, QTP, Robot, SilkTest | Basic functionality testing is repeatable
Manual | Recovery, compatibility, configuration, inter-systems, installation, sanitation and parallel testing | – | No tools in market
Manual / Automation | Load and stress testing | LoadRunner, SQA LoadTest, Silk Performer, JMeter | Expensive and complex to conduct manually
Manual | Storage, data volume and security testing | – | No tools in market for this type of testing
Level-1 (comprehensive testing): After receiving a stable build and after completion of all possible automation, the testing team arranges test cases as batches. Every test batch consists of a set of dependent test cases. These test batches are also known as test suites or test sets. During execution of these test batches, test engineers prepare test log documents. A test log document consists of three types of entries.
–Passed, all expected values are equal to actual values
–Failed, any one expected value vary with actual
–Blocked, postponed due to incorrect parent functionality
Level-2 (regression testing): During Level-1/comprehensive testing, test engineers report mismatches to the development team. After receiving the modified build from the development team, test engineers concentrate on regression testing, following the below approach with respect to the seriousness of those mismatches.
Case1: If the development team resolved bug severity is high, then test engineers are re-executing all p0, all p1 and carefully selected p2 test cases on that modified build with respect to modifications.
Case2: If the development team resolved bug severity is medium, then test engineers are re-executing all p0, carefully selected p1 and some of p2 test cases with respect to modifications.
Case3: If the development team resolved bug severity is low then test engineers are re-executing some of p0, p1 and p2 test cases with respect to modifications.
Case4: If the development team released modified build due to sudden changes in customer requirements then test engineers are re-executing all p0, all p1 and carefully selected p2 test cases with respect to that change in the requirements.
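The sketch below (hypothetical test case data; the selection fractions are only illustrative) shows how the above severity-based selection could be expressed:

```python
# Minimal sketch (hypothetical data): selecting regression test cases on a modified
# build according to the severity of the resolved bug, following the cases above.

def select_regression_cases(test_cases, severity):
    """test_cases: list of dicts with 'id' and 'priority' (P0/P1/P2)."""
    p0 = [t for t in test_cases if t["priority"] == "P0"]
    p1 = [t for t in test_cases if t["priority"] == "P1"]
    p2 = [t for t in test_cases if t["priority"] == "P2"]
    if severity == "high":        # all P0, all P1, carefully selected P2
        return p0 + p1 + p2[: len(p2) // 2]
    if severity == "medium":      # all P0, selected P1, some P2
        return p0 + p1[: len(p1) // 2] + p2[: len(p2) // 4]
    # low severity: some of P0, P1 and P2
    return p0[: len(p0) // 2] + p1[: len(p1) // 4] + p2[: len(p2) // 4]

cases = [{"id": i, "priority": p} for i, p in enumerate(["P0", "P0", "P1", "P1", "P2", "P2"])]
print([t["id"] for t in select_regression_cases(cases, "high")])
```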
Error, Defect, Bug: A mistake in coding is called an error.
Coding errors found by the testing team during testing are called defects or issues.
Issues reported by the testing team and accepted by the development team to be solved are called bugs.
5.7 Test Reporting Or Defect Tracking
During level-1 and level-2 test execution, test engineers are reporting mismatches to development team in IEEE format.
Format:
defect id: unique number and name.
description: summary of defect
build version id: version number of build, in which test engineers found this defect
feature: the corresponding module name, in which test engineer found this defect
test case name: the name of test condition, during this case execution, test engineer found this defect
reproducible: Yes means the defect appears every time in test execution; No means the defect appears rarely in test execution
if yes, attach the test procedure
if no, attach a snapshot and strong reasons
found by: the name of the test engineer
detected on: date of submission
assigned to: the responsible person at development side to receive this defect
status: New (reporting the defect for the first time) or Reopen (re-reporting the defect)
severity: The seriousness of defect in terms of functionality
High – Not able to continue test execution with out resolving that defect
Medium – Able to continue remaining testing but compulsory to solve
Low – Able to continue remaining testing but optional to resolve (may/may not)
priority: the importance of this defect in terms of customer
suggested fix (optional): expected possibilities to resolve this defect by developers
fixed by: project manager/project lead
resolved by: programmer name
resolved on: date of resolving
resolution type:
approved by: signature of project manager
NOTE: in above format development people try to change priority of defect with respect to customer importance
Defect age: The time gap between defect reported date and defect resolved date is called defect age
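For illustration, defect age is a simple date difference:

```python
# Minimal sketch: defect age = resolved date - reported date (hypothetical dates).
from datetime import date

reported_on = date(2024, 3, 1)
resolved_on = date(2024, 3, 9)
defect_age = (resolved_on - reported_on).days
print("defect age:", defect_age, "days")   # 8 days
```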
Defect submission:
Large-scale organizations
Small & medium scale organizations
Defect status:
Bug life cycle:
Defect resolution type: During test execution, test engineers are reporting mismatches to development team as defects. After receiving defect reporting from testing team, development people are conducting bug-fixing review and they will send resolution type report to corresponding testing team. There are 12 types of resolutions to report to testing team.
duplicate: Rejected due to this defect equal to previously reported defect.
enhancement: rejected due to this defect related to future requirement of the customer
hardware limitation: rejected because this defect relates to limitations of hardware devices
software limitation: rejected because this defect relates to limitations of software technologies
not applicable: rejected due to improper meaning of defect
functions as designed: rejected due to coding is correct with respect to design logic
need more information: not accepted and not rejected, but developers require extra information to understand the defect
not reproducible: not accepted and not rejected, but developers require the correct procedure to reproduce that defect
no plan to fix it: not accepted and not rejected, but developers require extra time to fix it
fixed: accepted and ready to resolve
fixed indirectly (deferred): accepted but postponed to future version
user misunderstanding: requires extra negotiation between developers and testers
TYPES OF BUGS: During test execution either in manual or in automation, test engineers are finding below types of bugs.
User interface bugs: (low severity)
Ex1: spelling mistake (high priority)
Ex2: improper right alignment (low priority)
Error handling bugs: (medium severity)
Ex1: does not return an error message (high priority)
Ex2: complex meaning in error message (low priority)
Input domain bugs: (medium severity)
Ex1: allows invalid inputs (high priority)
Ex2: allows invalid types also (low priority)
Calculation bugs: (high severity)
Ex1: dependent outputs are wrong (application show stopper) (high priority)
Ex2: final output is wrong (module show stopper) (low priority)
Race condition bugs: (high severity)
Ex1: deadlock or hang (application show stopper) (high priority)
Ex2: improper order of functionalities (low priority)
Load condition bugs: (high severity)
Ex1: does not allow multiple users (high priority)
Ex2: does not allow customer expected load (low priority)
Hardware bugs: (high severity)
Ex1: not able to establish a connection to the hardware device (high priority)
Ex2: wrong output from the device (low priority)
Version control bugs: (medium severity)
Ex1: mismatches between two consecutive build versions
ID-control bugs: (medium severity)
Ex: wrong logo, logo missing, copyright window missing, wrong version number, software title mistake, team members' names missing, etc.
Source bugs: (medium severity)
Ex: mistakes in help documents
TEST CLOSURE: After completion of all possible test execution and bug resolving, the test lead concentrates on test closure to stop the testing process. In this review the test lead depends on the below factors.
Coverage analysis:
Business requirements based coverage
Use cases based coverage
Data model based coverage
Input domain based coverage
User interface based coverage
Test responsibilities matrix based coverage
Bug density:
Ex: Module A – 20%
    Module B – 20%
    Module C – 40% ← needs final regression
    Module D – 20%
    Total      100%
Analysis of deferred bugs: whether the deferred bugs are really postponable or not.
At the end of this review meeting, the test lead selects the high bug density modules in the application build for final regression (Level-3).
Above approach is also known as level-3 testing (or) final regression testing (or) pre-acceptance testing (or) post mortem testing
After completion of this final regression, testing team concentrate on user acceptance with the help of real customers (or) model customers.
USER ACCEPTANCE TESTING: After completion of test closure, test management concentrates on user acceptance testing to collect feedback from customer site people. There are two approaches to conduct user acceptance testing: alpha (α) testing and beta (β) testing.
SIGN OFF: After completion of user acceptance testing and the resulting modifications, the test lead prepares the final test summary report. It is a part of the "software release note" (SRN). This final test summary report consists of the below documents as members.
· Test methodology
· Test plan
· Requirements traceability matrix
· Automated test scripts
· Final bugs summary report
Bug Description | Found By | Feature | Severity | Status (Closed/Deferred) | Comments
→ Final test summary report (FTSR)
Case study: (five months of testing process)
Deliverable | Responsibility | Completion Time
Test cases preparation | Test engineers | 15-20 days
Test cases review | Test lead & test engineers | 4-5 days
Requirements traceability matrix | Test lead | 1-2 days
Test automation | Test engineers | 10-15 days
Test execution (Level-1 & Level-2) | Test engineers | 40-60 days
Defect reporting | Test engineer, test lead | Ongoing
Communication and status reporting | Test lead | Weekly twice
Test closure and final regression (Level-3) | Test lead & test engineer | 4-5 days
User acceptance test | Customer site people including testing team | 4-5 days
Sign off | Test lead | 2-3 days
Auditing: During testing and maintenance of software, project and test management people use three types of measurements and metrics, such as:
(1) Quality assessment measurement (QAM)
(2) Test management measurement (TMM)
(3) Process capability measurement (PCM)
Quality assessment measurement: (QAM) During soft ware testing process, quality analyst or project manager is using these measurements to estimate quality assurance level in that testing process. (Monthly once)
→ Stability:

Duration | No. of Bugs
20%      | 80%
80%      | 20%
→ Sufficiency:
· Requirements coverage (modules)
· Type-trigger analysis (what type of test completed)
Defect severity distribution: organization trend limit check.
Test management measurements: During a project testing process, test lead category people are using these measurements to estimate testing process coverage. (weekly twice)
→ Test status:
· Completed cases execution
· In execution
· Yet to execute
→ Delays in delivery:
· Bug arrival rate
· Bug resolution rate
· Bug ageing
Meet customer requirements in terms of Functionality
Meet customer expectations in terms of Performance, Usability, Security, etc…
Non-Technical Factors:
Reasonable cost to purchase.
Time to release.
1.1 Software Quality Assurance (SQA):
The Monitoring and Measuring the strength of development process is called Software Quality Testing. Ex: Life Cycle Testing.
1.2 Software Quality Control (SQC):
The validation of final product before releasing to the customer is called as SQC.
In the above Software Development process, testing is been conducted as a single stage by the development team. To improve quality in software development process, project management should concentrate on multiple stages of development with multiple stages of testing.
Fish model software development: Upper angle is life cycle development and lower angle is life cycle testing.
Information gathering defines what and Analysis defines how.
BRS: Business requirement specification defines the requirement the customer to be developed as software. This document is also known as Customer Requirement Specification (CRS) or User Requirement Specification (URS).
SRS: Software requirement specification defines functional requirements to be developed and system requirement to be used (Hardware and Software).
Example: BRS defines addition (Customer requirement). SRS defines how to solve customer requirement.
Review: It is a static testing technique. In this review responsible people will estimate completeness and correctness of corresponding documents.
HLD: High Level Design document defines the overall architecture of the system from root functionalities to leaf functionalities. This HLD is also known as Architectural Design or External Design.
LLD: Low Level Design document defines the internal logic of corresponding module (or) functionality. The LLD is also known as Internal Logic Design document.
Prototype: A sample model of an application without functionality is called prototype.
Program : A set of execute statements is called a Program.
Module : A set of programs is called as a Module or Unit.
Build : The set of modules is called as Software Build or Product.
White Box Testing: It is a coding level testing technique to verify the completeness and correctness of program structure. Programmers will follow this technique. It is also known as Glass Box Testing (or) Clear Box Testing (or) Open Box Testing.
Black Box Testing: It is a build level testing technique. In this testing test engineers will validate every feature depending on external interface.
Software Testing: The Verification and Validation of a software application is called software testing.
Verification: Are we building the product right?
Validation: Are we building the right product?
3 V-Model
V stands for Verification and Validation. This model defines mapping between development process and testing process.
3.1 Refinement form of V-Model
The real V-model is expensive to follow for small and medium scale organizations. Due to this reason, small and medium scale organizations maintains separate testing team for System Testing phase.
3.2 Reviews during Analysis
In general the software development process starts with requirements gathering and analysis. In this phase business analyst category people will develop BRS and SRS. After development of the documents the same business analyst category people will conduct review meetings to estimate completeness and correctness of the documents. In this review meeting, the same business analyst category people will concentrate on below checklist. BRS è SRS.
1. Are the requirements correct?
2. Are the requirements complete?
3. Are they achievable (w.r.t Technology)?
4. Are they reasonable (w.r.t Time)?
5. Are they testable?
3.3 Reviews during Design
After completion of analysis and then reviews the design category people will develop HLD’s and LLD’s. The same design category people will conduct review meetings to estimate completeness and correctness of the design documents. In the review the same design category people will concentrate on below checklist HLD è LLD.
1. Does the design understandable?
2. Are the correct requirements met?
3. Does the design complete?
4. Does the design follow able (w.r.t Coding)?
5. Do they handle errors?
3.4 Unit Testing
After completion of design and their reviews, programmers will concentrate on coding to construct software physically. In this phase programmers will test every program through a set of white box testing techniques w.r.t LLD.
1. Basis paths testing.
2. Control structure testing.
3. Program technique testing (Time).
4. Mutation Testing
3.4.1 Basis Path Testing
In this coverage programmers will verify the execution of program without any syntax and run time errors. In this coverage programmers will execute a program more than one time to cover all areas of that program coding while running.
3.4.2 Control Structure Testing
In this coverage programmers will concentrate on correctness of the program functionality. In this coverage programmers will check statements in the program including variables declaration, IF conditions, Loops, etc….
3.4.3 Program Technique Coverage
In this coverage programmers will verify the execution time of program to improve speed in processing. If the execution time is not reasonable then the programmers will change the structure of the program without disturbing functionality.
3.4.4 Mutation Testing
After completion of a program testing, the corresponding programs will review the completeness and correctness of the program testing. Mutation means that a change in coding of the program, in this Mutation testing programmers will perform changes in various areas in the program and repeat previously completed tests. If all the tests are passed on the changed program, then the program will continue testing on some program. If any one of the tests is failed on the change in program, then the program will concentrate on further coding.
Note: in white box Testing techniques, the first 3 techniques will test program code and the mutation testing will estimate the completeness and correctness of the test on the program.
3.5 Integration Testing
After completion of dependent programs development and unit testing, programmers will inter connect the programs to construct a complete software build. In this stage programmers will verify integration of programs in four types of approaches.
a. Top Down Approach.
b. Bottom Up Approach.
c. Hybrid Approach.
d. System Approach.
3.5.1 Top Down Approach
In this approach the programmers will inter connect main models to some of the modules in the place of remaining sub modules programmers will use temporary programs called Stubs.
3.5.2 Bottom up Approach
In this approach the programmers will inter connect sub modules without connection to the main module. Programmers will use a temporary program instead of main module called Driver.
3.5.3 Hybrid Approach
It is a combined approach of Top Down and Bottom Up approaches. This approach is also known as Sandwich approach.
3.5.4 System Approach
It is also known as Final Integration (or) Big Bang Approach. In this integration programmers will inter connect programs after completion of total development.
Note: In general the programmers will inter connect programs through any one of the above methods depending on circumstances.
3.6 System Testing
After completion of Integration Testing, and receiving the build from development team, the testing team will concentrate on system testing to conduct using Black Box Testing techniques.
System Testing is divided into 3 sub stages.
1. Usability Testing.
2. Functional Testing
3. Non-Functional Testing.
3.6.1 Usability Testing
After receiving software build from development team, the testing team will conduct usability testing. In this test the testing team will estimate “User Friendly ness” of all screens in the software build. There are two sub tests.
3.6.1.1 User Interface Testing or UI Testing
In this test, the testing team will apply below 3 factors on every screen of the software build.
· Ease of use: To estimate understandability of screen.
· Look and Feel: To estimate attractiveness of screen.
· Speed in Interface: To estimate length of navigation as short.
3.6.1.2 Manual Support Testing
During this test the testing team will validate the correctness and completeness of help documents. These help documents are also known as User Manuals.
Case Study:
3.6.2 Functional Testing
It is a mandatory testing level in testing team responsibilities. During this test, testing team will concentrate on “Meet customer Requirements” through below sub tests.
a. Requirement Testing.
b. Sanitation Testing.
3.6.2.1 Requirements Testing
It is also known as Functionality Testing. During this test the responsible testing team will apply different coverage techniques as discussed below on the functionalities of software build.
· GUI Coverage / Behavioral Coverage: Changes in properties of objects in screens while operating.
· Error Handling Coverage: To prevent wrong operation on screens.
· Input Domain Coverage: Testing correct type and size of input values
· Manipulations Coverage: Returning correct output values.
· Back End Coverage: Valid impact of screens operations on back end data base tables.
· Functionalities Order Coverage: The arrangements of screens in the software build with respect to order of functionalities.
3.6.2.2 Sanitation Testing
During this test the testing team will concentrate on extra functionalities with respect to requirements of the customer. This testing is also known as garbage testing.
3.6.3 Non-Functional Testing
After completion of user interface and functional testing, the testing team will concentrate on Non-Functional Testing to validate quality characteristics of software build Like Security and Performance.
3.6.3.1 Recovery Testing
This testing is also known as Reliability Testing. During this test, the testing team will validate that whether the software build is changing from abnormal state to normal state.
3.6.3.2 Compatibility Testing
It is also known as Portability Testing. During this test, the testing team will validate that whether the software build is running on the customer expected platforms or not?.
3.6.3.3 Configuration Testing
It is also known as hardware compatibility testing. During this test the testing team will validate that whether the software build is supporting different technology hardware devices or not?
Example: Different technology printers.
Different topology networks, etc….
3.6.3.4 Inter Systems Testing
It is also known as End-to-End Testing. During this test the testing team will validate that whether the software build co-exists with other software applications to share common resources.
Example: Sharing data, sharing hardware devices, printers, speakers, sharing memory, etc….
3.6.3.5 Installation Testing
During this test, the testing team will establish customer site like configured environment. The testing team is practice installation of software build in to that environment.
3.6.3.6 Load Testing
The execution of the software build under the customer-expected configuration and customer-expected load, to estimate the speed of processing, is called Load Testing. Here, load means the number of concurrent users working on the software. This is also known as Scalability Testing. A minimal sketch of such a run follows.
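A minimal load-test sketch in Python, assuming a hypothetical do_transaction function that stands in for a real user operation (for example an HTTP request or a database query); the user count and timings are illustrative values, not taken from any real project.

# Minimal load-test sketch: run the customer-expected number of concurrent
# users against one transaction and report the observed processing speed.
import time
from concurrent.futures import ThreadPoolExecutor

CONCURRENT_USERS = 50          # assumed customer-expected load

def do_transaction(user_id: int) -> float:
    # Placeholder for one real user operation; returns its response time.
    start = time.perf_counter()
    time.sleep(0.05)           # stand-in for the real work
    return time.perf_counter() - start

def run_load_test() -> None:
    # Run all users concurrently and collect per-user response times.
    with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
        timings = list(pool.map(do_transaction, range(CONCURRENT_USERS)))
    print(f"users={CONCURRENT_USERS} "
          f"avg={sum(timings)/len(timings):.3f}s max={max(timings):.3f}s")

if __name__ == "__main__":
    run_load_test()

In a real load test the placeholder would be replaced by the actual transaction and the tool (for example LoadRunner or JMeter, mentioned later) would also control ramp-up and reporting.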
3.6.3.7 Stress Testing
The execution of the software build under the customer-expected configuration and various load levels, from low to peak, is called Stress Testing. In this testing, the testing team concentrates on how the software build handles the load.
3.6.3.8 Storage Testing
Testing whether the system meets its specified storage objectives: data of different formats is tested on different devices, verifying the efficiency of data storage on those devices and the proper retrieval of the data.
3.6.3.9 Data Volume Testing
Volume testing refers to testing a software application with a certain amount of data. This amount can, in generic terms, be the database size or it could also be the size of an interface file that is the subject of volume testing. For example, if you want to volume test your application with a specific database size, you will expand your database to that size and then test the application’s performance on it.
Example: MS Access supports a maximum database size of 2 GB.
3.6.3.10 Parallel Testing
It is also known as Comparative Testing. During this test, the testing team compares the software build with competing software in the market, or with an old version of the same software, to estimate completeness. This is applicable only to software products, not to software applications.
3.7 User Acceptance Testing (UAT)
After completion of all possible functional and non-functional tests, the project manager concentrates on user acceptance testing to collect feedback from customer-site people. There are two approaches to conduct UAT: Alpha (α) testing and Beta (β) testing.
Alpha (α) Test:
1. Conducted on software applications.
2. At the development site.
3. By real customers.
Beta (β) Test:
1. Conducted on software products.
2. In a customer-site-like environment.
3. By customer-site-like people.
In both approaches the goal is to collect feedback from the customer side.
3.8 Testing during maintenance
After completion of user acceptance testing and the resulting modifications, project management concentrates on forming a release team with a few developers, a few testers and a few hardware engineers. This release team goes to the customer site and conducts port testing.
During this port testing the release team concentrates on the factors below at the customer site.
· Compact installation
· Overall functionality
· Input devices handling
· Output devices handling
· Secondary storage devices handling
· Co-existence with other software to share common resources
· Operating system error handling
After completion of port testing, the release team provides training sessions to customer-site people.
During utilization of the software, customer-site people send change requests to the organization. There are two types of change requests to be solved.

4 Testing Terminology
Monkey Testing
A test engineer testing an application build by covering only its main activities is called Monkey Testing or Chimpanzee Testing.
Exploratory Testing
A tester testing an application build by covering its activities level by level is called Exploratory Testing.
Ad-Hoc Testing
A tester testing an application build with respect to predetermined ideas is called Ad-Hoc Testing.
Bigbang Testing
An organization conducting a single stage of testing after the development of all modules is complete is doing Big Bang Testing, also called informal testing.
Incremental Testing
An organization following multiple stages of testing, from document level to system level, is doing Incremental Testing, also called formal testing.
Example: LCT (life cycle testing).
Sanity Testing
Whether the build released by the development team is stable enough for complete testing to be applied or not: this observation is called Sanity Testing, Tester Acceptance Testing (TAT) or Build Verification Testing (BVT).
Smoke Testing
An extra shake-up of sanity testing is called Smoke Testing. In this phase the test engineer tries to find the reason why the build is not working before testing starts.
Static versus Dynamic Testing
A test conducted on the application build without executing it is called Static Testing.
Example: Usability, Alignment, Font, Style …..Etc.
A test conducted by executing the application build is called Dynamic Testing.
Example: Functional, Performance and Security Testing.
Manual Vs Automation Testing
A test engineer conducting a test on an application build without using any third-party testing tool is doing Manual Testing.
A test engineer conducting a test on an application build with the help of a testing tool is doing Automation Testing.
Test impact indicates repetition of a test with multiple test data.
Example: functionality testing.
Test criticality indicates the complexity of executing the test manually.
Example: load testing.
Re-Testing
The re-execution of a test on the same application build with multiple test data is called Re-Testing.
Example: a multiplication feature re-tested with several input pairs, where the expected result is Result = Input1 * Input2 (see the sketch below).
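A minimal data-driven re-testing sketch in Python, assuming a hypothetical multiply function as the feature under test; the test data values are illustrative only.

# Re-testing sketch: the same multiplication test case is re-executed with
# multiple sets of test data.
def multiply(a, b):
    # Hypothetical feature under test.
    return a * b

test_data = [(2, 3), (0, 7), (-4, 5), (10, 10)]   # multiple test data sets

for input1, input2 in test_data:
    expected = input1 * input2          # Expected: Result = Input1 * Input2
    actual = multiply(input1, input2)
    status = "Passed" if actual == expected else "Failed"
    print(f"inputs=({input1},{input2}) expected={expected} actual={actual} -> {status}")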
Regression Testing
The re-execution of selected tests on a modified build, to ensure that the bug fix works and that no side effects have occurred, is called Regression Testing.
Error: a mistake in coding is called an Error.
Defect: a mismatch found by a test engineer during testing, caused by mistakes in coding, is called a Defect or Issue.
Bug: a defect accepted by developers to be solved is called a Bug.
5.1 Test Policy
It is a company-level document developed by quality control people (mostly management). This document defines the testing objectives to be achieved.
A typical test policy contains:
Address and location of the company
Testing Definition: Verification + Validation
Testing Process: proper planning before testing starts
Testing Standard: 1 defect per 250 LOC / 1 defect per 10 FP
Testing Measurements: QAM, TMM, PCM
Signature of the C.E.O.
LOC – Lines of code.
FP – Function Point.
Example: number of screens / forms / reports / inputs / outputs / queries.
Function points indicate the size of a project.
QAM - Quality Assessment Measurements.
TMM - Test Management Measurements.
PCM - Process Capability Measurements.
5.2 Test Strategy
It is also a company-level document, developed by quality analyst people (project manager level). This strategy document defines the testing approach to be followed by the testing team.
Scope and objective: the purpose of testing in the organization.
Business issues: budget control for testing.
Testing approach: mapping between development stages and testing issues. Ex: V-model.
Testing Issue | Information Gathering & Analysis | Design | Coding | System Testing | Maintenance
Ease of Use   | X | X | √ | √ | Depends on change request
Authorization | √ | √ | √ | √ | Depends on change request
...           |   |   |   |   |
This matrix is called the Test Responsibility Matrix (TRM) or Test Matrix (TM).
Test deliverables: names of the testing documents to be prepared by the testing team during every project's testing.
Roles and responsibilities: names of the jobs in the testing team and the responsibilities of every job during testing.
Communication and status reporting: required negotiation between every two consecutive jobs in the testing team.
Test automation and testing tools: purpose of automation and availability of testing tools in the organization.
Defect reporting and tracking: required negotiation between the testing team and the development team when testers find mismatches during testing.
Testing measurements and metrics: QAM, TMM, PCM.
Risks and mitigations: expected failures raised during testing and the solutions to overcome them.
Change and configuration management: how to handle sudden changes in customer requirements during testing.
Training plan: required number of sessions for the testing team to understand the customer requirements.
TEST FACTORS OR TESTING ISSUES
To define quality software, software engineering people use 15 factors/issues.
Authorization: whether a user is valid or not to connect to the application.
Access control: whether a valid user has permission to use a specific service or not.
Audit trail: maintaining metadata about operations.
Continuity of processing: integration of modules.
Correctness: meeting customer requirements in terms of functionality.
Coupling: co-existence with other software applications to share common resources.
Ease of use: user-friendliness of screens.
Ease of operation: installation, uninstallation, dumping, downloading, uploading, etc.
File integrity: creation of backups during operations.
Reliability: recovery from abnormal situations.
Portability: running on different platforms.
Performance: speed of processing.
Service levels: order of functionalities.
Maintainability: whether the application build is serviceable to customer-site people for a long time or not.
Methodology: whether the testing team is following standards or not (during testing).
TEST FACTORS VS TESTING TECHNIQUES
Authorization: security testing (separate testing team); functionality/requirement testing (common testing team).
Access control: security testing (separate testing team); functionality/requirement testing (common testing team).
Audit trail: functionality/requirement testing.
Continuity of processing: integration testing (top-down / bottom-up / hybrid).
Correctness: functionality testing.
Coupling: inter-systems testing.
Ease of use: user interface testing, manual support testing.
Ease of operation: installation testing.
File integrity: functionality/requirement testing, recovery testing.
Reliability: recovery testing (one-user level), stress testing (peak load).
Portability: compatibility testing, configuration testing.
Performance: load testing, stress testing, storage testing, data volume testing.
Service levels: functionality/requirement testing (one-user level), stress testing (peak-load level).
Maintainability: compliance testing.
Methodology: compliance testing.
Compliance testing: whether the testing team is following standards or not during testing is called compliance testing. Here, compliance means adherence to the complete plan and standards.
5.3 Test Methodology
It is a project-level document. This document defines the required testing approach for the corresponding project's testing. Project-manager-level people develop the test methodology based on the company-level test strategy; for this reason, the test methodology is also known as a refined form of the test strategy.
To develop the test methodology, the project manager / quality analyst follows the approach below before starting every project's testing.
Step 1: Collect the test strategy.
Step 2: Identify the current project type.
Project Type | Information Gathering & Analysis | Design | Coding | System Testing | Maintenance
Traditional  | √ | √ | √ | √ | √
Outsourcing  | √ | √ | √ | √ | √
Maintenance  | √ | √ | √ | √ | √
Note: depending on the project type, the project manager deletes some of the columns from the TRM (Test Responsibility Matrix) for this project's testing.
Step 3: Study the project requirements.
Note: depending on the requirements in the project, the PM deletes unwanted factors (rows) from the TRM for this project's testing.
Step 4: Determine the scope of the project requirements.
Note: depending on the expected future enhancements, the PM adds back some of the previously deleted factors to the TRM for this project's testing.
Step 5: Identify tactical risks.
Note: depending on the analyzed risks, the PM deletes some of the factors from the selected TRM for this project's testing.
CASE STUDY:
  15   test factors
 - 3   requirements
 ----
  12
 + 1   scope of requirements
 ----
  13
 - 4   risks
 ----
   9   factors finalized to be applied on the project
Step 6: Finalize the TRM for the current project's testing.
Step 7: Prepare the system test plan.
Step 8: Prepare the module test plans.
TESTING PROCESS:
PET PROCESS
This process was developed by HCL, Chennai. It is also a refined form of the V-model and defines the mapping between the development process and the testing process. In this process model, organizations maintain a separate testing team for functional and system testing; the remaining stages of testing are done by developers.
PET stands for Process, Experts, Tools and Technology.
5.4 Test Plan
After completion of test initiation and finalization of the testing process, test-lead-category people concentrate on preparing the test plan document, covering "what to test?", "how to test?", "when to test?" and "who will test?". In this preparation the test lead follows the approach below.
Testing team formation: in general, the test planning process starts with testing team formation. In this stage the test lead depends on the factors below.
è Availability of test engineers
è Test duration (these three factors are interdependent)
è Availability of test environment resources
Case study:
Test Duration: C/S, Web, ERP è 3 to 5 months of system testing
System software è 7 to 9 months of system testing
Machine-critical software è 12 to 15 months of system testing
Team Size: Developers : Testers = 3 : 1
Identify tactical risks: after formation of the testing team, the test lead analyzes the risks at the level of the selected team. This risk analysis is also known as Root Cause Analysis.
Ex: Risk 1 : lack of knowledge of testing team on that domain.
Risk 2 : lack of budget (time).
Risk 3 : lack of resources (testing tools not available)
Risk 4 : lack of test data (improper documents)
Risk 5 : delays in delivery
Risk 6 : lack of development process rigor
Risk 7 : lack of communication (in b/w testing team to development team)
Prepare test plan: after completion of testing team formation and risk analysis, the test lead concentrates on preparing the test plan document in IEEE format (Institute of Electrical and Electronics Engineers).
Format:
Test plan ID: unique number / name.
Introduction: about the project.
Test items: names of all modules in the project. Ex: website.
Features to be tested: new module names for test design (what to test).
Features not to be tested: which ones and why not (copy their test cases from the server).
Approach: the list of testing techniques selected by the project manager to be applied to the above modules (finalized TRM).
Testing tasks: necessary operations to perform before starting every module's testing.
Suspension criteria: possible problems raised during the above modules' testing. Ex: exception handling.
Feature pass/fail criteria: when a module passes and when it fails.
Test environment: required hardware and software to conduct testing on the above modules. Ex: WinRunner.
Test deliverables: names of the testing documents to be prepared during the above modules' testing. Ex: test cases, test procedures, test scripts, test log, defect reports for every module.
Staff and training needs: names of the test engineers selected for this project's testing.
Responsibilities: mapping between the names of test engineers and the names of modules (work allocation).
Schedule: dates and times.
Risks and mitigations: problems raised during testing and the solutions to overcome them.
Approvers: signatures of the project manager and test lead.
Review test plan
After completion of the first copy of the test plan document, the test lead conducts a review of that document for completeness and correctness. In this review meeting the test lead concentrates on coverage analysis.
Coverage analysis:
è Business requirement based coverage (what to test?)
è TRM based coverage (how to test?)
è Risks based coverage (when & who to test?)
After finalization of the test plan, the test lead provides training sessions on the project requirements to the selected testing team.
5.5 Test Design
After finalization of the test plan and completion of the training sessions, test engineers concentrate on developing test cases for their responsible modules. There are three methods to prepare test cases:
· Business logic based test case design (based on the SRS)
· Input domain based test case design (based on design documents)
· User interface based test case design
Business logic based test case design
In general, test engineers prepare most test cases based on the use cases in the SRS. Every use case describes a functionality in terms of inputs, process and outputs.
From this model, test engineers prepare test cases for each use case. Every use case is also known as a functional specification, and every test case describes a testable condition to be applied to the build.
To study use cases, test engineers follow the approach below.
Step 1: Collect the required use cases for the responsible modules.
Step 2: Select a use case and its dependencies from the collected list of use cases.
Step 2.1: Identify the entry condition (base state).
Step 2.2: Identify the inputs required (test data).
Step 2.3: Identify the output and outcome (expected result).
Step 2.4: Study the normal flow (navigation).
Step 2.5: Study the end condition (end state).
Step 2.6: Study the alternative flows and exceptions.
Step 3: Prepare test cases based on the above study of the use case.
Step 4: Review the test cases for completeness and correctness.
Step 5: Go to Step 2 until the study of all use cases is complete.
Test case format
During test design, test engineers prepare test cases in IEEE format. Through this format, test engineers document every test case.
Format:
Test case ID: unique number/name.
Test case name: the name of the test condition.
Feature to be tested: the corresponding module or function name.
Test suite ID: the corresponding batch ID; this case is a member of that batch.
Priority: the importance of the test case in terms of functionality.
Ex: P0 – basic functionality (requirements)
P1 – general functionality (recovery, compatibility, inter-systems, load, ...)
P2 – cosmetic functionality (user interface)
Test environment: required hardware and software, including the testing tool, to execute this test case.
Test effort: (person-hours) time to execute this test case. Ex: 20 minutes on average.
Test duration: date and time.
Test setup: necessary tasks to do before starting this case's execution.
Test procedure: the step-by-step procedure from base state to end state.
Test case pass/fail criteria: when this case passes and when it fails.
NOTE: in general, test engineers do not maintain the complete format for every test case, but they do maintain the test procedure as mandatory for every test case. A filled-in example follows.
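Purely for illustration, the sketch below records one such test case as a Python dictionary for an assumed Login feature; the field names follow the format above, while every value is a made-up example rather than data from a real project.

# Illustrative test case record following the format fields above.
sample_test_case = {
    "test_case_id": "TC_LOGIN_001",
    "test_case_name": "Verify login with valid credentials",
    "feature_to_be_tested": "Login",
    "test_suite_id": "TS_AUTH_01",
    "priority": "P0",                       # basic functionality
    "test_environment": "Windows, browser, test database",
    "test_effort": "20 min",                # average execution time
    "test_duration": "2024-01-15 10:00",
    "test_setup": "Create a valid user account in the test database",
    "test_procedure": [
        "Open the login page (base state)",
        "Enter a valid user name and password",
        "Click the Login button",
        "Verify the home page is displayed (end state)",
    ],
    "pass_fail_criteria": "Pass if the home page opens; fail otherwise",
}

print(sample_test_case["test_case_id"], "-", sample_test_case["priority"])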
Input domain based test case design
In general, test engineers prepare test cases based on the use cases or functional specifications in the SRS. Sometimes they also rely on design documents, because use cases do not provide complete information about the size and type of input objects. For this reason, test engineers study the data models in the design documents.
Ex: ER diagrams (Entity-Relationship diagrams).
In this study, test engineers follow the approach below.
Step 1: Collect the data models of the responsible modules from the design documents. Ex: ER diagrams.
Step 2: Study every input attribute in terms of size and type, with its constraints.
Step 3: Prepare BVA and ECP entries for every input attribute in the format below.
I/P Attribute | ECP (Valid) | ECP (Invalid) | BVA (Size/Range) Min | BVA (Size/Range) Max
This table is called the DATA MATRIX. It provides information about every input object.
Step 4: Identify the critical and non-critical inputs in the above list.
Ex: critical inputs are involved in internal manipulations; non-critical inputs are used only for printing purposes.
NOTE: if a test case covers an operation, test engineers prepare a step-by-step procedure from base state to end state. If a test case covers an object, they prepare a data matrix, as in the sketch below.
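A small Python sketch of building one data-matrix row, assuming a hypothetical "age" attribute with a 1..60 range; both the attribute name and its limits are illustrative, not from any real specification.

# Data-matrix sketch: derive ECP classes and BVA values for one input attribute.
def build_data_matrix_row(name, min_value, max_value):
    return {
        "attribute": name,
        "ecp_valid": f"integers {min_value}..{max_value}",
        "ecp_invalid": "non-integers, values outside the range, blank",
        # Boundary Value Analysis: the boundaries and the values just around them
        "bva": [min_value - 1, min_value, min_value + 1,
                max_value - 1, max_value, max_value + 1],
    }

print(build_data_matrix_row("age", 1, 60))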
User interface based test case design
To conduct usability testing, test engineers prepare test cases based on global user interface conventions, the organization's rules and the interests of customer-site people.
Example test cases:
Test case 1: spelling check.
Test case 2: graphics check (alignment, font, style, color and the other Microsoft six rules).
Test case 3: meaningful error messages.
Test case 4: accuracy of data display.
Test case 5: accuracy of data in the database as a result of user input.
Test case 6: accuracy of data in the database as a result of external factors. Ex: file attachments, exported files, imported files, etc.
Test case 7: meaningful help messages.
NOTE: test cases 1 to 6 indicate user interface testing and test case 7 indicates manual support testing.
Test design review
Before receiving the build from the development team to start test execution, the test lead analyzes the completeness and correctness of the test cases prepared by the test engineers through a review meeting. In this review the test lead depends on coverage analysis.
—Business requirement based coverage
—Use cases based coverage
—Data model based coverage
—User interface based coverage
—Test responsibility matrix based coverage
At the end of this review, the test lead prepares the Requirements Traceability Matrix (RTM). This matrix defines the mapping between the customer requirements and the prepared test cases, as in the sketch below. This matrix is also known as the Requirements Validation Matrix (RVM).
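A minimal RTM sketch in Python; the requirement and test-case identifiers are hypothetical and only illustrate the requirement-to-test-case mapping and a check for uncovered requirements.

# Requirements Traceability Matrix sketch: map each customer requirement to
# the test cases that cover it and flag requirements with no coverage.
rtm = {
    "REQ-001 user login":        ["TC_LOGIN_001", "TC_LOGIN_002"],
    "REQ-002 amount transfer":   ["TC_TRANSFER_001"],
    "REQ-003 monthly statement": [],   # not yet covered
}

for requirement, test_cases in rtm.items():
    if test_cases:
        print(f"{requirement}: covered by {', '.join(test_cases)}")
    else:
        print(f"{requirement}: NOT COVERED - add test cases before execution")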
5.6 Test Execution
After completion of test design and its reviews, the testing team receives the initial build from the development team to start test execution.
Levels of test execution:
Level-0: testing on the initial build
Level-1: testing on the stable build
Level-2: testing on the modified stable build
Level-3: testing on the master build (ready to release)
Levels of test execution vs test cases (a selection sketch follows the mapping):
Level-0 è Initial build è all P0 test cases (basic functionality).
Level-1 è Stable build è all P0, P1 and P2 test cases as test batches.
Level-2 è Modified build è selected P0, P1 and P2 test cases w.r.t. the modifications.
Level-3 è Master build è selected P0, P1 and P2 test cases w.r.t. bug density.
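A small Python sketch of this priority-based selection; the test-case list is hypothetical, and the rule for Levels 2 and 3 is deliberately simplified because those selections are made by the tester with respect to modifications and bug density rather than by a fixed rule.

# Priority-based selection of test cases per execution level.
test_cases = [
    {"id": "TC01", "priority": "P0"},   # basic functionality
    {"id": "TC02", "priority": "P1"},   # general functionality
    {"id": "TC03", "priority": "P2"},   # cosmetic / user interface
]

def select_for_level(level: int, cases):
    if level == 0:                      # initial build: P0 only
        return [c for c in cases if c["priority"] == "P0"]
    if level == 1:                      # stable build: all P0, P1, P2
        return list(cases)
    # Levels 2 and 3: simplified stand-in; in practice the tester narrows the
    # selection with respect to the modifications or the bug density.
    return [c for c in cases if c["priority"] in ("P0", "P1")]

print([c["id"] for c in select_for_level(0, test_cases)])   # ['TC01']
print([c["id"] for c in select_for_level(1, test_cases)])   # ['TC01', 'TC02', 'TC03']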
Build version control
In general, the testing team receives the build from the development team with the help of existing network protocols, for example through FTP, to access the software base on a network server. The software base on the server consists of old builds and modified builds; development people assign a unique version number to every build, and this version numbering system is understandable to the testing team. For build version control, development people use version control tools.
Ex: VSS (Visual SourceSafe)
Test harness: test harness means readiness to start test execution.
Level-0 (sanity testing): after receiving the initial build from the development team, the testing team concentrates on sanity testing to estimate the stability of that build before complete testing is applied. In this preliminary testing, the testing team concentrates on the basic functionality of the build. In this functionality coverage, the testing team concentrates on the factors below.
è Understandability
è Operability
è Observability
è Controllability
è Consistency
è Simplicity
è Maintainability
è Automatability
Because of these 8 factors, sanity testing is also known as testability testing or octangle testing, as well as tester acceptance testing and build verification testing.
Test automation (if possible): after receiving a stable build from the development team, test engineers create automated test scripts with the required checkpoints. Not all test cases are automatable; for this reason, test engineers create automation test scripts for repeatable and complex test cases.
Case study: in general, testing teams follow selective automation only. In selective automation, test engineers create test scripts, using testing tools, for functionality/requirement test cases and load/stress test cases.
Test Execution Type | Testing Techniques | Testing Tools | Comments
Manual | UI testing, manual support testing | – | Easy to conduct
Manual / Automation | Functionality testing | WinRunner, QTP, Robot, SilkTest | Basic functionality testing is repeatable
Manual | Recovery, compatibility, configuration, inter-systems, installation, sanitation and parallel testing | – | No tools in the market
Manual / Automation | Load and stress testing | LoadRunner, SQA LoadTest, Silk Performer, JMeter | Expensive and complex to conduct manually
Manual | Storage, data volume and security testing | – | No tools in the market for this type of testing
Level-1 (comprehensive testing): after receiving a stable build and after completion of all possible automation, the testing team arranges the test cases into batches. Every test batch consists of a set of dependent test cases; these test batches are also known as test suites or test sets. During execution of these batches, test engineers prepare test log documents. A test log document consists of three types of entries:
– Passed: all expected values are equal to the actual values.
– Failed: at least one expected value differs from the actual value.
– Blocked: postponed due to incorrect parent functionality.
Level-2 (regression testing): during Level-1 / comprehensive testing, test engineers report mismatches to the development team. After receiving the modified build from the development team, test engineers concentrate on regression testing, following the approach below with respect to the seriousness of the resolved mismatches (a selection sketch follows the cases).
Case 1: if the severity of the bug resolved by the development team is high, test engineers re-execute all P0, all P1 and carefully selected P2 test cases on the modified build with respect to the modifications.
Case 2: if the severity of the resolved bug is medium, test engineers re-execute all P0, carefully selected P1 and some P2 test cases with respect to the modifications.
Case 3: if the severity of the resolved bug is low, test engineers re-execute some P0, P1 and P2 test cases with respect to the modifications.
Case 4: if the development team released a modified build due to sudden changes in customer requirements, test engineers re-execute all P0, all P1 and carefully selected P2 test cases with respect to the changed requirements.
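A minimal Python sketch of the selection rule implied by these cases; the returned labels ("all", "carefully selected", "some") mirror the wording above, and the function is only an illustration of the decision, not a complete selection mechanism.

# Regression-selection sketch: decide how much of each priority group to
# re-execute on a modified build, based on the severity of the resolved bug.
def regression_scope(resolved_bug_severity: str) -> dict:
    if resolved_bug_severity == "high":
        return {"P0": "all", "P1": "all", "P2": "carefully selected"}
    if resolved_bug_severity == "medium":
        return {"P0": "all", "P1": "carefully selected", "P2": "some"}
    if resolved_bug_severity == "low":
        return {"P0": "some", "P1": "some", "P2": "some"}
    # Case 4: modified build due to sudden changes in customer requirements
    return {"P0": "all", "P1": "all", "P2": "carefully selected"}

print(regression_scope("medium"))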
5.7 Test Reporting Or Defect Tracking
During Level-1 and Level-2 test execution, test engineers report mismatches to the development team in IEEE format.
Format:
Defect ID: unique number and name.
Description: summary of the defect.
Build version ID: version number of the build in which the test engineer found this defect.
Feature: the corresponding module name in which the test engineer found this defect.
Test case name: the name of the test condition during whose execution the test engineer found this defect.
Reproducible: Yes means the defect appears every time in test execution; No means the defect appears rarely.
If yes, attach the test procedure.
If no, attach a snapshot and strong reasons.
Found by: the name of the test engineer.
Detected on: date of submission.
Assigned to: the responsible person on the development side who receives this defect.
Status: New – the defect is being reported for the first time; Reopen – the defect is being re-reported.
Severity: the seriousness of the defect in terms of functionality.
High – not able to continue test execution without resolving this defect.
Medium – able to continue the remaining testing, but the defect is compulsory to solve.
Low – able to continue the remaining testing; resolving the defect is optional (may/may not).
Priority: the importance of this defect in terms of the customer.
Suggested fix (optional): expected possibilities for developers to resolve this defect.
Fixed by: project manager / project lead.
Resolved by: programmer name.
Resolved on: date of resolution.
Resolution type:
Approved by: signature of the project manager.
NOTE: in the above format, development people may change the priority of the defect with respect to the customer's importance.
Defect age: the time gap between the date the defect was reported and the date it was resolved is called the defect age. A small sketch of a defect record and its age follows.
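For illustration only, the sketch below records one defect using fields from the format above and computes its defect age from the reported and resolved dates; every value is a made-up example.

# Defect-report sketch with a defect-age calculation.
from datetime import date

defect = {
    "defect_id": "DEF-0042",
    "description": "Total amount is wrong when the cart has more than 10 items",
    "build_version_id": "Build 1.3",
    "feature": "Shopping cart",
    "test_case_name": "Verify cart total with many items",
    "reproducible": "Yes",
    "found_by": "Test engineer",
    "detected_on": date(2024, 1, 10),
    "assigned_to": "Development lead",
    "status": "New",
    "severity": "High",
    "priority": "High",
    "resolved_on": date(2024, 1, 14),
}

defect_age = (defect["resolved_on"] - defect["detected_on"]).days
print(f"Defect age: {defect_age} days")   # 4 days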
Defect submission: the defect submission path differs between large-scale organizations and small & medium-scale organizations (diagram omitted).
Defect status and bug life cycle: (diagrams omitted).
Defect resolution type: during test execution, test engineers report mismatches to the development team as defects. After receiving a defect report from the testing team, development people conduct a bug-fixing review and send a resolution-type report back to the testing team. There are 12 types of resolutions reported to the testing team.
Duplicate: rejected because this defect is equal to a previously reported defect.
Enhancement: rejected because this defect relates to a future requirement of the customer.
Hardware limitation: rejected because this defect relates to limitations of hardware devices.
Software limitation: rejected because this defect relates to limitations of software technologies.
Not applicable: rejected due to the improper meaning of the defect.
Functions as designed: rejected because the coding is correct with respect to the design logic.
Need more information: neither accepted nor rejected, but the developers require extra information to understand the defect.
Not reproducible: neither accepted nor rejected, but the developers require the correct procedure to reproduce the defect.
No plan to fix it: neither accepted nor rejected, but the developers require extra time to fix it.
Fixed: accepted and ready to resolve.
Fixed indirectly (deferred): accepted but postponed to a future version.
User misunderstanding: requires extra negotiation between the developers and the tester.
TYPES OF BUGS: during test execution, whether manual or automated, test engineers find the types of bugs below.
User interface bugs (low severity):
Ex 1: spelling mistake (high priority)
Ex 2: improper right alignment (low priority)
Error handling bugs (medium severity):
Ex 1: does not return an error message (high priority)
Ex 2: complex meaning in an error message (low priority)
Input domain bugs (medium severity):
Ex 1: allows invalid inputs (high priority)
Ex 2: allows invalid types as well (low priority)
Calculation bugs (high severity):
Ex 1: dependent outputs are wrong (application show-stopper) (high priority)
Ex 2: final output is wrong (module show-stopper) (low priority)
Race condition bugs (high severity):
Ex 1: deadlock or hang (application show-stopper) (high priority)
Ex 2: improper order of functionalities (low priority)
Load condition bugs (high severity):
Ex 1: does not allow multiple users (high priority)
Ex 2: does not allow the customer-expected load (low priority)
Hardware bugs (high severity):
Ex 1: not able to establish a connection to a hardware device (high priority)
Ex 2: wrong output from a device (low priority)
Version control bugs (medium severity):
Ex: mismatches between two consecutive build versions
ID-control bugs (medium severity):
Ex: wrong logo, missing logo, missing copyright window, wrong version number, software title mistake, missing team member names, etc.
Source bugs (medium severity):
Ex: mistakes in help documents
TEST CLOSURE: after completion of all possible test execution and bug resolution, the test lead concentrates on test closure to stop the testing process. In this review the test lead depends on the factors below.
Coverage analysis:
Business requirements based coverage
Use cases based coverage
Data model based coverage
Input domain based coverage
User interface based coverage
Test responsibility matrix based coverage
Bug density:
Ex: Module A – 20%
Module B – 20%
Module C – 40% ç needs final regression
Module D – 20%
Total 100%
Analysis of deferred bugs: whether the deferred bugs can really be postponed or not.
At the end of this review meeting, the test lead selects the module with the highest bug density in the application build for final regression (Level-3), as sketched below.
This approach is also known as Level-3 testing, final regression testing, pre-acceptance testing or post-mortem testing.
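A small Python sketch of the bug-density calculation; the per-module bug counts are hypothetical and chosen only to reproduce the 20/20/40/20 split of the case study above.

# Bug-density sketch: compute each module's share of the reported bugs and
# pick the highest-density module for final regression.
bugs_per_module = {"A": 10, "B": 10, "C": 20, "D": 10}

total = sum(bugs_per_module.values())
density = {m: round(100 * n / total) for m, n in bugs_per_module.items()}
print(density)                                   # {'A': 20, 'B': 20, 'C': 40, 'D': 20}

regression_target = max(density, key=density.get)
print(f"Final regression focuses on module {regression_target}")   # module C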
After completion of this final regression, the testing team concentrates on user acceptance testing with the help of real customers or model customers.
USER ACCEPTANCE TESTING: after completion of test closure, test management concentrates on user acceptance testing to collect feedback from customer-site people. There are two approaches to conduct user acceptance testing, Alpha (α) testing and Beta (β) testing, as described in section 3.7.
SIGN OFF: after completion of user acceptance testing and the resulting modifications, the test lead prepares the final test summary report. It is a part of the "Software Release Note" (SRN). This final test summary report consists of the documents below as members.
· Test methodology
· Test plan
· Requirements traceability matrix
· Automated test scripts
· Final bug summary report
The final bug summary report is a table with the columns:
Bug description | Found by | Feature | Severity | Status (closed/deferred) | Comments
è Final test summary report (FTSR)
Case study: (five months of testing process)
Deliverable | Responsibility | Completion time
Test cases preparation | Test engineers | 15-20 days
Test cases review | Test lead & test engineers | 4-5 days
Requirements traceability matrix | Test lead | 1-2 days
Test automation | Test engineers | 10-15 days
Test execution (Level-1 & Level-2) | Test engineers | 40-60 days
Defect reporting | Test engineer, test lead | Ongoing
Communication and status reporting | Test lead | Twice weekly
Test closure and final regression (Level-3) | Test lead & test engineer | 4-5 days
User acceptance test | Customer-site people including the testing team | 4-5 days
Sign off | Test lead | 2-3 days
Auditing: during testing and maintenance of software, project and test management people use three types of measurements and metrics:
(1) Quality Assessment Measurements (QAM)
(2) Test Management Measurements (TMM)
(3) Process Capability Measurements (PCM)
Quality Assessment Measurements (QAM): during the software testing process, the quality analyst or project manager uses these measurements to estimate the quality assurance level of that testing process (monthly).
è Stability:
Duration | No. of bugs
First 20% | 80%
Remaining 80% | 20%
è Sufficiency:
· Requirements coverage (modules)
· Type-trigger analysis (what type of test completed)
Defect severity distribution: checked against the organization's trend limits.
Test Management Measurements (TMM): during a project's testing process, test-lead-category people use these measurements to estimate the testing process coverage (twice weekly).
èTest status:
· Completed cases execution
· In execution
· Yet to execute
è Delays in delivery:
· Bug arrival rate
· Bug resolution rate
· Bug ageing
è Test efficiency:
· Cost to find a bug
(Ex: 5 bugs/person-day)
Process Capability Measurements (PCM): during software maintenance at the customer site, the QA and PM use these measurements to improve the testing team's capability (yearly).
è Defect escapes (bugs missed by the testing team):
a. Type-phase testing
b. Type-trigger analysis
c. Defect removal efficiency (DRE), also called the defect resolution rate:
DRE = A / (A + B), where A = bugs found by the testing team and B = bugs found by the customer during maintenance. A small worked example follows.
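A small worked example with assumed numbers: if the testing team found 90 bugs (A) and the customer found 10 during maintenance (B), then DRE = 90 / (90 + 10) = 0.90.

# DRE sketch: A and B are hypothetical counts, not real project data.
A, B = 90, 10
dre = A / (A + B)              # Defect Removal Efficiency
print(f"DRE = {dre:.2f}")      # 0.90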