Gray box testing. Tests designed based on knowledge of the algorithm, internal states, architecture, or other high-level descriptions of the program behavior. [Doug Hoffman]
Gray box testing. Examines the activity of back-end components during test case execution. Two types of problems that can be encountered during gray-box testing are:
- A component encounters a failure of some kind, causing the operation to be aborted. The user interface will typically indicate that an error has occurred.
- The test executes in full, but the content of the results is incorrect. Somewhere in the system, a component processed data incorrectly, causing the error in the results.
[Elfriede Dustin. "Quality Web Systems: Performance, Security & Usability."]
Grooved Tests. Tests that simply repeat the same activity against a target product from cycle to cycle. [Scott Loveland, 2005]
Heuristic Testing: An approach to test design that employs heuristics to enable rapid development of test cases.[James Bach]
High-level tests. These tests involve testing whole, complete products [Kit, 1995]
HTML validation testing. Specific to Web testing. This certifies that the HTML meets specifications and internal coding standards.
W3C Markup Validation Service, a free service that checks Web documents in formats like HTML and XHTML for conformance to W3C Recommendations and other standards.
Incremental integration testing. Continuous testing of an application as new functionality is added; requires that various aspects of an application's functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed; done by programmers or by testers.
Inspection. A formal evaluation technique in which software requirements, design, or code are examined in detail by a person or group other than the author to detect faults, violations of development standards, and other problems [IEEE94]. A quality improvement process for written material that consists of two dominant components: product (document) improvement and process improvement (document production and inspection).
Integration. The process of combining software components or hardware components, or both, into an overall system.
Integration testing - testing of combined parts of an application to determine if they function together correctly. The 'parts' can be code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.
Integration Testing. Testing conducted after unit and feature testing. The intent is to expose faults in the interactions between software modules and functions. Either top-down or bottom-up approaches can be used. A bottom-up method is preferred, since it leads to earlier unit testing (step-level integration). This method is contrary to the big-bang approach, where all source modules are combined and tested in one step. The big-bang approach to integration should be discouraged.
Interface Tests. Programs that provide test facilities for external interfaces and function calls. Simulation is often used to test external interfaces that currently may not be available for testing or are difficult to control. For example, hardware resources such as hard disks and memory may be difficult to control. Therefore, simulation can provide the characteristics or behaviors for a specific function.
Internationalization testing (I18N) - testing related to handling foreign text and data within the program. This would include sorting, importing and exporting text and data, correct handling of currency and date and time formats, string parsing, upper and lower case handling, and so forth. [Clinton De Young, 2003]
Interoperability Testing. Measures the ability of your software to communicate across the network on multiple machines from multiple vendors, each of whom may have interpreted a design specification critical to your success differently.
Inter-operability Testing. True inter-operability testing concerns testing for unforeseen interactions with other packages with which your software has no direct connection. In some quarters, inter-operability testing labor equals all other testing combined. This is the kind of testing that I say shouldn't be done because it can't be done. [from Quality Is Not The Goal. By Boris Beizer, Ph.D.]
Install/uninstall testing. Testing of full, partial, or upgrade install/uninstall processes.
Key Word-Driven Testing. The approach developed by Carl Nagle of the SAS Institute that is offered as freeware on the Web; Key Word-Driven Testing is an enhancement to the data-driven methodology. [Daniel J. Mosley, 2002]
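As a minimal sketch of the keyword-driven idea (not Carl Nagle's actual framework), test steps can be expressed as keyword rows in a table, and a small interpreter dispatches each keyword to an action. The keywords, handler functions, and test table below are invented for illustration.

```python
# Hypothetical keyword-driven interpreter: each test step is a (keyword, args)
# row, and the interpreter maps the keyword to a handler function.

def open_page(url):                 # invented application actions
    print(f"opening {url}")

def type_text(field, value):
    print(f"typing '{value}' into {field}")

def verify_title(expected):
    actual = "Welcome"              # stand-in for a real lookup
    assert actual == expected, f"expected {expected!r}, got {actual!r}"

KEYWORDS = {"OpenPage": open_page, "TypeText": type_text, "VerifyTitle": verify_title}

# The test itself is data, not code, so non-programmers can maintain it.
test_table = [
    ("OpenPage", ["http://example.test/login"]),
    ("TypeText", ["username", "alice"]),
    ("VerifyTitle", ["Welcome"]),
]

for keyword, args in test_table:
    KEYWORDS[keyword](*args)
```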
Latent bug. A bug that has been dormant (unobserved) in two or more releases. [R. V. Binder, 1999]
Lateral testing. A test design technique based on lateral thinking principles, to identify faults. [Dorothy Graham, 1999]
Limits testing. See Boundary Condition testing.
Load testing. Testing an application under heavy loads, such as testing of a web site under a range of loads to determine at what point the system's response time degrades or fails.
Load stress test. A test designed to determine how heavy a load the application can handle.
Load-stability test. A test designed to determine whether a Web application will remain serviceable over an extended time span.
Load isolation test. The workload for this type of test is designed to contain only the subset of test cases that caused the problem in previous testing.
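As an illustration of load testing in general, a crude sketch in Python follows: spawn an increasing number of concurrent workers against a target operation and record response times to see where they start to degrade. The target URL and load levels are invented; real load tests normally rely on dedicated tools.

```python
# Rough load-test sketch: hit a (hypothetical) endpoint with N concurrent
# workers and report response times as the load level increases.
import concurrent.futures
import statistics
import time
import urllib.request

TARGET = "http://example.test/health"   # hypothetical endpoint

def one_request():
    start = time.perf_counter()
    urllib.request.urlopen(TARGET, timeout=10).read()
    return time.perf_counter() - start

def run_load(concurrency, requests_total):
    with concurrent.futures.ThreadPoolExecutor(max_workers=concurrency) as pool:
        times = list(pool.map(lambda _: one_request(), range(requests_total)))
    return statistics.median(times), max(times)

for level in (1, 10, 50, 100):           # increasing load levels
    median, worst = run_load(level, level * 10)
    print(f"{level:>3} users: median {median:.3f}s, worst {worst:.3f}s")
```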
Longevity testing. See Reliability testing.
Long-haul Testing. See Reliability testing.
Master Test Planning. An activity undertaken to orchestrate the testing effort across levels and organizations.[Systematic Software Testing by Rick D. Craig and Stefan P. Jaskiel 2002]
Memory leak testing. Testing the server components to see if memory is not properly referenced and released, which can lead to instability and product crashes.
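One way to look for leak-like growth from a test, sketched below, uses Python's standard tracemalloc module; the operation under test and the growth threshold are invented, and real leak hunting usually relies on dedicated profilers.

```python
# Leak-style check: run the same operation many times and verify that traced
# memory does not keep growing between snapshots.
import tracemalloc

cache = []                          # deliberate "leak" for demonstration

def operation_under_test():
    cache.append("x" * 10_000)      # a real product bug would retain memory like this

tracemalloc.start()
operation_under_test()
baseline, _ = tracemalloc.get_traced_memory()

for _ in range(1_000):
    operation_under_test()

current, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()

growth = current - baseline
print(f"growth after 1000 iterations: {growth} bytes")
# With the deliberate leak above, this check fails and flags the steady growth.
assert growth < 100_000, "memory grew steadily -- possible leak"
```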
Model-Based Testing. Model-based testing takes the application and models it so that each state of each input, output, form, and function is represented. Since this is based on detailing the various states of objects and data, this type of testing is very similar to charting out states. Many times a tool is used to automatically go through all the states in the model and try different inputs in each to ensure that they all interact correctly.[Lydia Ash, 2003]
Monkey Testing. (smart monkey testing) Inputs are generated from probability distributions that reflect actual expected usage statistics -- e.g., from user profiles. There are different levels of IQ in smart monkey testing. In the simplest, each input is considered independent of the other inputs. Say a given test requires an input vector with five components: in low-IQ testing, these would be generated independently; in high-IQ monkey testing, the correlation (e.g., the covariance) between these input distributions is taken into account. In all branches of smart monkey testing, the input is considered as a single event. [Visual Test 6 Bible by Thomas R. Arnold, 1998]
Monkey Testing. (brilliant monkey testing) The inputs are created from a stochastic regular expression or stochastic finite-state machine model of user behavior. That is, not only are the values determined by probability distributions, but the sequence of values and the sequence of states through which the input provider passes are driven by specified probabilities. [Visual Test 6 Bible by Thomas R. Arnold, 1998]
Monkey Testing. (dumb monkey testing) Inputs are generated from a uniform probability distribution without regard to the actual usage statistics. [Visual Test 6 Bible by Thomas R. Arnold, 1998]
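A small sketch contrasting dumb and smart monkey input generation follows; the system under test, its failure mode, and the usage profile are all invented for illustration.

```python
# Dumb monkey: inputs drawn uniformly at random, ignoring real usage.
# Smart monkey: inputs drawn from an (assumed) operational usage profile.
import random

COMMANDS = ["open", "save", "print", "close"]
USAGE_PROFILE = {"open": 0.40, "save": 0.35, "print": 0.05, "close": 0.20}  # assumed stats

def system_under_test(command, quantity):
    # Hypothetical SUT: pretend "print" with a large quantity is buggy.
    if command == "print" and quantity > 90:
        raise RuntimeError("printer queue overflow")

def dumb_monkey():
    return random.choice(COMMANDS), random.randint(0, 100)

def smart_monkey():
    command = random.choices(list(USAGE_PROFILE), weights=USAGE_PROFILE.values())[0]
    return command, random.randint(0, 100)

random.seed(1)
for name, generate in (("dumb", dumb_monkey), ("smart", smart_monkey)):
    failures = 0
    for _ in range(10_000):
        try:
            system_under_test(*generate())
        except RuntimeError:
            failures += 1
    print(f"{name} monkey found {failures} failures in 10,000 inputs")
```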
Maximum Simultaneous Connection testing. This is a test performed to determine the number of connections which the firewall or Web server is capable of handling.
Migration Testing. Testing to see if the customer will be able to transition smoothly from a prior version of the software to a new one. [Scott Loveland, 2005]
Mutation testing. A testing strategy where small variations to a program are inserted (a mutant), followed by execution of an existing test suite. If the test suite detects the mutant, the mutant is 'retired.' If undetected, the test suite must be revised. [R. V. Binder, 1999]
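A tiny worked illustration of the idea, with invented code: a one-character mutant of a function, a test suite that fails to detect it, and the revised suite that kills it.

```python
# Original function and a mutant in which ">=" was changed to ">".
def is_adult(age):
    return age >= 18

def is_adult_mutant(age):         # the injected mutant
    return age > 18

def weak_suite(fn):
    assert fn(25) is True
    assert fn(10) is False

def revised_suite(fn):
    weak_suite(fn)
    assert fn(18) is True         # boundary case added to kill the mutant

weak_suite(is_adult)              # passes
weak_suite(is_adult_mutant)      # also passes -> mutant survives, suite is too weak
revised_suite(is_adult)           # passes
try:
    revised_suite(is_adult_mutant)
except AssertionError:
    print("mutant killed by the revised suite")
```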
Multiple Condition Coverage. A test coverage criteria which requires enough test cases such that all possible combinations of condition outcomes in each decision, and all points of entry, are invoked at least once.[G.Myers] Contrast with branch coverage, condition coverage, decision coverage, path coverage, statement coverage.
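A small worked example, using invented code, of what the criterion demands: for a decision combining two conditions, all four outcome combinations must be exercised.

```python
# Decision with two conditions: multiple condition coverage requires exercising
# every combination of (logged_in, is_admin), not just each branch.
def can_delete(logged_in, is_admin):
    if logged_in and is_admin:
        return True
    return False

cases = [
    (True,  True,  True),    # T/T -> decision true
    (True,  False, False),   # T/F -> decision false
    (False, True,  False),   # F/T -> decision false
    (False, False, False),   # F/F -> decision false
]
for logged_in, is_admin, expected in cases:
    assert can_delete(logged_in, is_admin) == expected

# Branch (decision) coverage would already be satisfied by the first two cases;
# multiple condition coverage requires all four.
```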
Negative test. A test whose primary purpose is falsification; that is, tests designed to break the software. [B. Beizer, 1995]
Noncritical code analysis. Examines software elements that are not designated safety-critical and ensures that these elements do not cause a hazard. (IEEE)
Orthogonal array testing: A technique that can be used to reduce the number of combinations and provide maximum coverage with a minimum number of test cases. Note that it is an old and proven technique: the orthogonal array approach was first introduced by Plackett and Burman in 1946 and was applied by G. Taguchi, 1987.
Orthogonal array testing: Mathematical technique to determine which variations of parameters need to be tested. [William E. Lewis, 2000]
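For illustration, the classic L4(2^3) orthogonal array covers three two-level parameters in four test cases instead of all eight combinations, while every pair of parameters still sees all four of its value combinations. The parameter names below are invented.

```python
# L4 orthogonal array: 3 two-level factors covered pairwise in only 4 runs.
from itertools import combinations

BROWSER = ["Firefox", "Chrome"]       # hypothetical parameters and levels
OS      = ["Windows", "Linux"]
NETWORK = ["LAN", "WiFi"]

L4 = [  # rows are test cases; columns are factor levels (0 or 1)
    (0, 0, 0),
    (0, 1, 1),
    (1, 0, 1),
    (1, 1, 0),
]

for b, o, n in L4:
    print(BROWSER[b], OS[o], NETWORK[n])

# The defining property: every pair of columns contains all 4 value pairs.
for c1, c2 in combinations(range(3), 2):
    pairs = {(row[c1], row[c2]) for row in L4}
    assert len(pairs) == 4
```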
Oracle. Test Oracle: a mechanism to produce the predicted outcomes to compare with the actual outcomes of the software under test [from BS7925-1]
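One common way to build an oracle, sketched below, is to use a trusted reference implementation to produce the predicted outcomes; the function under test here is invented.

```python
# Reference-implementation oracle: Python's built-in sorted() predicts the
# expected outcome for a (hypothetical) custom sort being tested.
import random

def my_sort(items):                # the implementation under test (invented)
    result = list(items)
    for i in range(len(result)):
        for j in range(i + 1, len(result)):
            if result[j] < result[i]:
                result[i], result[j] = result[j], result[i]
    return result

random.seed(0)
for _ in range(100):
    data = [random.randint(-50, 50) for _ in range(random.randint(0, 20))]
    expected = sorted(data)        # the oracle's predicted outcome
    actual = my_sort(data)         # the actual outcome
    assert actual == expected, f"mismatch for {data}: {actual} != {expected}"
print("all outcomes matched the oracle")
```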
Parallel Testing. Testing a new or an alternate data processing system with the same source data that is used in another system. The other system is considered as the standard of comparison. Syn: parallel run.[ISO]
Penetration testing. The process of attacking a host from outside to ascertain remote security vulnerabilities. Another responsibility of a professional penetration tester is to enforce countermeasures for certain types of known attacks and vulnerabilities.
Performance Testing. Testing conducted to evaluate the compliance of a system or component with specific performance requirements [BS7925-1]
Performance testing can be undertaken to: 1) show that the system meets specified performance objectives, 2) tune the system, 3) determine the factors in hardware or software that limit the system's performance, and 4) project the system's future load-handling capacity in order to schedule its replacements" [Software System Testing and Quality Assurance. Beizer, 1984, p. 256]
Postmortem. Self-analysis of interim or fully completed testing activities with the goal of creating improvements to be used in the future. [Scott Loveland, 2005]
Preventive Testing. Building test cases based upon the requirements specification prior to the creation of the code, with the express purpose of validating the requirements. [Systematic Software Testing by Rick D. Craig and Stefan P. Jaskiel, 2002]
Prior Defect History Testing. Test cases are created or rerun for every defect found in prior tests of the system. [William E. Lewis, 2000]
Qualification Testing. (IEEE) Formal testing, usually conducted by the developer for the consumer, to demonstrate that the software meets its specified requirements. See: acceptance testing.
Quality. The degree to which a program possesses a desired combination of attributes that enable it to perform its specified end use.
Quality Assurance (QA) Consists of planning, coordinating and other strategic activities associated with measuring product quality against external requirements and specifications (process-related activities).
Quality Control (QC) Consists of monitoring, controlling and other tactical activities associated with the measurement of product quality goals.
Our definition of Quality: Achieving the target (not conformance to requirements as used by many authors) & minimizing the variability of the system under test
Race condition defect. Many concurrent defects result from data-race conditions. A data-race condition may be defined as two accesses to a shared variable, at least one of which is a write, with no mechanism used by either to prevent simultaneous access. However, not all race conditions are defects.
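A minimal Python sketch of a data race of the kind described: two threads perform an unprotected read-modify-write on a shared counter, so updates can be lost. Whether the race actually manifests depends on timing and the runtime.

```python
# Two accesses to a shared variable, at least one a write, with no locking:
# the read-modify-write below is not atomic, so increments can be lost.
import threading

counter = 0

def worker(n):
    global counter
    for _ in range(n):
        value = counter       # read
        value += 1            # modify
        counter = value       # write -- may overwrite another thread's update

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(f"expected 200000, got {counter}")   # may be less than 200000

# The defect-free version protects the shared variable, e.g.:
#   with lock:
#       counter += 1
```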
Recovery testing. Testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.
Regression Testing. Testing conducted for the purpose of evaluating whether or not a change to the system (all CM items) has introduced a new failure. Regression testing is often accomplished through the construction, execution and analysis of product and system tests.
Regression Testing. - testing that is performed after making a functional improvement or repair to the program. Its purpose is to determine if the change has regressed other aspects of the program [Glenford J.Myers, 1979]
Reengineering. The process of examining and altering an existing system to reconstitute it in a new form. May include reverse engineering (analyzing a system and producing a representation at a higher level of abstraction, such as design from code), restructuring (transforming a system from one representation to another at the same level of abstraction), redocumentation (analyzing a system and producing user and support documentation), forward engineering (using software products derived from an existing system, together with new requirements, to produce a new system), and translation (transforming source code from one language to another or from one version of a language to another).
Reference testing. A way of deriving expected outcomes by manually validating a set of actual outcomes. A less rigorous alternative to predicting expected outcomes in advance of test execution. [Dorothy Graham, 1999]
Reliability testing. Verify the probability of failure-free operation of a computer program in a specified environment for a specified time.
Reliability of an object is defined as the probability that it will not fail under specified conditions, over a period of time. The specified conditions are usually taken to be fixed, while the time is taken as an independent variable. Thus reliability is often written R(t) as a function of time t, the probability that the object will not fail within time t.
Any computer user would probably agree that most software is flawed, and the evidence for this is that it does fail. All software flaws are designed in -- the software does not break, rather it was always broken. But unless conditions are right to excite the flaw, it will go unnoticed -- the software will appear to work properly. [Professor Dick Hamlet. Ph.D.]
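As a worked illustration (a common reliability model, not part of the glossary entry itself): under the assumption of a constant failure rate lambda, R(t) = exp(-lambda * t).

```python
# Reliability under an assumed constant failure rate (exponential model):
# R(t) = exp(-failure_rate * t), the probability of surviving beyond time t.
import math

failure_rate = 0.002          # assumed failures per hour
for t in (100, 500, 1000):    # hours of operation
    r = math.exp(-failure_rate * t)
    print(f"R({t:>4}h) = {r:.3f}")
# R(100h) = 0.819, R(500h) = 0.368, R(1000h) = 0.135
```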
Range Testing. For each input, identifies the range over which the system behavior should be the same. [William E. Lewis, 2000]
Risk-Based Testing: Any testing organized to explore specific product risks.[James Bach website]
Risk management. An organized process to identify what can go wrong, to quantify and assess associated risks, and to implement/control the appropriate approach for preventing or handling each risk identified.
Robust test. A test that compares a small amount of information, so that unexpected side effects are less likely to affect whether the test passes or fails. [Dorothy Graham, 1999]
Sanity Testing - typically an initial testing effort to determine if a new software version is performing well enough to accept it for a major testing effort. For example, if the new software is often crashing systems, bogging down systems to a crawl, or destroying databases, the software may not be in a 'sane' enough condition to warrant further testing in its current state.
Scalability testing is a subtype of performance test where performance requirements for response time, throughput, and/or utilization are tested as load on the SUT is increased over time. [Load Testing Terminology by Scott Stirling ]
Scenario-Based Testing. Scenario-based testing is one way to document the software specifications and requirements for a project. Scenario-based testing takes each user scenario and develops tests that verify that a given scenario works. Scenarios focus on the main goals and requirements. If the scenario is able to flow from the beginning to the end, then it passes.[Lydia Ash, 2003]
(SDLC) System Development Life Cycle - the phases used to develop, maintain, and replace information systems. Typical phases in the SDLC are: Initiation Phase, Planning Phase, Functional Design Phase, System Design Phase, Development Phase, Integration and Testing Phase, Installation and Acceptance Phase, and Maintenance Phase.
The V-model talks about SDLC (System Development Life Cycle) phases and maps them to various test levels
Security Audit. An examination (often by third parties) of a server's security controls and, possibly, its disaster recovery mechanisms.
Sensitive test. A test that compares a large amount of information, so that it is more likely to detect unexpected differences between the actual and expected outcomes of the test. [Dorothy Graham, 1999]
Server log testing. Examining the server logs after particular actions or at regular intervals to determine if there are problems or errors generated or if the server is entering a faulty state.
Service test. Test software fixes, both individually and bundled together, for software that is already in use by customers. [Scott Loveland, 2005]
Skim Testing. A testing technique used to determine the fitness of a new build or release of an AUT to undergo further, more thorough testing. In essence, a "pretest" activity that could form one of the acceptance criteria for receiving the AUT for testing. [Testing IT: An Off-the-Shelf Software Testing Process by John Watkins]
Smoke test describes an initial set of tests that determine if a new version of an application performs well enough for further testing. [Louise Tamres, 2002]
Sniff test. A quick check to see if any major abnormalities are evident in the software.[Scott Loveland, 2005 ]
Specification-based test. A test whose inputs are derived from a specification.
Spike testing. Testing performance or recovery behavior when the system under test (SUT) is stressed with a sudden and sharp increase in load; this should be considered a type of load test. [Load Testing Terminology by Scott Stirling]
STEP (Systematic Test and Evaluation Process) Software Quality Engineering's copyrighted testing methodology.
Stability testing. Testing the ability of the software to continue to function, over time and over its full range of use, without failing or causing failure. (see also Reliability testing)
State-based testing. Testing with test cases developed by modeling the system under test as a state machine. [R. V. Binder, 1999]
State Transition Testing. Technique in which the states of a system are first identified, and then test cases are written to test the triggers that cause a transition from one state to another. [William E. Lewis, 2000]
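A small sketch of the technique with an invented two-state example: model the states and triggers, then write one test per transition (plus an invalid-trigger case).

```python
# States and triggers for a hypothetical document workflow, plus one test
# case per transition.
TRANSITIONS = {
    ("Draft",    "submit"):  "InReview",
    ("InReview", "approve"): "Published",
    ("InReview", "reject"):  "Draft",
}

def next_state(state, trigger):
    # Invalid triggers leave the state unchanged in this sketch.
    return TRANSITIONS.get((state, trigger), state)

assert next_state("Draft", "submit") == "InReview"
assert next_state("InReview", "approve") == "Published"
assert next_state("InReview", "reject") == "Draft"
assert next_state("Published", "submit") == "Published"   # no such transition
```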
Static testing. Source code analysis. Analysis of source code to expose potential defects.
Statistical testing. A test case design technique in which a model of the statistical distribution of the input is used to construct representative test cases. [BCS]
Stealth bug. A bug that removes information useful for its diagnosis and correction. [R. V. Binder, 1999]
Storage test. Study how memory and space are used by the program, either in resident memory or on disk. If there are limits on these amounts, storage tests attempt to prove that the program will exceed them. [Cem Kaner, 1999, p55]
Streamable Test cases. Test cases which are able to run together as part of a large group. [Scott Loveland, 2005]
Stress / Load / Volume test. Tests that provide a high degree of activity, either using boundary conditions as inputs or multiple copies of a program executing in parallel as examples.
Stress Test. A stress test is designed to determine how heavy a load the Web application can handle. A huge load is generated as quickly as possible in order to stress the application to its limit. The time between transactions is minimized in order to intensify the load on the application, and the time the users would need for interacting with their Web browsers is ignored. A stress test helps determine, for example, the maximum number of requests a Web application can handle in a specific period of time, and at what point the application will overload and break down.[Load Testing by S. Asbock]
Structural Testing. (1) (IEEE) Testing that takes into account the internal mechanism [structure] of a system or component. Types include branch testing, path testing, statement testing. (2) Testing to ensure each program statement is made to execute during testing and that each program statement performs its intended function. Contrast with functional testing. Syn: white-box testing, glass-box testing, logic-driven testing.
System testing. Black-box type testing that is based on overall requirements specifications; covers all combined parts of a system.
System verification test. (SVT). Testing of an entire software package for the first time, with all components working together to deliver the project's intended purpose on supported hardware platforms. [Scott Loveland, 2005]
Table testing. Test access, security, and data integrity of table entries. [William E. Lewis, 2000]
Test Artifact Set. Captures and presents information related to the tests performed.
Test Bed. An environment containing the hardware, instrumentation, simulators, software tools, and other support elements needed to conduct a test [IEEE 610].
Test Case. A set of test inputs, executions, and expected results developed for a particular objective.
Test conditions. The set of circumstances that a test invokes. [Daniel J. Mosley, 2002]
Test Coverage The degree to which a given test or set of tests addresses all specified test cases for a given system or component.
Test Criteria. Decision rules used to determine whether a software item or software feature passes or fails a test.
Test data. The actual (sets of) values used in the test or that are necessary to execute the test. Test data instantiates the condition being tested (as input or as pre-existing data) and is used to verify that a specific requirement has been successfully implemented (comparing actual results to the expected results). [Daniel J. Mosley, 2002]
Test Documentation. (IEEE) Documentation describing plans for, or results of, the testing of a system or component. Types include test case specification, test incident report, test log, test plan, test procedure, test report.
Test Driver. A software module or application used to invoke a test item and, often, provide test inputs (data), control and monitor execution. A test driver automates the execution of test procedures.
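A minimal illustration of a driver: a small script that invokes the test item with prepared inputs and records a pass/fail result for each case. The item under test and its cases are invented.

```python
# A simple test driver: feeds inputs to the test item, captures outputs,
# and logs pass/fail for each case.
def discount(price, percent):        # the test item (invented)
    return round(price * (1 - percent / 100), 2)

CASES = [                            # (inputs, expected output)
    ((100.0, 10), 90.0),
    ((80.0, 0), 80.0),
    ((20.0, 25), 15.0),
]

for args, expected in CASES:
    actual = discount(*args)
    status = "PASS" if actual == expected else "FAIL"
    print(f"{status}: discount{args} -> {actual} (expected {expected})")
```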
Test-driven development (TDD). An evolutionary approach to development which combines test-first development, where you write a test before you write just enough production code to fulfill that test, and refactoring. [Beck 2003; Astels 2003]
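A compressed sketch of one TDD cycle with invented code: the test is written first and fails, then just enough production code is added to make it pass, then the code can be refactored before the next test is written.

```python
# Step 1 (red): the test is written before the production code exists.
def test_fizzbuzz_of_three_is_fizz():
    assert fizzbuzz(3) == "Fizz"

# Running the test at this point would fail: fizzbuzz is not defined yet.

# Step 2 (green): write just enough production code to make the test pass.
def fizzbuzz(n):
    if n % 3 == 0:
        return "Fizz"
    return str(n)

test_fizzbuzz_of_three_is_fizz()   # now passes

# Step 3 (refactor): clean up while keeping the test green, then repeat the
# cycle with the next test (e.g., fizzbuzz(5) == "Buzz").
```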
Test environment bug. Bug class indicating that some test environment is found to be insufficient to support some test type or inconsistent with its specification. [DOD STD 2167A].
Test Harness. A system of test drivers and other tools to support test execution (e.g., stubs, executable test cases, and test drivers). See: test driver.
Test Inputs. Artifacts from work processes that are used to identify and define actions that occur during testing. These artifacts may come from development processes that are external to the test group. Examples include Functional Requirements Specifications and Design Specifications. They may also be derived from previous testing phases and passed to subsequent testing activities.[Daniel J. Mosley, 2002]
Test Idea: an idea for testing something.[James Bach]
Test Item. A software item which is the object of testing.[IEEE]
Test Log. A chronological record of all relevant details about the execution of a test. [IEEE]
Test logistics: the set of ideas that guide the application of resources to fulfilling the test strategy.[James Bach]
Test Plan. A high-level document that defines a testing project so that it can be properly measured and controlled. It defines the test strategy and organizes the elements of the test life cycle, including resource requirements, project schedule, and test requirements.
Test Procedure. A document, providing detailed instructions for the [manual] execution of one or more test cases. [BS7925-1] Often called - a manual test script.
Test Results. Data captured during the execution of tests and used in calculating the different key measures of testing. [Daniel J. Mosley, 2002]
Test Rig. A flexible combination of hardware, software, data, and interconnectivity that can be configured by the Test Team to simulate a variety of different Live Environments on which an AUT can be delivered. [Testing IT: An Off-the-Shelf Software Testing Process by John Watkins]
Test Script. The computer-readable instructions that automate the execution of a test procedure (or portion of a test procedure). Test scripts may be created (recorded) or automatically generated using test automation tools, programmed using a programming language, or created by a combination of recording, generating, and programming. [Daniel J. Mosley, 2002]
Test strategy. Describes the general approach and objectives of the test activities. [Daniel J. Mosley, 2002]
Test Status. The assessment of the result of running tests on software.
Test Stub. A dummy software component or object used (during development and testing) to simulate the behaviour of a real component. The stub typically provides test output.
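A small sketch of a stub with invented names: a real payment gateway is replaced by a dummy object that returns canned output, so the component above it can be tested in isolation.

```python
# The component under test depends on a payment gateway; the stub below
# simulates that gateway and returns fixed, predictable output.
class PaymentGatewayStub:
    def charge(self, amount, card):
        return {"status": "approved", "amount": amount}   # canned response

def checkout(cart_total, card, gateway):
    result = gateway.charge(cart_total, card)
    return "Order confirmed" if result["status"] == "approved" else "Payment failed"

# The test exercises checkout() without any real gateway or network access.
assert checkout(42.50, "4111-1111-1111-1111", PaymentGatewayStub()) == "Order confirmed"
```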
Test Suites. A test suite consists of multiple test cases (procedures and data) that are combined and often managed by a test harness.
Test technique: test method; a heuristic or algorithm for designing and/or executing a test; a recipe for a test. [James Bach]
Test Tree. A physical implementation of a Test Suite. [Dorothy Graham, 1999]
Testability. Attributes of software that bear on the effort needed for validating the modified software [ISO 8402]
Testability Hooks. Those functions, integrated in the software, that can be invoked through primarily undocumented interfaces to drive specific processing which would otherwise be difficult to exercise. [Scott Loveland, 2005]
Testing. The execution of tests with the intent of proving that the system or application under test does or does not perform according to the requirements specification.
(TPI) Test Process Improvement. A method for baselining testing processes and identifying process improvement opportunities, using a static model developed by Martin Pol and Tim Koomen.
Test Suite. The set of tests that when executed instantiate a test scenario.[Daniel J. Mosley, 2002]
Test Workspace. Private areas where testers can install and test code in accordance with the project's adopted standards in relative isolation from the developers.[Daniel J. Mosley, 2002]
Thread Testing. A testing technique used to test the business functionality or business logic of the AUT in an end-to-end manner, in much the same way a User or an operator might interact with the system during its normal use.[Testing IT: An Off-the-Shelf Software Testing Process by John Watkins ]
Timing and Serialization Problems. A class of software defect, usually in multithreaded code, in which two or more tasks attempt to alter a shared software resource without properly coordinating their actions. Also known as Race Conditions.[Scott Loveland, 2005]
Transient bug. A bug which is evident for a short period of time. See aperiodic bug. [Peter Farrell-Vinay, 2008]
Translation testing. See internationalization testing.
Thrasher. A type of program used to test for data integrity errors on a mainframe system. The name is derived from the first such program, which deliberately generated memory thrashing (the overuse of large amounts of memory, leading to heavy paging or swapping) while monitoring for corruption. [Scott Loveland, 2005]
Unit Testing. Testing performed to isolate and expose faults and failures as soon as the source code is available, regardless of the external interfaces that may be required. Oftentimes, the detailed design and requirements documents are used as a basis to compare how and what the unit is able to perform. White and black-box testing methods are combined during unit testing.
Usability testing. Testing for 'user-friendliness'. Clearly this is subjective, and will depend on the targeted end-user or customer.
Validation. The comparison between the actual characteristics of something (e.g., a product of a software project) and the expected characteristics. Validation is checking that you have built the right system.
Variance. A variance is an observable and measurable difference between an actual result and an expected result.
Verification. The comparison between the actual characteristics of something (e.g., a product of a software project) and the specified characteristics. Verification is checking that we have built the system right.
Volume testing. Testing where the system is subjected to large volumes of data.[BS7925-1]
Walkthrough. In the most usual form of the term, a walkthrough is a step-by-step simulation of the execution of a procedure, as when walking through code line by line, with an imagined set of inputs. The term has been extended to the review of material that is not procedural, such as data descriptions, reference manuals, specifications, etc.
White Box Testing (glass-box). Testing done under a structural testing strategy that requires complete access to the object's structure, that is, the source code. [B. Beizer, 1995, p. 8]