2015 Aug 13;7(3):343–352. doi: 10.1007/s12551-015-0177-3

Table 1.

Definition of commonly used terms in software testing

Key term: Definition
Validation: “The process of evaluating a system or component during or at the end of the development process to determine whether it satisfies specified requirements.” (IEEE 1990)
Verification: “The process of evaluating a system or component to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase.” (IEEE 1990)
Quality control: “A set of activities designed to evaluate the quality of developed or manufactured products.” (IEEE 1990)
Quality assurance: “A planned and systematic pattern of all actions necessary to provide adequate confidence that an item or product conforms to established technical requirements.” (IEEE 1990)
Test case: “A set of test inputs, execution conditions, and expected results developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement.” (IEEE 1990)
Test suite: “A set of several test cases for a component or system under test, where the post condition of one test is often used as the precondition for the next one.” (ISTQB 2015)
Test reliability: “A set of test data T for a program P is reliable if it reveals that P contains an error whenever P is incorrect. It is important to note that it has been proven that there is no testing strategy that can check the reliability of all programs.” (Howden 1976)
Regression testing: “Testing of a previously tested program following modification to ensure that defects have not been introduced or uncovered in unchanged areas of the software, as a result of the changes made. It is performed when the software or its environment is changed.” (ISTQB 2015)
Oracle: “A mechanism, which can systematically verify the correctness of a test result for any given test case.” (Liu et al. 2014)
Test oracle problem: “The oracle problem occurs when either an oracle does not exist, or exists but is too expensive to be used.” (Liu et al. 2014)
Black-box testing: “Testing that ignores the internal mechanism of a system or component and focuses solely on the outputs generated in response to selected inputs and execution conditions.” (IEEE 1990)
White-box testing: “Testing that takes into account the internal mechanism of a system or component. Types include branch testing, path testing, statement testing.” (IEEE 1990)
Test coverage: “The degree to which a given test or set of tests addresses all specified requirements for a given system or component.” (IEEE 1990)
Fault: “Concrete manifestation of an error within the software. One error may cause several faults, and various errors may cause identical faults.” (Lanubile et al. 1998)
Error: “Defect in the human thought process made while trying to understand given information, solve problems, or to use methods and tools. In the context of software requirements specifications, an error is a basic misconception of the actual needs of a user or customer.” (Lanubile et al. 1998)
Failure: “Departure of the operational software system behavior from user expected requirements. A particular failure may be caused by several faults and some faults may never cause a failure.” (Lanubile et al. 1998)
Successful test: “A test that cannot reveal any error in the implemented software using the given test case.” (Chen et al. 1998)
Static testing: “Testing of a component or system at specification or implementation level without execution of that software, e.g. reviews or static analysis.” (ISTQB 2015)
Dynamic testing: “Testing that requires the execution of the test item.” (IEEE 2013)

The majority of these terms are defined in the IEEE Standard Glossary 610.12-1990 (IEEE 1990), the International Software Testing Qualifications Board Glossary (ISTQB 2015), and ISO/IEC/IEEE 29119 (IEEE 2013).
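To make several of the terms in Table 1 concrete, the following is a minimal sketch in Python using the standard `unittest` framework. The function under test (`reverse_complement`, a hypothetical example chosen here for illustration, not taken from the article) is exercised by a test suite containing two test cases: one with an exact expected result acting as the oracle, and one using a known property of the function as a substitute check, the kind of workaround used when the test oracle problem arises.

```python
import unittest

def reverse_complement(seq: str) -> str:
    """Hypothetical component under test: DNA reverse complement."""
    table = {"A": "T", "T": "A", "C": "G", "G": "C"}
    return "".join(table[base] for base in reversed(seq.upper()))

class ReverseComplementTests(unittest.TestCase):
    """A test suite: a set of test cases for the component under test."""

    def test_known_sequence(self):
        # Test case: a test input plus an expected result. The expected
        # value "GCAT" acts as the oracle verifying the output.
        self.assertEqual(reverse_complement("ATGC"), "GCAT")

    def test_involution_property(self):
        # When no exact oracle is available (the oracle problem), a known
        # property can stand in: reverse-complementing twice must return
        # the original sequence.
        self.assertEqual(
            reverse_complement(reverse_complement("GATTACA")), "GATTACA"
        )
```

Re-running this suite after any change to `reverse_complement` is a simple form of regression testing (e.g. with `python -m unittest`), and because the tests inspect only inputs and outputs, not the function's internals, they are black-box tests in the sense defined above.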