2017-01-19



When you first get started in software testing (just like I did), I’m pretty sure you will be confused by the terms used in the field. The reason is simple: the software testing industry has been around for a long time, and its terminology keeps changing and being updated while you are still new to it.

No wonder you often get lost among testing terms.

In an effort to make life easier for both you and me, I have consolidated a list of common terms used in software testing. Well, to be honest, I did not invent these terms. I just collected them from the best resources on the Internet and put them all together into this list. Please feel free to add more if you find something interesting that is not included here.

Let’s not waste any more time. Ladies and gentlemen, I present to you the (almost) complete list of terms used in software testing, in alphabetical order:

A | B | C | D | E | F | G | H | I | J | K | L | M | N | O | P | Q | R | S | T | U | V | W | X | Y | Z

A

Application Binary Interface (ABI)

Describes the low level interface between an application program and the operating system, between an application and its libraries, or between component parts of the application. An ABI differs from an application programming interface (API) in that an API defines the interface between source code and libraries, so that the same source code will compile on any system supporting that API, whereas an ABI allows compiled object code to function without changes on any system using a compatible ABI.

Acceptance testing

The final test level. Conducted by users with the purpose of accepting or rejecting the system before release.

Accessibility Testing

Verifying that a product is accessible to people with disabilities (visually impaired, hard of hearing, etc.).

Actual result

The system’s status or behavior after you conduct a test. An anomaly or deviation occurs when your actual results differ from the expected results.

Ad hoc testing

Testing carried out informally without test cases or other written test instructions.

Agile development

A development method that emphasizes working in short iterations. Automated testing is often used. Requirements and solutions evolve through close collaboration between team members that represent both the client and supplier.

Alpha testing

Operational testing conducted by potential users, customers, or an independent test team at the vendor’s site. Alpha testers should not be from the group involved in the development of the system, in order to maintain their objectivity. Alpha testing is sometimes used as acceptance testing by the vendor.

Anomaly

Any condition that deviates from expectations based on requirements specifications, design documents, standards etc. A good way to find anomalies is by testing the software.

Application Development Lifecycle

The process flow during the various phases of the application development life cycle.

Application Programming Interface (API)

An interface provided by operating systems or libraries to support requests for services made of them by computer programs.

Arc Testing

See branch testing.

Automated Software Quality (ASQ)

The use of software tools, such as automated testing tools, to improve software quality.

Automated Software Testing

The use of software to control the execution of tests, the comparison of actual outcomes to predicted outcomes, the setting up of test preconditions, and other test control and test reporting functions, without manual intervention.

Automated Testing Tools

Software tools used by development teams to automate and streamline their testing and quality assurance process.

B

Backus-Naur Form (BNF)

A meta syntax used to express context-free grammars: that is, a formal way to describe formal languages.

Basic Block

A sequence of one or more consecutive, executable statements containing no branches.

Basis Path Testing

A white box test case design technique that fulfills the requirements of branch testing and also tests all of the independent paths that could be used to construct any arbitrary path through the computer program.

Basis Test Set

A set of test cases derived from Basis Path Testing.

Baseline

The point at which some deliverable produced during the software engineering process is put under formal change control.

Bebugging

A popular software engineering technique used to measure test coverage. Known bugs are randomly added to the program source code, and the programmer is tasked with finding them. The percentage of the known bugs not found gives an indication of the real bugs that remain.

Behavior

The combination of input values and preconditions along with the required response for a function of a system. The full specification of a function would normally comprise one or more behaviors.

Benchmark Testing

Benchmark testing is a normal part of the application development life cycle. It is a team effort that involves both application developers and database administrators (DBAs), and should be performed against your application in order to determine current performance and improve it. If the application code has been written as efficiently as possible, additional performance gains might be realized from tuning the database and database manager configuration parameters. You can even tune application parameters to meet the requirements of the application better.

Benchmark Testing Methods

Benchmark tests are based on a repeatable environment so that the same test run under the same conditions will yield results that you can legitimately compare. You might begin benchmarking by running the test application in a normal environment. As you narrow down a performance problem, you can develop specialized test cases that limit the scope of the function that you are testing. The specialized test cases need not emulate an entire application to obtain valuable information. Start with simple measurements, and increase the complexity only when necessary.

Beta testing

Test that comes after alpha test, and is performed by people outside of the organization that built the system. Beta testing is especially valuable for finding usability flaws and configuration problems.

Binary Portability Testing

Testing an executable application for portability across system platforms and environments, usually for conformance to an ABI specification.

Big-bang integration

An integration testing strategy in which every component of a system is assembled and tested together; contrast with other integration testing strategies in which system components are integrated one at a time.

Black box testing

Testing in which the test object is seen as a “black box” and the tester has no knowledge of its internal structure. The opposite of white box testing.

Block Matching

Automated matching logic applied to data and transaction driven websites to automatically detect blocks of related data. This enables repeating elements to be treated correctly in relation to other elements in the block without the need for special coding.

Bottom-up integration

An integration testing strategy in which you start integrating components from the lowest level of the system architecture. Compare to big-bang integration and top-down integration.

Boundary value analysis

A black box test design technique that tests input or output values that are on the edge of what is allowed, or at the smallest incremental distance on either side of an edge. For example, an input field that accepts text between 1 and 10 characters has six boundary values: 0, 1, 2, 9, 10 and 11 characters.
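
To make this concrete, here is a minimal sketch in Python of the example above; the `validate_name` function and its 1–10 character rule are hypothetical stand-ins for the field under test:

```python
def validate_name(text):
    """Hypothetical rule: accept between 1 and 10 characters."""
    return 1 <= len(text) <= 10

# One test per boundary value: just below, on, and just above each edge.
assert validate_name("a" * 0) is False   # 0:  below the lower boundary
assert validate_name("a" * 1) is True    # 1:  on the lower boundary
assert validate_name("a" * 2) is True    # 2:  just above the lower boundary
assert validate_name("a" * 9) is True    # 9:  just below the upper boundary
assert validate_name("a" * 10) is True   # 10: on the upper boundary
assert validate_name("a" * 11) is False  # 11: above the upper boundary
```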

Branch

A conditional transfer of control from any statement to any other statement in a component, or an unconditional transfer of control from any statement to any other statement in the component except the next statement, or when a component has more than one entry point, a transfer of control to an entry point of the component.

Branch Condition Coverage

The percentage of branch condition outcomes in every decision that have been tested.

Branch Condition Combination Coverage

The percentage of combinations of all branch condition outcomes in every decision that have been tested.

Branch Condition Combination Testing

A test case design technique in which test cases are designed to execute combinations of branch condition outcomes.

Branch Condition Testing

A technique in which test cases are designed to execute branch condition outcomes.

Branch Testing

A test case design technique for a component in which test cases are designed to execute branch outcomes.
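
As an illustrative sketch (the `free_shipping` function is hypothetical), a decision with two branch conditions shows the difference between covering each condition outcome and covering every combination of outcomes:

```python
def free_shipping(total, is_member):
    # One decision containing two branch conditions: (total > 100), is_member.
    if total > 100 and is_member:
        return True
    return False

# Branch condition coverage: each condition takes both outcomes (two tests suffice).
assert free_shipping(150, True) is True    # total > 100: T, is_member: T
assert free_shipping(50, False) is False   # total > 100: F, is_member: F

# Branch condition combination coverage: all 2^2 outcome combinations.
# (Note: with short-circuit evaluation, is_member is not actually evaluated
# when total <= 100; the combinations are still listed for the design.)
assert free_shipping(150, False) is False  # T, F
assert free_shipping(50, True) is False    # F, T
```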

Breadth Testing

A test suite that exercises the full functionality of a product but does not test features in detail.

BS 7925-1

A testing standards document containing a glossary of testing terms. BS stands for ‘British Standard’.

BS 7925-2

A testing standard document that describes the testing process, primarily focusing on component testing. BS stands for ‘British Standard’.

Bug

A slang term for fault, defect, or error. Originally used to describe actual insects causing malfunctions in mechanical devices that predate computers. The International Software Testing Qualifications Board (ISTQB) glossary explains that “a human being can make an error (mistake), which produces a defect (fault, bug) in the program code, or in a document. If a defect in code is executed, the system may fail to do what it should do (or do something it shouldn’t), causing a failure. Defects in software, systems or documents may result in failures, but not all defects do so.”

See also debugging.

C

Capture/playback tool

See record and playback tool.

CAST

A general term for automated testing tools. Acronym for computer-aided software testing.

Cause-Effect Graph

A graphical representation of inputs or stimuli (causes) with their associated outputs (effects), which can be used to design test cases.

Capability Maturity Model for Software (CMM)

The CMM is a process model based on software best practices effective in large-scale, multi-person projects. The CMM has been used to assess the maturity of organizations in areas as diverse as software engineering, system engineering, project management, risk management, system acquisition, information technology (IT) and personnel management, against a scale of five maturity levels, namely: Initial, Repeatable, Defined, Managed and Optimized.

Capability Maturity Model Integration (CMMI)

Capability Maturity Model® Integration (CMMI) is a process improvement approach that provides organizations with the essential elements of effective processes. It can be used to guide process improvement across a project, a division, or an entire organization. CMMI helps integrate traditionally separate organizational functions, set process improvement goals and priorities, provide guidance for quality processes, and provide a point of reference for appraising current processes. Seen by many as the successor to the CMM, the goal of the CMMI project is to improve the usability of maturity models by integrating many different models into one framework.

Certification

The process of confirming that a system or component complies with its specified requirements and is acceptable for operational use.

CCB

See change control board.

Change control board

A group responsible for evaluating, prioritizing, and approving/rejecting requested changes to an IT system.

Change request

A type of document describing a needed or desired change to the system.

Checklist

A simpler form of test case, often merely a document with short test instructions (“one-liners”). An advantage of checklists is that they are easy to develop. A disadvantage is that they are less structured than test cases. Checklists can complement test cases well. In exploratory testing, checklists are often used instead of test cases.

Client

The part of an organization that orders an IT system from the internal IT department or from an external supplier/vendor.

Code Complete

A phase of development where functionality is implemented in its entirety; bug fixes are all that are left. All functions found in the Functional Specifications have been implemented.

Code coverage

A generic term for analysis methods that measure the proportion of code in a system that is executed by testing. Expressed as a percentage, for example, 90% code coverage.

Code-Based Testing

The principle of structural code based testing is to have each and every statement in the program executed at least once during the test. Based on the premise that one cannot have confidence in a section of code unless it has been exercised by tests, structural code based testing attempts to test all reachable elements in the software under the cost and time constraints. The testing process begins by first identifying areas in the program not being exercised by the current set of test cases, followed by creating additional test cases to increase the coverage.

Code Inspection

A formal testing technique where the programmer reviews source code with a group who ask questions analyzing the program logic, analyzing the code with respect to a checklist of historically common programming errors, and analyzing its compliance with coding standards.

Code review

Code review is a systematic examination (sometimes referred to as peer review) of computer source code. It is intended to find mistakes overlooked in the initial development phase, improving the overall quality of the software.

Code standard

Description of how a programming language should be used within an organization.

Code Walkthrough

A formal testing technique where source code is traced by a group with a small set of test cases, while the state of program variables is manually monitored, to analyze the programmer’s logic and assumptions.

Compatibility Testing

The process of testing to understand if software is compatible with other elements of a system with which it should operate, e.g. browsers, Operating Systems, or hardware.

Compilation

The activity of translating lines of code written in a human-readable programming language into machine code that can be executed by the computer.

Complete Path Testing

See exhaustive testing.

Component

The smallest element of the system, such as a class or a DLL.

Component integration testing

Another term for integration test.

Component testing

Test level that evaluates the smallest elements of the system. Also known as unit test, program test and module test.

Component Specification

A description of a component’s function in terms of its output values for specified input values under specified preconditions.

Computation Data Use

A data use not in a condition. Also called C-use.

Configuration management

Routines for version control of documents and software/program code, as well as managing multiple system release versions.

Configuration testing

A test to confirm that the system works under different configurations of hardware and software, such as testing a website using different browsers.

Concurrent Testing

Multi-user testing geared towards determining the effects of accessing the same application code, module or database records. See load testing.

Condition

A Boolean expression containing no Boolean operators. For instance, A<B is a condition, but “A and B” is not.

Condition Coverage

See branch condition coverage.

Condition Outcome

The evaluation of a condition to TRUE or FALSE.

Conformance Testing

The process of testing to determine whether a system meets some specified standard. To aid in this, many test procedures and test setups have been developed, either by the standard’s maintainers or external organizations, specifically for testing conformance to standards. Conformance testing is often performed by external organizations; sometimes the standards body itself, to give greater guarantees of compliance. Products tested in such a manner are then advertised as being certified by that external organization as complying with the standard.

Context Driven Testing

The context-driven school of software testing is similar to agile testing in that it advocates continuous and creative evaluation of testing opportunities in light of the potential information revealed and the value of that information to the organization right now.

Control Flow

An abstract representation of all possible sequences of events in a program’s execution.

Control Flow Graph

The diagrammatic representation of the possible alternative control flow paths through a component.

Control Flow Path

See path.

Conversion Testing

Testing of programs or procedures used to convert data from existing systems for use in replacement systems.

Correctness

The degree to which software conforms to its specification.

Coverage

The degree, expressed as a percentage, to which a specified coverage item has been tested.

Coverage Item

An entity or property used as a basis for testing.

COTS

Commercial Off the Shelf. Software that can be bought on the open market. Also called “packaged” software.

Cyclomatic Complexity

A software metric (measurement) developed by Thomas McCabe and used to measure the complexity of a program. It directly measures the number of linearly independent paths through a program’s source code.
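
A small hypothetical example: for structured code with binary decisions, V(G) reduces to the number of decisions plus one, which also sizes the basis test set used in basis path testing:

```python
def grade(score):
    # Two decisions => cyclomatic complexity V(G) = 2 + 1 = 3,
    # i.e. three linearly independent paths through the function.
    if score >= 90:
        return "A"
    if score >= 75:
        return "B"
    return "C"

# A basis test set exercises one representative input per independent path.
assert grade(95) == "A"
assert grade(80) == "B"
assert grade(50) == "C"
```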

D

Data Case

Data relationship model simplified for data extraction and reduction purposes in order to create test data.

Data Definition

An executable statement where a variable is assigned a value.

Data Definition C-use Coverage

The percentage of data definition C-use pairs in a component that are exercised by a test case suite.

Data Definition C-use Pair

A data definition and computation data use, where the data use uses the value defined in the data definition.

Data Definition P-use Coverage

The percentage of data definition P-use pairs in a component that are tested.

Data Definition-use Coverage

The percentage of data definition-use pairs in a component that are exercised by a test case suite.

Data Definition-use Pair

A data definition and data use, where the data use uses the value defined in the data definition.

Data Definition-use Testing

A test case design technique for a component in which test cases are designed to execute data definition-use pairs.
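
The following hypothetical `discount` function sketches how one data definition can pair with both a P-use (in a predicate) and a C-use (in a computation), and how definition-use testing covers each pair:

```python
def discount(price, is_member):
    rate = 0.10 if is_member else 0.0   # data definition of `rate`
    if rate > 0:                        # P-use: `rate` read in a predicate
        return price * (1 - rate)       # C-use: `rate` read in a computation
    return price

# Each test executes a definition-use pair for `rate`.
assert discount(100.0, True) == 90.0    # definition -> P-use (TRUE) -> C-use
assert discount(100.0, False) == 100.0  # definition -> P-use (FALSE)
```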

Data Dictionary

A database that contains definitions of all data items defined during analysis.

Data Driven Testing

A framework where test input and output values are read from data files and are loaded into variables in captured or manually coded scripts. In this framework, variables are used for both input values and output verification values. Navigation through the program, reading of the data files, and logging of test status and information are all coded in the test script.
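
A minimal sketch of the idea; the `login` function and the inline CSV rows are hypothetical, and in practice the data would live in external files:

```python
import csv
import io

TEST_DATA = """username,password,expected
alice,secret1,ok
bob,,error
,secret2,error
"""

def login(username, password):
    # Hypothetical function under test.
    return "ok" if username and password else "error"

# The script never changes; the test cases are entirely data-driven.
for row in csv.DictReader(io.StringIO(TEST_DATA)):
    actual = login(row["username"], row["password"])
    assert actual == row["expected"], f"failed for {row}"
```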

Data Flow Diagram

A modeling notation that represents a functional decomposition of a system.

Data Flow Coverage

Test coverage measure based on variable usage within the code. Examples are data definition-use coverage, data definition P-use coverage, data definition C-use coverage, etc.

Data Flow Testing

Data-flow testing looks at the lifecycle of a particular piece of data (i.e. a variable) in an application. By looking for patterns of data usage, risky areas of code can be found and more test cases can be applied.

Data Protection

Technique in which the condition of the underlying database is synchronized with the test scenario so that differences can be attributed to logical changes. This technique also automatically resets the database after tests, allowing for a constant data set if a test is re-run.

Data Protection Act

UK Legislation surrounding the security, use and access of an individual’s information. May impact the use of live data used for testing purposes.

Data Use

An executable statement where the value of a variable is accessed.

Database Testing

The process of testing the functionality, security, and integrity of the database and the data held within.

Daily build

A process in which the test object is compiled every day in order to allow daily testing. While it ensures that defects are reported early and regularly, it requires automated testing support.

Debugging

The process in which developers identify, diagnose, and fix errors found.

Decision table

A test design and requirements specification technique. A decision table describes the logical conditions and rules for a system. Testers use the table as the basis for creating test cases.
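
As a sketch, each rule (column) of a decision table maps a combination of conditions to an expected action, and each rule becomes one test case; the loan-approval rules below are hypothetical:

```python
# Conditions (has_income, good_credit) -> expected action, one rule per entry.
decision_table = {
    (True,  True):  "approve",
    (True,  False): "review",
    (False, True):  "review",
    (False, False): "reject",
}

def decide(has_income, good_credit):
    # Hypothetical implementation under test.
    if has_income and good_credit:
        return "approve"
    if has_income or good_credit:
        return "review"
    return "reject"

# One test case per rule in the table.
for (has_income, good_credit), expected in decision_table.items():
    assert decide(has_income, good_credit) == expected
```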

Defect

A flaw in a component or system that can cause the component or system to fail to perform its required function. A defect, if encountered during execution, may cause a failure of the component or system.

Defect report

A document used to report a defect in a component, system, or document. Also known as an incident report.

Deliverable

Any product that must be delivered to someone other than the author of the product. Examples of deliverables are documentation, code and the system.

Delta Release

A delta, or partial, release is one that includes only those areas within the release unit that have actually changed or are new since the last full or delta release. For example, if the release unit is the program, a delta release contains only those modules that have changed, or are new, since the last full release of the program or the last delta release of certain modules.

Dependency Testing

Examines an application’s requirements for pre-existing software, initial states and configuration in order to maintain proper functionality.

Depth Testing

A test that exercises a feature of a product in full detail.

Desk checking

A static testing technique in which the tester reads code or a specification and “executes” it in his mind.

Design-Based Testing

Designing tests based on objectives derived from the architectural or detail design of the software (e.g., tests that execute specific invocation paths or probe the worst case behavior of algorithms).

Dirty Testing

Testing which demonstrates that the system under test does not work. (Also known as negative testing).

Documentation Testing

Testing concerned with the accuracy of documentation.

Domain

The set from which values are selected.

Domain Expert

A person who has significant knowledge in a specific domain.

Domain Testing

Domain testing is the most frequently described test technique. The basic notion is that you take the huge space of possible tests of an individual variable and subdivide it into subsets that are (in some way) equivalent. Then you test a representative from each subset.

Downtime

Total period that a service or component is not operational.

Document review

See review.

Driver

See test driver.

DSDM

Dynamic Systems Development Method. An iterative development approach.

Dynamic testing

Testing performed while the system is running. Dynamic testing involves working with the software, giving input values and checking if the output is as expected.

Dynamic Analysis

The examination of the physical response from the system to variables that are not constant and change with time.

E

Emulator

A device that duplicates (provides an emulation of) the functions of one system using a different system, so that the second system behaves like (and appears to be) the first system. This focus on exact reproduction of external behavior is in contrast to simulation, which can concern an abstract model of the system being simulated, often considering internal state.

Endurance Testing

Checks for memory leaks or other problems that may occur with prolonged execution.

End-to-end testing

Testing used to determine whether the performance of an application from start to finish conforms with the behavior that is expected from it. This technique can be used to identify system dependencies and to confirm that the integrity of data transferred across different system components is maintained.

Entry criteria

Criteria that must be met before you can initiate testing, such as that the test cases and test plans are complete.

Entry Point

The first executable statement within a component.

Equivalence Class

A mathematical concept: an equivalence class is a subset of a given set induced by an equivalence relation on that given set. (If the given set is empty, then the equivalence relation is empty, and there are no equivalence classes; otherwise, the equivalence relation and its concomitant equivalence classes are all non-empty.) Elements of an equivalence class are said to be equivalent, under the equivalence relation, to all the other elements of the same equivalence class.

Equivalence partitioning

A test design technique based on the fact that data in a system is managed in classes, such as intervals. Because of this, you only need to test a single value in every equivalence class. For example, you can assume that a calculator performs all addition operations in the same way; so if you test one addition operation, you have tested the entire equivalence class.
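
A minimal sketch with a hypothetical `classify_age` function: one representative value is tested per class, on the assumption that every value in a class is handled the same way:

```python
def classify_age(age):
    # Hypothetical function under test with three equivalence classes.
    if age < 0:
        return "invalid"
    return "minor" if age < 18 else "adult"

# One representative per class: invalid (< 0), minor (0-17), adult (18+).
representatives = {-5: "invalid", 10: "minor", 40: "adult"}
for value, expected in representatives.items():
    assert classify_age(value) == expected
```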

Equivalence Partition Coverage

The percentage of equivalence classes generated for the component that have been tested.

Equivalence Partition Testing

A test case design technique for a component in which test cases are designed to execute representatives from equivalence classes.

Error

A human action that produces an incorrect result.

Error description

The section of a defect report where the tester describes the test steps he/she performed, what the outcome was, what result he/she expected, and any additional information that will assist in troubleshooting.

Error guessing

Experience-based test design technique where the tester develops test cases based on his/her skill and intuition, and experience with similar systems and technologies.

Error Seeding

The process of injecting a known number of “dummy” defects into the program and then checking how many of them are found by various inspections and tests. If, for example, 60% of them are found, the presumption is that 60% of the real defects have been found as well.
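
The arithmetic behind that presumption can be sketched as follows; the estimator assumes seeded defects are found at the same rate as real ones:

```python
def estimate_remaining_defects(seeded, seeded_found, real_found):
    """If seeded_found/seeded of the seeded defects were found, assume the
    same fraction of real defects was found; estimate what remains."""
    estimated_total_real = real_found * seeded / seeded_found
    return estimated_total_real - real_found

# Example: 10 defects seeded, 6 of them recovered (60%), 30 real defects
# found so far => estimated total 30 * 10 / 6 = 50, so ~20 may remain.
assert estimate_remaining_defects(10, 6, 30) == 20
```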

Evaluation Report

A document produced at the end of the test process summarizing all testing activities and results. It also contains an evaluation of the test process and lessons learned.

Executable statement

A statement which, when compiled, is translated into object code, which will be executed procedurally when the program is running and may perform an action on program data.

Exercised

A program element is exercised by a test case when the input value causes the execution of that element, such as a statement, branch, or other structural element.

Execute

Run, conduct. When a program is executing, it means that the program is running. When you execute or conduct a test case, you can also say that you are running the test case.

Exhaustive testing

A test approach in which you test all possible inputs and outputs.

Exit criteria

Criteria that must be fulfilled for testing to be considered complete, such as that all high-priority test cases are executed, and that no open high-priority defect remains. Also known as completion criteria.

Expected result

A description of the test object’s expected status or behavior after the test steps are completed. Part of the test case.

Exit Point

The last executable statement within a component.

Expert System

A domain specific knowledge base combined with an inference engine that processes knowledge encoded in the knowledge base to respond to a user’s request for advice.

Expertise

Specialized domain knowledge, skills, tricks, shortcuts and rules-of-thumb that provide an ability to rapidly and effectively solve problems in the problem domain.

Exploratory testing

A test design technique based on the tester’s experience; the tester creates the tests while he/she gets to know the system and executes the tests.

External supplier

A supplier/vendor that doesn’t belong to the same organization as the client/buyer.

Extreme programming

An agile development methodology that emphasizes the importance of pair programming, where two developers write program code together. The methodology also implies frequent deliveries and automated testing.

F

Factory acceptance test (FAT)

Acceptance testing carried out at the supplier’s facility, as opposed to a site acceptance test, which is conducted at the client’s site.

Failure

Deviation of the component or system under test from its expected result.

Fault

A manifestation of an error in software. Also known as a bug.

Fault Injection

A technique used to improve test coverage by deliberately inserting faults to test different code paths, especially those that handle errors and which would otherwise be impossible to observe.
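
A small sketch using Python’s standard unittest.mock to inject an I/O fault; the `read_config` function and its fallback behavior are hypothetical:

```python
from unittest import mock

def read_config(path):
    # Hypothetical code under test: must degrade gracefully on I/O errors.
    try:
        with open(path) as f:
            return f.read()
    except OSError:
        return ""  # documented fallback

# Inject the fault: force open() to fail so the error-handling path runs,
# a path that ordinary inputs would rarely exercise.
with mock.patch("builtins.open", side_effect=OSError("disk failure")):
    assert read_config("app.cfg") == ""
```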

Feasible Path

A path for which there exists a set of input values and execution conditions which causes it to be executed.

Feature Testing

A method of testing which concentrates on testing one feature at a time.

Firing a Rule

A rule fires when the “if” part (premise) is proven to be true. If the rule incorporates an “else” component, the rule also fires when the “if” part is proven to be false.

Fit For Purpose Testing

Validation carried out to demonstrate that the delivered system can be used to carry out the tasks for which it was designed and acquired.

Forward Chaining

Applying a set of previously determined facts to the rules in a knowledge base to see if any of them will fire.

Formal review

A review that proceeds according to a documented review process that may include, for example, review meetings, formal roles, required preparation steps, and goals. Inspection is an example of a formal review.

Full Release

All components of the release unit that are built, tested, distributed and implemented together.

See also delta release.

Functional integration

An integration testing strategy in which the system is integrated one function at a time. For example, all the components needed for the “search customer” function are put together and tested one by one.

Functional Specification

The document that describes in detail the characteristics of the product with regard to its intended capability.

Functional Decomposition

A technique used during planning, analysis and design; creates a functional hierarchy for the software. Functional Decomposition broadly relates to the process of resolving a functional relationship into its constituent parts in such a way that the original function can be reconstructed (i.e., recomposed) from those parts by function composition. In general, this process of decomposition is undertaken either for the purpose of gaining insight into the identity of the constituent components (which may reflect individual physical processes of interest, for example), or for the purpose of obtaining a compressed representation of the global function, a task which is feasible only when the constituent processes possess a certain level of modularity (i.e. independence or non-interaction).

Functional Requirements

Define the internal workings of the software: that is, the calculations, technical details, data manipulation and processing and other specific functionality that show how the use cases are to be satisfied. They are supported by non-functional requirements, which impose constraints on the design or implementation (such as performance requirements, security, quality standards, or design constraints).

Functional testing

Testing of the system’s functionality and behavior; the opposite of non-functional testing.

G

Genetic Algorithms

Search procedures that use the mechanics of natural selection and natural genetics, employing evolutionary techniques based on function optimization and artificial intelligence to develop a solution.

Glass Box Testing

Also known as white box testing: a form of testing in which the tester can examine the design documents and the code, as well as analyze and possibly manipulate the internal state of the entity being tested. Glass box testing involves observing at run time the steps taken by algorithms and their internal data.

Goal

The solution that the program or project is trying to reach.

Gorilla Testing

An intense round of testing, quite often redirecting all available resources to the activity. The idea here is to test as much of the application as possible in as short a period of time as possible.

Graphical User Interface (GUI)

A type of display format that enables the user to choose commands, start programs, and see lists of files and other options by pointing to pictorial representations (icons) and lists of menu items on the screen.

Gray-box testing

Testing which uses a combination of white box and black box testing techniques to carry out software debugging on a system whose code the tester has limited knowledge of.

H

Harness

A test environment comprised of stubs and drivers needed to conduct a test.
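
A toy harness might look like this; `total_price`, the stub, and the driver are all hypothetical:

```python
def total_price(items, price_service):
    # Component under test: depends on a lower-level price service.
    return sum(price_service(item) for item in items)

def stub_price_service(item):
    # Stub: replaces the real lower-level component with canned answers.
    return {"apple": 1.0, "pear": 2.0}[item]

def driver():
    # Driver: replaces the calling component, feeds inputs, checks outputs.
    assert total_price(["apple", "pear"], stub_price_service) == 3.0
    print("harness run passed")

driver()
```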

Heuristics

The informal, judgmental knowledge of an application area that constitutes the “rules of good judgment” in the field. Heuristics also encompass the knowledge of how to solve problems efficiently and effectively, how to plan steps in solving a complex problem, how to improve performance, etc.

High Order Tests

High-order testing checks that the software meets customer requirements and that the software, along with other system elements, meets the functional, behavioral, and performance requirements. It uses black-box techniques and requires an outsider perspective. Therefore, organizations often use an Independent Testing Group (ITG) or the users themselves to perform high-order testing. High-order testing includes validation testing, system testing (focusing on aspects such as reliability, security, stress, usability, and performance), and acceptance testing (includes alpha and beta testing). The testing strategy specifies the type of high-order testing that the project requires. This depends on the aspects that are important in a particular system from the user perspective.

I

IEEE 829

An international standard for test documentation published by the IEEE organization. The full name of the standard is IEEE Standard for Software Test Documentation. It includes templates for the test plan, various test reports, and handover documents.

Impact analysis

Techniques that help assess the impact of a change. Used to determine the choice and extent of regression tests needed.

Implementation Testing

See Installation Testing.

Incremental Testing

Partial testing of an incomplete product. The goal of incremental testing is to provide early feedback to software developers.

Incident

A condition that is different from what is expected, such as a deviation from requirements or test cases.

Incident report

See defect report.

Independence

Separation of responsibilities which ensures the accomplishment of objective evaluation.

Independent testing

A type of testing in which testers’ responsibilities are divided up in order to maintain their objectivity. One way to do this is by giving different roles the responsibility for various tests. You can use different sets of test cases to test the system from different points of view.

Independent Test Group (ITG)

A group of people whose primary responsibility is to conduct software testing for other companies.

Infeasible path

A path which cannot be exercised by any set of possible input values.

Inference

Forming a conclusion from existing facts.

Inference Engine

Software that provides the reasoning mechanism in an expert system. In a rule based expert system, typically implements forward chaining and backward chaining strategies.

Infrastructure

The organizational artifacts needed to perform testing, consisting of test environments, automated test tools, office environment and procedures.

Inheritance

The ability of a class to pass on characteristics and data to its descendants.

Input

A variable (whether stored within a component or outside it) that is read by the component.

Input Domain

The set of all possible inputs.

Informal review

A review that isn’t based on a formal procedure.

Inspection

An example of a formal review technique.

Installability

The ability of a software component or system to be installed on a defined target platform allowing it to be run as required. Installation includes both a new installation and an upgrade.

Installability Testing

Testing whether the software or system installation being tested meets predefined installation requirements.

Installation Guide

Supplied instructions on any suitable media, which guides the installer through the installation process. This may be a manual guide, step-by-step procedure, installation wizard, or any other similar process description.

Installation test

A type of test meant to assess whether the system meets the requirements for installation and uninstallation. This could include verifying that the correct files are copied to the machine and that a shortcut is created in the application menu.

Installation Wizard

Supplied software on any suitable media, which leads the installer through the installation process. It shall normally run the installation process, provide feedback on installation outcomes and prompt for options.

Instrumentation

The insertion of additional code into the program in order to collect information about program behavior during program execution.

Instrumentation code

Code that makes it possible to monitor information about the system’s behavior during execution. Used when measuring code coverage, for example.
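
Rather than rewriting source code, a coverage-style sketch in Python can hook the interpreter’s trace facility to record which lines execute; the `absolute` function is a hypothetical test object:

```python
import sys

executed = set()

def tracer(frame, event, arg):
    # Record every executed line of each traced function.
    if event == "line":
        executed.add((frame.f_code.co_name, frame.f_lineno))
    return tracer

def absolute(x):
    if x < 0:
        return -x   # never reached by the test below
    return x

sys.settrace(tracer)
absolute(5)          # exercises only the x >= 0 path
sys.settrace(None)

print(sorted(executed))  # the "return -x" line is absent: a coverage gap
```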

Integration

The process of combining components into larger groups or assemblies.

Integration testing

A test level meant to show that the system’s components work with one another. The goal is to find problems in interfaces and communication between components.

Internal supplier

Developer that belongs to the same organization as the client. The IT department is usually the internal supplier.

Interface Testing

Integration testing where the interfaces between system components are tested.

Isolation Testing

Component testing of individual components in isolation from surrounding components.

ISTQB

International Software Testing Qualifications Board. ISTQB is responsible for international programs for testing certification.

Iteration

A development cycle consisting of a number of phases, from formulation of requirements to delivery of part of an IT system. Common phases are analysis, design, development, and testing. The practice of working in iterations is called iterative development.

J

JUnit

A framework for testing Java applications, specifically designed for automated testing of Java components.

K

KBS (Knowledge Based System)

A domain specific knowledge base combined with an inference engine that processes knowledge encoded in the knowledge base to respond to a user’s request for advice.

Key Performance Indicator

Quantifiable measurements against which specific performance criteria can be set.

Keyword Driven Testing

An approach to test script writing, aimed at code-based automation tools, that separates much of the programming work from the actual test steps. The result is that the test steps can be designed earlier and the code base is often easier to read and maintain.
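
A compact sketch of that separation; the keywords below (`open_page`, `enter_text`, `click`) are hypothetical actions, and a test case is just data rows that can be written before the automation code exists:

```python
def open_page(url):
    print(f"opening {url}")                  # hypothetical browser action

def enter_text(field, value):
    print(f"typing {value!r} into {field}")  # hypothetical UI action

def click(button):
    print(f"clicking {button}")              # hypothetical UI action

KEYWORDS = {"open_page": open_page, "enter_text": enter_text, "click": click}

# The test case itself is plain data, separate from the keyword code.
test_case = [
    ("open_page", ["https://example.com/login"]),
    ("enter_text", ["username", "alice"]),
    ("click", ["submit"]),
]

for keyword, args in test_case:
    KEYWORDS[keyword](*args)
```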

Knowledge Engineering

The process of codifying an expert’s knowledge in a form that can be accessed through an expert system.

Known Error

An incident or problem for which the root cause is known and for which a temporary work-around or a permanent alternative has been identified.

L

LCSAJ

A Linear Code Sequence And Jump, consisting of the following three items (conventionally identified by line numbers in a source code listing): the start of the linear sequence of executable statements, the end of the linear sequence, and the target line to which control flow is transferred at the end of the linear sequence.

LCSAJ Coverage

The percentage of LCSAJs of a component which are exercised by a test case suite.

LCSAJ Testing

A test case design technique for a component in which test cases are designed to execute LCSAJs.

Logic-Coverage Testing

Sometimes referred to as Path Testing, logic-coverage testing attempts to expose software defects by exercising a unique combination of the program’s statements known as a path.

Load testing

A type of performance testing conducted to evaluate the behavior of a component or system with increasing load, e.g. numbers of concurrent users and/or numbers of transactions. Used to determine what load can be handled by the component or system.

Localization Testing

This term refers to testing software that has been adapted for a specific locality. This test is based on the results of globalization testing, which verifies the functional support for that particular culture/locale. Localization testing can be executed only on the localized version of a product.

Log

A chronological record of relevant details about the execution of tests.

Loop Testing

Loop testing is the testing of a resource or resources multiple times under program control.

M

Maintainability

A measure of how easy a given piece of software code is to modify in order to correct defects, improve or add functionality.

Maintenance

Activities for managing a system after it has been released in order to correct defects or to improve or add functionality. Maintenance activities include requirements management, testing, and development, amongst others.

Maintenance Requirements

A specification of the maintenance required for the system/software. Released software often needs to be revised and/or upgraded throughout its lifecycle. Therefore it is essential that the software can be easily maintained, and that any errors found during rework and upgrading can be corrected.

Manual Testing

The oldest type of software testing. Manual testing requires a tester to perform manual test operations on the test software without the help of test automation. Manual testing is a laborious activity that requires the tester to possess a certain set of qualities: to be patient, observant, speculative, creative, innovative, open-minded, resourceful, unopinionated, and skillful.

Metric

A standard of measurement. Software metrics are the statistics describing the structure or content of a program. A metric should be a real objective measurement of something such as number of bugs per lines of code.

Modified Condition/Decision Coverage

The percentage of all branch condition outcomes that independently affect a decision outcome that have been exercised by a test case suite.

Modified Condition/Decision Testing

A test case design technique in which test cases are designed to execute branch condition outcomes that independently affect a decision outcome.

Monkey Testing

Testing a system or an application on the fly, i.e. a test with no specific end result in mind.

Module testing

See component testing.

Multiple Condition Coverage

See Branch Condition Combination Coverage.

Mutation Analysis

A method to determine test case suite thoroughness by measuring the extent to which a test case suite can discriminate the program from slight variants (mutants) of the program.

Mutation Testing

Testing done on the application where bugs are purposely added to it.
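
A sketch of the idea with a hypothetical boundary check: a mutant that survives the whole test suite reveals a coverage gap, while a “killed” mutant is distinguished from the original by at least one test:

```python
def is_adult(age):
    return age >= 18        # original program

def is_adult_mutant(age):
    return age > 18         # mutant: >= deliberately changed to >

# Tests away from the boundary cannot tell the two apart...
assert is_adult(17) == is_adult_mutant(17)   # both False
assert is_adult(30) == is_adult_mutant(30)   # both True

# ...only a test at the boundary value 18 kills the mutant. A suite
# without this case would let the mutant survive, exposing the gap.
assert is_adult(18) != is_adult_mutant(18)
```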

MTBF

Mean time between failures. The average time between failures of a system.

N

N-switch Coverage

The percentage of sequences of N-transitions that have been tested.

N-switch Testing

A form of state transition testing in which test cases are designed to execute all valid sequences of N-transitions.

N-transitions

A sequence of N+1 transitions.

N+1 Testing

A variation of regression testing. Testing conducted with multiple cycles in which errors found in test cycle N are resolved and the solution is retested in test cycle N+1. The cycles are typically repeated until the solution reaches a steady state and there are no errors.

Naming standard

The standard for creating names for variables, functions, and other parts of a program. For example, strName, sName and Name are all technically valid names for a variable, but if you don’t adhere to one structure as the standard, maintenance will be very difficult.

Negative testing

A type of testing intended to show that the system works well even if it is not used correctly. For example, if a user enters text in a numeric field, the system should not crash.

Neural Network

A system modeled after the neurons (nerve cells) in a biological nervous system. A neural network is designed as an interconnected system of processing elements, each with a limited number of inputs and outputs. Rather than being programmed, these systems learn to recognize patterns.

Non-functional Requirements Testing

Testing of those requirements that do not relate to functionality, i.e. performance, usability, etc.

Non-functional testing

Testing of non-functional aspects of the system, such as usability, reliability, maintainability, and performance.

Normalization

A technique for designing relational database tables to minimize duplication of information and, in so doing, to safeguard the database against certain types of logical or structural problems, namely data anomalies.

NUnit

An open source framework for automated testing of components in Microsoft .Net applications.

O

Object

A software structure which represents an identifiable item that has a well-defined role in a problem domain.

Object Oriented

An adjective applied to any system or language that supports the use of objects.

Objective

The purpose of the specific test being undertaken.

Open source

A form of licensing in which software is offered free of charge. Open source software is frequently available via download from the internet, from www.sourceforge.net for example.

Operational testing

Tests carried out when the system has been installed in the operational environment (or simulated operational environment) and is otherwise ready to go live. Intended to test operational aspects of the system, e.g. recoverability, co-existence with other systems and resource consumption.

Oracle

A mechanism to produce the predicted outcomes to compare with the actual outcomes of the software under test.

Outcome

The result after a test case has been executed.

Output

A variable (whether stored within a component or outside it) that is written to by the component.

Output Domain

The set of all possible outputs.

Output Value

An instance of an output.

P

Page Fault

A program interruption that occurs when a page that is marked ‘not in real memory’ is referred to by an active process.

Pair programming

A software development approach where two developers sit together at one computer while programming a new system. While one developer codes, the other makes comments and observations, and acts as a sounding board. The technique has been shown to lead to higher quality thanks to the de facto continuous code review – bugs and errors are avoided because the team catches them as the code is written.

Pair testing

Test approach where two persons, e.g. two testers, a developer and a tester, or an end-user and a tester, work together to find defects. Typically, they share one computer and trade control of it while testing. One tester can act as observer when the other performs tests.

Pairwise Testing

A combinatorial software testing method that, for each pair of input parameters to a system (typically, a software algorithm) tests all possible discrete combinations of those parameters. Using carefully chosen test vectors, this can be done much faster than an exhaustive search of all combinations of all parameters, by “parallelizing” the tests of parameter pairs. The number of tests is typically O(nm), where n and m are the number of possibilities for each of the two parameters with the most choices.
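
A small sketch with three hypothetical parameters of two values each: four hand-picked tests cover every pair, versus eight for the exhaustive product, and the defining property can be checked mechanically:

```python
from itertools import combinations, product

params = {
    "browser": ["chrome", "firefox"],
    "os":      ["linux", "windows"],
    "locale":  ["en", "sv"],
}

# Hand-picked pairwise suite: 4 tests instead of 2 * 2 * 2 = 8 exhaustive ones.
suite = [
    ("chrome",  "linux",   "en"),
    ("chrome",  "windows", "sv"),
    ("firefox", "linux",   "sv"),
    ("firefox", "windows", "en"),
]

# Verify the defining property: every pair of values from every pair of
# parameters appears in at least one test.
names = list(params)
for i, j in combinations(range(len(names)), 2):
    needed = set(product(params[names[i]], params[names[j]]))
    covered = {(test[i], test[j]) for test in suite}
    assert needed <= covered, f"missing pairs for {names[i]} x {names[j]}"
```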

Partial Test Automation

The process of automating parts but not all of the software testing process. If, for example, an oracle cannot reasonably be created, or if fully automated tests would be too difficult to maintain, then a software tools engineer can instead create testing tools to help human testers perform their jobs more efficiently. Testing tools can help automate tasks such as product installation, test data creation, GUI interaction, problem detection (consider parsing or polling agents equipped with oracles), defect logging, etc., without necessarily automating tests in an end-to-end fashion.

Pass

Software is deemed to have passed a test if the actual results of the test matched the expected results.

Pass/Fail Criteria

Decision rules used to determine whether an item under test has passed or failed a test.

Path

A sequence of executable statements of a component, from an entry point to an exit point.

Path Coverage

The percentage of paths in a component exercised by a test case suite.

Path Sensitizing

Choosing a set of input values to force the execution of a component to take a given path.

Path Testing

Used as either black box or white box testing, the procedure itself is similar to a walkthrough. First, a certain path through the program is chosen. Possible inputs and the correct result are written down. Then the program is executed by hand, and its result is compared to the predefined one. Any faults found have to be written down at once.

Performance

The degree to which a system or component accomplishes its designated functions within given constraints regarding processing time and throughput rate.

Performance testing

A test to evaluate whether the system meets performance requirements such as response time or transaction frequency.

Portability

The ease with which the system/software can be transferred from one hardware or software environment to another.

Portability Requirements

A specification of the required portability for the system/software.

Portability Testing

The process of testing the ease with which a software component can be moved from one environment to another. This is typically measured in terms of the maximum amount of effort permitted. Results are expressed in terms of the time required to move the software and complete data conversion and documentation updates.

Positive testing

A test aimed to show that the test object works correctly in normal situations. For example, a test to show that the process of registering a new customer functions correctly when using valid test data.

Post-conditions

Environmental and state conditions that must be fulfilled after a test case or test run has been executed.

Preconditions

Environmental and state conditions that must be fulfilled before the component or system can be tested. May relate to the technical environment or the status of the test object. Also known as prerequisites or preparations.

Prerequisites

See preconditions.

Predicate

A logical expression which evaluates to TRUE or FALSE, normally to direct the execution path in code.

Predication

The choice to execute or not to execute a given instruction.

Predicted Outcome

The behavior expected by the specification of an object under specified conditions.

Priority

The level of importance assigned to e.g. a defect.

Professional tester

A person whose sole job is testing.

Program testing

See component testing.

Process

A course of action which turns inputs into outputs or results.

Process Cycle Test

A black box test design technique in which test cases are designed to execute business procedures and processes.

Progressive Testing

Testing of new features after regression testing of previous features.

Project

A planned undertaking for presentation of results at a specified time in the future.

Prototyping

A strategy in system development in which a scaled down system or portion of a system is constructed in a short time, then tested and improved upon over several iterations.

Pseudo-Random

A series which appears to be random but is in fact generated according to some prearranged sequence.

Q

Quality

The degree to which a component, system or process meets specified requirements and/or user/customer needs and expectations.

Quality assurance (QA)

Systematic monitoring and evaluation of various aspects of a component or system to maximize the probability that minimum standards of quality are being attained.
