What is Software Testing?
Testing is a process used to help identify the correctness, completeness and quality of developed computer software. That said, testing can never completely establish the correctness of computer software.
There are many approaches to software testing, but effective
testing of complex products is essentially a process of investigation, not
merely a matter of creating and following rote procedure. One definition of testing is "the process of
questioning a product in order to evaluate it", where the "questions" are things
the tester tries to do with the product, and the product answers with its
behavior in reaction to the probing of the tester. Although most of the
intellectual processes of testing are nearly identical to those of review or
inspection, the word testing usually connotes the dynamic analysis of the
product—putting the product through its paces.
Testing helps in verifying and validating that the software is working as intended. This involves using static and dynamic methodologies to test the application.
Because of the fallibility of its human designers and its own
abstract, complex nature, software development must be accompanied by quality
assurance activities. It is not unusual for developers to spend 40% of the total
project time on testing. For life-critical software (e.g. flight control,
reactor monitoring), testing can cost 3 to 5 times as much as all other
activities combined. The destructive nature of testing requires that the
developer discard preconceived notions of the correctness of his/her developed
software.
Software Testing Fundamentals
Testing objectives include:
1. Testing is a process of executing a program with the intent of finding an error.
2. A good test case is one that has a high probability of finding an as yet undiscovered error.
3. A successful test is one that uncovers an as yet undiscovered error.
Testing should systematically uncover different classes of errors in a minimum amount of time and with a minimum amount of effort. A secondary benefit of testing is that it demonstrates that the software appears to be working as stated in the specifications. The data collected through testing can also provide an indication of the software's reliability and quality. But, testing cannot show the absence of defect -- it can only show that software defects are present.
Software Testing Types:
Black box testing – Internal system design is not considered
in this type of testing. Tests are based on requirements and functionality.
White box testing – This testing is based on knowledge of
the internal logic of an application’s code. Also known as Glass box Testing.
Internal software and code working should be known for this type of testing.
Tests are based on coverage of code statements, branches, paths, conditions.
Unit testing – Testing of individual software components or modules. Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. It may require developing test driver modules or test harnesses.
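A minimal sketch of what such a test driver can look like, using Python's built-in unittest module; the add() function here stands in for the unit under test and is purely illustrative.

import unittest

def add(a, b):
    # Stand-in for the unit under test.
    return a + b

class TestAdd(unittest.TestCase):
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)

if __name__ == "__main__":
    unittest.main()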
Incremental integration testing – Bottom-up approach to testing, i.e. continuous testing of an application as new functionality is added. Application functionality and modules should be independent enough to test separately. Done by programmers or by testers.
Integration testing – Testing of integrated modules to
verify combined functionality after integration. Modules are typically code
modules, individual applications, client and server applications on a network,
etc. This type of testing is especially relevant to client/server and
distributed systems.
Functional testing – This type of testing ignores the internal parts and focuses on whether the output meets the requirements. Black-box type testing geared to the functional requirements of an application.
System testing – The entire system is tested against the requirements. Black-box type testing that is based on overall requirements specifications and covers all combined parts of a system.
End-to-end testing – Similar to system testing, involves
testing of a complete application environment in a situation that mimics
real-world use, such as interacting with a database, using network
communications, or interacting with other hardware, applications, or systems if
appropriate.
Sanity testing – Testing to determine if a new software version is performing well enough to accept it for a major testing effort. If the application crashes during initial use, the system is not stable enough for further testing and the build or application is sent back to be fixed.
Regression testing – Testing the application as a whole after a modification to any module or functionality. It is difficult to cover the entire system in regression testing, so automation tools are typically used for this type of testing.
Acceptance testing – Normally this type of testing is done to verify that the system meets the customer-specified requirements. Users or customers do this testing to determine whether to accept the application.
Load testing – A performance test to check system behavior under load: testing an application under heavy loads, such as testing a web site under a range of loads to determine at what point the system’s response time degrades or fails.
Stress testing – The system is stressed beyond its specifications to check how and when it fails. Performed under heavy load, such as input volumes beyond storage capacity, complex database queries, or continuous input to the system or database.
Performance testing – Term often used interchangeably with ‘stress’ and ‘load’ testing. Checks whether the system meets performance requirements; different performance and load tools are used for this.
Usability testing – User-friendliness check. Application flow is tested: can a new user understand the application easily, and is proper help documented wherever the user gets stuck? Basically, system navigation is checked in this testing.
Install/uninstall testing – Tested for full, partial, or upgrade install/uninstall processes on different operating systems under different hardware and software environments.
Recovery testing – Testing how well a system recovers from
crashes, hardware failures, or other catastrophic problems.
Security testing – Can the system be penetrated by any hacking technique? Testing how well the system protects against unauthorized internal or external access, and whether the system and database are safe from external attacks.
Compatibility testing – Testing how well software performs in a particular hardware/software/operating system/network environment and in different combinations of the above.
Comparison testing – Comparison of product strengths and
weaknesses with previous versions or other similar products.
Alpha testing – An in-house virtual user environment can be created for this type of testing. Testing is done at the end of development; minor design changes may still be made as a result of such testing.
Beta testing – Testing typically done by end-users or others. Final testing before releasing the application for commercial use.
Software Testing Dictionary
Acceptance Test: Formal tests (often performed by a customer) to determine whether or not a system has satisfied predetermined acceptance criteria. These tests are often used to enable the customer (either internal or external) to determine whether or not to accept a system.
Ad Hoc Testing: Testing carried out using no recognized test case design technique.
Alpha Testing: Testing of a software product or system conducted at the developer’s site by the customer.
Assertion Testing. (NBS): A dynamic analysis technique which inserts assertions about
the relationship between program variables into the program code. The truth of
the assertions is determined as the program executes.
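A small sketch of the idea in Python; the transfer() function and its invariants are hypothetical, but they show assertions about the relationship between program variables being checked as the code executes.

def transfer(balance_from, balance_to, amount):
    total_before = balance_from + balance_to
    balance_from -= amount
    balance_to += amount
    # Assertions state relationships between variables that must hold here.
    assert balance_from + balance_to == total_before, "money created or lost"
    assert balance_from >= 0, "account overdrawn"
    return balance_from, balance_to

print(transfer(100, 50, 30))   # passes both assertions
# transfer(100, 50, 200)       # would raise AssertionError: account overdrawn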
Automated Testing: Software testing which is assisted with software technology that does not require operator (tester) input, analysis, or evaluation.
Background Testing: The execution of normal functional testing while the SUT is exercised by a realistic work load. This work load is processed “in the background” as far as the functional testing is concerned.
Bug: glitch, error, goof, slip, fault, blunder, boner, howler,
oversight, botch, delusion, elision. [B. Beizer, 1990], defect, issue, problem
Beta Testing: Testing conducted at one or more customer sites by the
end-user of a delivered software product or system.
Benchmarks: Programs that provide performance comparison for software,
hardware, and systems.
Benchmarking: A specific type of performance test with the purpose of determining performance baselines for comparison.
Big-bang testing: Integration testing where no incremental testing takes place
prior to all the system's components being combined to form the system
Black box testing: A testing method where the application under test is viewed
as a black box and the internal behavior of the program is completely ignored.
Testing occurs based upon the external specifications. Also known as behavioral
testing, since only the external behaviors of the program are evaluated and
analyzed.
Boundary Value Analysis (BVA): BVA is different from equivalence partitioning in that it focuses on “corner cases” or values that are usually just out of the range defined by the specification. This means that if a function expects all values in the range of negative 100 to positive 1000, test inputs would include negative 101 and positive 1001. BVA attempts to derive boundary values and is often used as a technique for stress, load or volume testing. This type of validation is usually performed after positive functional validation has completed (successfully) using requirements specifications and user documentation.
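A sketch of the boundary cases for the negative 100 to positive 1000 example above; in_range() is a hypothetical stand-in for the function under test.

def in_range(value):
    return -100 <= value <= 1000

# Values at and just outside each boundary.
boundary_cases = {
    -101: False,  # just below the lower bound
    -100: True,   # lower bound
    1000: True,   # upper bound
    1001: False,  # just above the upper bound
}

for value, expected in boundary_cases.items():
    assert in_range(value) == expected, f"failed at boundary value {value}"
print("all boundary cases passed")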
Breadth test : A test suite that exercises the full scope of a
system from a top-down perspective, but does not test any aspect in detail
Cause Effect Graphing: (1) [NBS] Test data selection technique. The input and
output domains are partitioned into classes and analysis is performed to
determine which input classes cause which effect. A minimal set of inputs is
chosen which will cover the entire effect set. (2)A systematic method of
generating test cases representing combinations of conditions. See: testing,
functional.
Clean test: A test whose primary purpose is validation; that is, tests designed to demonstrate the software's correct working. (syn. positive test)
Code Inspection: A manual [formal] testing [error detection] technique where
the programmer reads source code, statement by statement, to a group who ask
questions analyzing the program logic, analyzing the code with respect to a
checklist of historically common programming errors, and analyzing its compliance
with coding standards. Contrast with code audit, code review, code walkthrough.
This technique can also be applied to other software and configuration items.
[G.Myers/NBS] Syn: Fagan Inspection
Code Walkthrough: A manual testing [error detection] technique where program
logic [structure] is traced manually [mentally] by a group with a small set of
test cases, while the state of program variables is manually monitored, to
analyze the programmer’s logic and assumptions.[G.Myers/NBS] Contrast with code
audit, code inspection, code review.
Coexistence Testing: Coexistence isn't enough. It also depends on load order, how virtual space is mapped at the moment, hardware and software configurations, and the history of what took place hours or days before. It's probably an exponentially hard problem rather than a square-law problem.
Compatibility bug : A revision to the framework breaks a previously
working feature: a new feature is inconsistent with an old feature, or a new
feature breaks an unchanged application rebuilt with the new framework code.
Compatibility Testing: The process of determining the ability of two or more systems to exchange information. In a situation where the developed software replaces an already working program, an investigation should be conducted to assess possible compatibility problems between the new software and other programs or systems.
Composability testing : Testing the ability of the interface to let users do more
complex tasks by combining different sequences of simpler, easy-to-learn tasks.
Condition Coverage: A test coverage criteria requiring enough test cases such
that each condition in a decision takes on all possible outcomes at least once,
and each point of entry to a program or subroutine is invoked at least once.
Contrast with branch coverage, decision coverage, multiple condition coverage,
path coverage, statement coverage.[G.Myers]
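A small illustration, assuming a hypothetical may_enter() decision with two conditions; across the three test cases each condition takes both outcomes at least once.

def may_enter(age, has_id):
    return age >= 18 and has_id

test_cases = [
    (25, True),    # age >= 18: True,  has_id: True  -> decision True
    (16, True),    # age >= 18: False                -> decision False
    (25, False),   # age >= 18: True,  has_id: False -> decision False
]
for age, has_id in test_cases:
    print((age, has_id), "->", may_enter(age, has_id))
# Each condition evaluates to both True and False across the cases;
# multiple condition coverage would additionally require the
# (False, False) combination.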
Conformance directed testing: Testing that seeks to establish conformance to requirements
or specification.
CRUD Testing: Build a CRUD matrix and test all object creations, reads, updates, and deletions.
Data-Driven testing: An automation approach in which the navigation and
functionality of the test script is directed through external data; this
approach separates test and control data from the test script. [Daniel J.
Mosley, 2002]
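A minimal sketch of the approach in Python; in practice the rows would live in an external CSV file or spreadsheet rather than an in-memory string, and login() is a hypothetical function under test.

import csv, io

CASES = io.StringIO(
    "username,password,expected\n"
    "admin,secret,True\n"
    "admin,wrong,False\n"
    "guest,secret,False\n"
)

def login(username, password):
    # Hypothetical stand-in for the application function under test.
    return username == "admin" and password == "secret"

for row in csv.DictReader(CASES):
    expected = row["expected"] == "True"
    actual = login(row["username"], row["password"])
    print("PASS" if actual == expected else "FAIL", row["username"])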
Data flow testing: Testing in which test cases are designed based on variable
usage within the code.
Database testing: Check the integrity of database field values. [William E.
Lewis, 2000]
Defect: The difference between the functional specification (including user documentation) and the actual program text (source code and data). Often reported as a problem and stored in a defect-tracking and problem-management system.
Defect : Also called a fault or a bug, a defect is an
incorrect part of code that is caused by an error. An error of commission
causes a defect of wrong or extra code. An error of omission results in a
defect of missing code. A defect may cause one or more failures.
Depth test: A test case that exercises some part of a system to a significant level of detail.
Decision Coverage: A test coverage criteria requiring enough test cases such
that each decision has a true and false result at least once, and that each
statement is executed at least once. Syn: branch coverage. Contrast with condition
coverage, multiple condition coverage, path coverage, statement coverage.
Dirty testing : Negative testing.
Dynamic testing: Testing, based on specific test cases, by execution of the
test object or running programs
End-to-End testing: Similar to system testing; the ‘macro’ end of the test
scale; involves testing of a complete application environment in a situation
that mimics real-world use, such as interacting with a database, using network
communications, or interacting with other hardware, applications, or systems if
appropriate.
Equivalence Partitioning: An approach where classes of inputs are categorized for product or function validation. This usually does not include combinations of input, but rather a single representative value from each class. For example, for a given function there may be several classes of input that may be used for positive testing. If a function expects an integer and receives an integer as input, this would be considered a positive test assertion. On the other hand, if a character or any other input class other than integer is provided, this would be considered a negative test assertion or condition.
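A sketch of the integer example above; parse_age() and its classes are illustrative only, with one representative value chosen per partition.

def parse_age(value):
    # Hypothetical function under test.
    if not isinstance(value, int):
        raise TypeError("age must be an integer")
    if value < 0 or value > 150:
        raise ValueError("age out of range")
    return value

partitions = {
    "valid integer":     (30, None),        # positive test assertion
    "negative integer":  (-5, ValueError),  # negative test assertion
    "non-integer input": ("x", TypeError),  # negative test assertion
}

for name, (value, expected_error) in partitions.items():
    try:
        parse_age(value)
        result = None
    except Exception as exc:
        result = type(exc)
    print(name, "->", "PASS" if result == expected_error else "FAIL")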
Error: An error is a mistake of commission or omission that a
person makes. An error causes a defect. In software development one error may
cause one or more defects in requirements, designs, programs, or tests.
Errors: The amount by which a result is incorrect. Mistakes are
usually a result of a human action. Human mistakes (errors) often result in
faults contained in the source code, specification, documentation, or other
product deliverable. Once a fault is encountered, the end result will be a
program failure. The failure usually has some margin of error, either high,
medium, or low.
Error Guessing: Another common approach to black-box validation. Black-box testing is when everything other than the source code may be used for testing. This is the most common approach to testing. Error guessing is when random inputs or conditions are used for testing. Random in this case includes a value either produced by a computerized random number generator, or an ad hoc value or test condition provided by an engineer.
Error guessing: A test case design technique where the experience of the
tester is used to postulate what faults exist, and to design tests specially to
expose them
Error seeding: The purposeful introduction of faults into a program to
test effectiveness of a test suite or other quality assurance program.
Exception Testing: Identify error messages and exception-handling processes and the conditions that trigger them.
Exhaustive Testing.(NBS): Executing the program with all possible combinations of
values for program variables. Feasible only for small, simple programs.
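A sketch of exhaustive testing over a deliberately tiny input space; the majority() function is hypothetical and has only eight possible input combinations.

from itertools import product

def majority(a, b, c):
    # Function under test: True when at least two inputs are True.
    return (a and b) or (a and c) or (b and c)

# Execute the program with every possible combination of inputs.
for a, b, c in product([False, True], repeat=3):
    expected = sum([a, b, c]) >= 2
    assert majority(a, b, c) == expected, (a, b, c)
print("all 8 combinations verified")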
Exploratory Testing: An interactive process of concurrent product exploration,
test design, and test execution. The heart of exploratory testing can be stated
simply: The outcome of this test influences the design of the next test.
Failure: A failure is a deviation from expectations exhibited by
software and observed as a set of symptoms by a tester or user. A failure is
caused by one or more defects. The Causal Trail. A person makes an error that
causes a defect that causes a failure.[Robert M. Poston, 1996]
Follow-up testing: We vary a test that yielded a less-than-spectacular failure. We vary the operation, data, or environment, asking whether the underlying fault in the code can yield a more serious failure or a failure under a broader range of circumstances.
Formal Testing. (IEEE): Testing conducted in accordance with test plans and
procedures that have been reviewed and approved by a customer, user, or
designated level of management. Antonym: informal testing
Free Form Testing: Ad hoc or brainstorming using intuition to define test
cases.
Functional Decomposition Approach: An automation method in which the test cases are reduced to
fundamental tasks, navigation, functional tests, data verification, and return
navigation; also known as Framework Driven Approach. [Daniel J. Mosley, 2002]
Functional testing: Application of test data derived from the specified
functional requirements without regard to the final program structure. Also
known as black-box testing.
Gray box testing: Tests involving inputs and outputs, but test design is educated by information about the code or the program operation of a kind that would normally be out of the scope of view of the tester.
Gray box testing: Tests designed based on knowledge of algorithms, internal states, architectures, or other high-level descriptions of the program behavior. [Doug Hoffman]
Gray box testing: Examines the activity of back-end components during test case execution. Two types of problems can be encountered during gray-box testing: (1) A component encounters a failure of some kind, causing the operation to be aborted. The user interface will typically indicate that an error has occurred. (2) The test executes in full, but the content of the results is incorrect. Somewhere in the system, a component processed data incorrectly, causing the error in the results.
Inspection: A formal evaluation technique in which software requirements, design, or code are examined in detail by a person or group other than the author to detect faults, violations of development standards, and other problems. A quality improvement process for written material that consists of two dominant components: product (document) improvement and process improvement (document production and inspection).
Integration: The process of combining software components or hardware
components or both into overall system.
Integration testing: testing of combined parts of an application to determine if
they function together correctly. The ‘parts’ can be code modules, individual
applications, client and server applications on a network, etc. This type of
testing is especially relevant to client/server and distributed systems.
Integration Testing: Testing conducted after unit and feature testing. The intent is to expose faults in the interactions between software modules and functions. Either top-down or bottom-up approaches can be used. A bottom-up method is preferred, since it leads to earlier unit testing (step-level integration). This method is contrary to the big-bang approach, where all source modules are combined and tested in one step. The big-bang approach to integration should be discouraged.
Interface Tests: Programs that provide test facilities for external
interfaces and function calls. Simulation is often used to test external
interfaces that currently may not be available for testing or are difficult to
control. For example, hardware resources such as hard disks and memory may be
difficult to control. Therefore, simulation can provide the characteristics or
behaviors for specific function.
Internationalization testing (I18N): Testing related to handling foreign text and data within the program. This would include sorting, importing and exporting text and data, correct handling of currency and date and time formats, string parsing, upper- and lower-case handling, and so forth.
Interoperability Testing: Measures the ability of your software to communicate across the network on multiple machines from multiple vendors, each of whom may have interpreted a design specification critical to your success differently.
Inter-operability Testing: True inter-operability testing concerns testing for unforeseen interactions with other packages with which your software has no direct connection. In some quarters, inter-operability testing labor equals all other testing combined. This is the kind of testing that I say shouldn't be done because it can be done. [from Quality Is Not The Goal, by Boris Beizer, Ph.D.]
Lateral testing: A test design technique based on lateral thinking principles, to identify faults. [Dorothy Graham, 1999]
Load testing: Testing an application under heavy loads, such as testing of
a web site under a range of loads to determine at what point the system’s
response time degrades or fails.
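A minimal load-generation sketch; send_request() is a stub that simulates a network call, and a real load test would target an actual system and typically use dedicated tooling.

import time, random
from concurrent.futures import ThreadPoolExecutor

def send_request(i):
    # Stand-in for a real network call; returns the response time.
    start = time.perf_counter()
    time.sleep(random.uniform(0.01, 0.05))
    return time.perf_counter() - start

for users in (10, 50, 100):                 # increasing levels of load
    with ThreadPoolExecutor(max_workers=users) as pool:
        times = list(pool.map(send_request, range(users * 5)))
    print(f"{users:>3} users: avg {sum(times)/len(times)*1000:.1f} ms, "
          f"max {max(times)*1000:.1f} ms")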
Load-stress test: A test designed to determine how heavy a load the application can handle.
Load-stability test: A test designed to determine whether a Web application will remain serviceable over an extended time span.
Load isolation test: The workload for this type of test is designed to contain only the subset of test cases that caused the problem in previous testing.
Monkey Testing (smart monkey testing): Inputs are generated from probability distributions that reflect actual expected usage statistics — e.g., from user profiles. There are different levels of IQ in smart monkey testing. In the simplest, each input is considered independent of the other inputs. That is, a given test requires an input vector with five components. In low IQ testing, these would be generated independently. In high IQ monkey testing, the correlation (e.g., the covariance) between these input distributions is taken into account. In all branches of smart monkey testing, the input is considered as a single event.
Maximum Simultaneous Connection testing: A test performed to determine the number of connections which the firewall or Web server is capable of handling.
Mutation testing: A testing strategy where small variations to a program are inserted (a mutant), followed by execution of an existing test suite. If the test suite detects the mutant, the mutant is “retired.” If undetected, the test suite must be revised.
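A toy sketch: the mutant changes a single operator, and the existing test suite is run against both versions to see whether it detects (kills) the mutant.

def is_adult(age):
    return age >= 18        # original

def is_adult_mutant(age):
    return age > 18         # mutant: >= changed to >

def test_suite(fn):
    # Existing test suite for the function under test.
    return fn(20) is True and fn(18) is True and fn(10) is False

print("original passes:", test_suite(is_adult))             # True
print("mutant killed:  ", not test_suite(is_adult_mutant))  # True, because
# the test case fn(18) distinguishes the mutant from the original; if the
# suite had missed the boundary case, it would need to be revised.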
Multiple Condition Coverage: A test coverage criteria which requires enough test cases
such that all possible combinations of condition outcomes in each decision, and
all points of entry, are invoked at least once. Contrast with branch coverage,
condition coverage, decision coverage, path coverage, statement coverage.
Negative test: A test whose primary purpose is falsification; that is, tests designed to break the software. [B. Beizer, 1995]
Orthogonal array testing: A technique that can be used to reduce the number of combinations and provide maximum coverage with a minimum number of test cases. Note that it is an old and proven technique: the orthogonal array was introduced for the first time by Plackett and Burman in 1946 and was implemented by G. Taguchi in 1987.
Orthogonal array testing: Mathematical technique to determine which variations of
parameters need to be tested.
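A sketch using the standard L4(2^3) orthogonal array: four test cases cover every pairwise combination of three two-level parameters instead of all eight combinations. The parameter names here are illustrative only.

from itertools import combinations

levels = {
    "browser": ["Firefox", "Chrome"],
    "os":      ["Windows", "Linux"],
    "locale":  ["en", "de"],
}
L4 = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]   # standard L4(2^3) array

names = list(levels)
for row in L4:
    print({name: levels[name][idx] for name, idx in zip(names, row)})

# Check that every pair of columns contains all four level combinations.
for i, j in combinations(range(3), 2):
    pairs = {(row[i], row[j]) for row in L4}
    assert pairs == {(0, 0), (0, 1), (1, 0), (1, 1)}
print("all parameter pairs covered")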
Oracle: Test Oracle: a mechanism to produce the predicted outcomes
to compare with the actual outcomes of the software under test
Parallel Testing: Testing a new or an alternate data processing system with the same source data that is used in another system. The other system is considered as the standard of comparison. Syn: parallel run.
Penetration testing: The process of attacking a host from outside to ascertain
remote security vulnerabilities.
Performance Testing: Testing conducted to evaluate the compliance of a system or
component with specific performance requirements
Performance testing can be undertaken to: 1) show that the system meets
specified performance objectives, 2) tune the system, 3) determine the factors
in hardware or software that limit the system’s performance, and 4) project the
system’s future load- handling capacity in order to schedule its replacements”
[Software System Testing and Quality Assurance. Beizer, 1984, p. 256]
Prior Defect History Testing: Test cases are created or rerun for every defect found in
prior tests of the system.
Qualification Testing. (IEEE): Formal testing, usually conducted by the developer for the consumer, to demonstrate that the software meets its specified requirements. See: acceptance testing.
Quality. The degree to which a program possesses a desired
combination of attributes that enable it to perform its specified end use.
Quality Assurance (QA): Consists of planning, coordinating and other strategic activities associated with measuring product quality against external requirements and specifications (process-related activities).
Quality Control (QC): Consists of monitoring, controlling and other tactical activities associated with the measurement of product quality goals.
Our definition of Quality: Achieving the target (not conformance to requirements as
used by many authors) & minimizing the variability of the system under test
Race condition defect: Many concurrent defects result from data-race conditions. A data-race condition may be defined as two accesses to a shared variable, at least one of which is a write, with no mechanism used by either to prevent simultaneous access. However, not all race conditions are defects.
Recovery testing: Testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.
Regression Testing: Testing conducted for the purpose of evaluating whether or not a change to the system (all CM items) has introduced a new failure. Regression testing is often accomplished through the construction, execution and analysis of product and system tests.
Regression Testing: Testing that is performed after making a functional improvement or repair to the program. Its purpose is to determine if the change has regressed other aspects of the program.
Reengineering: The process of examining and altering an existing system to
reconstitute it in a new form. May include reverse engineering (analyzing a
system and producing a representation at a higher level of abstraction, such as
design from code), restructuring (transforming a system from one representation
to another at the same level of abstraction), redocumentation (analyzing a
system and producing user and support documentation), forward engineering
(using software products derived from an existing system, together with new
requirements, to produce a new system), and translation (transforming source
code from one language to another or from one version of a language to
another).
Reference testing: A way of deriving expected outcomes by manually validating
a set of actual outcomes. A less rigorous alternative to predicting expected
outcomes in advance of test execution.
Reliability testing: Verify the probability of failure free operation of a
computer program in a specified environment for a specified time.
Reliability of an object is defined
as the probability that it will not fail under specified conditions, over a
period of time. The specified conditions are usually taken to be fixed, while
the time is taken as an independent variable. Thus reliability is often written
R(t) as a function of time t, the probability that the object will not fail
within time t.
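As a hedged illustration only, assuming the common exponential reliability model R(t) = exp(-λt), which the definition above does not prescribe; the failure rate below is purely illustrative.

import math

failure_rate = 0.002     # assumed failures per hour (illustrative value)

def reliability(t_hours, rate=failure_rate):
    # Probability of failure-free operation up to time t under the
    # assumed exponential model.
    return math.exp(-rate * t_hours)

for t in (10, 100, 1000):
    print(f"R({t} h) = {reliability(t):.3f}")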
Any computer user would probably
agree that most software is flawed, and the evidence for this is that it does
fail. All software flaws are designed in — the software does not break, rather
it was always broken. But unless conditions are right to excite the flaw, it
will go unnoticed — the software will appear to work properly.
Range Testing: For each input, identifies the range over which the system behavior should be the same.
Risk management: An organized process to identify what can go wrong, to quantify and assess associated risks, and to implement/control the appropriate approach for preventing or handling each risk identified.
Robust test: A test that compares a small amount of information, so that unexpected side effects are less likely to affect whether the test passes or fails.
Sanity Testing: Typically an initial testing effort to determine if a new software version is performing well enough to accept it for a major testing effort. For example, if the new software is often crashing systems, bogging down systems to a crawl, or destroying databases, the software may not be in a ‘sane’ enough condition to warrant further testing in its current state.
Scalability testing is a subtype of performance test where performance
requirements for response time, throughput, and/or utilization are tested as
load on the SUT is increased over time.
Sensitive test: A test that compares a large amount of information, so that it is more likely to detect unexpected differences between the actual and expected outcomes of the test.
Smoke test: An initial set of tests that determine whether a new version of an application performs well enough for further testing.
Specification-based test: A test whose inputs are derived from a specification.
Spike testing: Testing performance or recovery behavior when the system under test (SUT) is stressed with a sudden and sharp increase in load; should be considered a type of load test.
State-based testing: Testing with test cases developed by modeling the system
under test as a state machine
State Transition Testing: Technique in which the states of a system are first identified and then test cases are written to test the triggers that cause a transition from one state to another.
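A sketch modeling a hypothetical order workflow as a state machine; the test cases exercise both a legal path through every transition and an illegal trigger.

TRANSITIONS = {
    ("new", "pay"):       "paid",
    ("paid", "ship"):     "shipped",
    ("shipped", "close"): "closed",
}

def next_state(state, event):
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"illegal transition: {event} from {state}")

# Valid path through every transition.
state = "new"
for event in ("pay", "ship", "close"):
    state = next_state(state, event)
assert state == "closed"

# Invalid trigger: shipping an unpaid order must be rejected.
try:
    next_state("new", "ship")
    print("FAIL: illegal transition accepted")
except ValueError:
    print("PASS: illegal transition rejected")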
Static testing: Source code analysis. Analysis of source code to expose
potential defects.
Statistical testing: A test case design technique in which a model is used of
the statistical distribution of the input to construct representative test cases.
Stealth bug. A bug that removes information useful for its diagnosis and
correction.
Storage test: Studies how memory and space are used by the program, either in resident memory or on disk. If there are limits on these amounts, storage tests attempt to prove that the program will exceed them.
Stress / Load / Volume test: Tests that provide a high degree of activity, either using
boundary conditions as inputs or multiple copies of a program executing in
parallel as examples.
Structural Testing: (1) (IEEE) Testing that takes into account the internal mechanism [structure] of a system or component. Types include branch testing, path testing, statement testing. (2) Testing to ensure each program statement is made to execute during testing and that each program statement performs its intended function. Contrast with functional testing. Syn: white-box testing, glass-box testing, logic-driven testing.
System testing: Black-box type testing that is based on overall requirements
specifications; covers all combined parts of a system.
Test Bed. An environment containing the hardware, instrumentation,
simulators, software tools, and other support elements needed to conduct a
test.
Test Case: A set of test inputs, executions, and expected results
developed for a particular objective.
Test conditions: The set of circumstances that a test invokes.
Test Coverage: The degree to which a given test or set of tests addresses
all specified test cases for a given system or component.
Test Criteria: Decision rules used to determine whether software item or
software feature passes or fails a test.
Test data. The actual (set of) values used in the test or that are
necessary to execute the test.
Test Documentation. (IEEE): Documentation describing plans for, or results of, the testing of a system or component. Types include test case specification, test incident report, test log, test plan, test procedure, test report.
Test Driver : A software module or application used to invoke a
test item and, often, provide test inputs (data), control and monitor
execution. A test driver automates the execution of test procedures.
Test Harness : A system of test drivers and other tools to
support test execution (e.g., stubs, executable test cases, and test drivers).
See: test driver.
Test Item: A software item which is the object of testing.
Test Log :A chronological record of all relevant details about the
execution of a test.
Test Plan: A high-level document that defines a testing project so that
it can be properly measured and controlled. It defines the test strategy and
organized elements of the test life cycle, including resource requirements,
project schedule, and test requirements
Test Procedure: A document providing detailed instructions for the [manual] execution of one or more test cases. Often called a manual test script.
Test strategy: Describes the general approach and objectives of the test
activities.
Test Status: The assessment of the result of running tests on software.
Test Stub: A dummy software component or object used (during
development and testing) to simulate the behaviour of a real component. The
stub typically provides test output.
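A minimal sketch of a stub; PaymentGatewayStub and checkout() are illustrative names, not a real API, and the stub simply provides canned test output in place of the real component.

class PaymentGatewayStub:
    def charge(self, amount):
        # Always approves; a real gateway would contact an external service.
        return {"approved": True, "amount": amount}

def checkout(cart_total, gateway):
    result = gateway.charge(cart_total)
    return "order confirmed" if result["approved"] else "payment declined"

print(checkout(49.99, PaymentGatewayStub()))   # -> order confirmed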
Test Suites: A test suite consists of multiple test cases (procedures and data) that are combined and often managed by a test harness.
Test Tree: A physical implementation of a test suite.
Testability: Attributes of software that bear on the effort needed for
validating the modified software
Testing: The execution of tests with the intent of proving that the system and application under test does or does not perform according to the requirements specification.
Unit Testing: Testing performed to isolate and expose faults and failures as soon as the source code is available, regardless of the external interfaces that may be required. Oftentimes, the detailed design and requirements documents are used as a basis to compare how and what the unit is able to perform. White- and black-box testing methods are combined during unit testing.
Usability testing. Testing for ‘user-friendliness’. Clearly this is
subjective, and will depend on the targeted end-user or customer.
Validation: The comparison between the actual characteristics of something (e.g. a product of a software project) and the expected characteristics. Validation is checking that you have built the right system.
Verification: The comparison between the actual characteristics of something (e.g. a product of a software project) and the specified characteristics. Verification is checking that we have built the system right.
Volume testing: Testing where the system is subjected to large volumes of
data.
Walkthrough: In the most usual form of term, a walkthrough is step by
step simulation of the execution of a procedure, as when walking through code
line by line, with an imagined set of inputs. The term has been extended to the
review of material that is not procedural, such as data descriptions, reference
manuals, specifications, etc.
White Box Testing (glass-box): Testing done under a structural testing strategy that requires complete access to the object’s structure, that is, the source code.