What are the different types of testing techniques?

Answer Posted / ravi mayuri

1. Software Testing Techniques

The importance of software testing and its impact on
software cannot be overestimated. Software testing is a
fundamental component of software quality assurance and
represents a review of specification, design and coding.
The greater visibility of software systems and the cost
associated with software failure are motivating factors for
planned, thorough testing. It is not uncommon for a
software organization to spend 40% of its effort on testing.

1.1 Software Testing Fundamentals

During testing the software engineer produces a series of
test cases that are used to “rip apart” the software they
have produced. Testing is the one step in the software
process that can be seen by the developer as destructive
instead of constructive. Software engineers are typically
constructive people, and testing requires them to overcome
preconceived notions of correctness and deal with conflicts
when errors are identified.

1.1.1 Testing objectives

A number of rules that act as testing objectives are:

Testing is a process of executing a program with the aim of
finding errors.
A good test case will have a good chance of finding an
undiscovered error.
A successful test case uncovers a new error.

1.1.2 Test information flow

Information flow for testing follows the pattern shown in
the figure below. Two types of input are given to the test
process: (1) a software configuration; (2) a test
configuration. Tests are performed, all outcomes are
considered, and test results are compared with expected
results. When erroneous data is identified, an error is
implied and debugging begins. The debugging procedure is
the most unpredictable element of the testing procedure. An
“error” that indicates a discrepancy of 0.01 percent between
the expected and the actual results can take hours, days or
months to identify and correct. It is the uncertainty in
debugging that causes testing to be difficult to schedule
reliably.

[Figure: Test information flow]

1.1.3 Test case design

The design of software tests can be a challenging process.
However, software engineers often see testing as an
afterthought, producing test cases that feel right but have
little assurance that they are complete. The objective of
testing is to have the highest likelihood of finding the
most errors with a minimum amount of time and effort. A
large number of test case design methods have been developed
that offer the developer a systematic approach to testing.
These methods can help ensure the completeness of tests and
offer the highest likelihood for uncovering errors in
software.

Any engineering product can be tested in one of two ways:
(1) knowing the specified functions that the product has
been designed to perform, tests can be performed that show
that each function is fully operational; (2) knowing the
internal workings of a product, tests can be performed to
see that the internal operations mesh. The first test
approach is known as black box testing and the second as
white box testing.

Black box testing relates to tests that are performed at the
software interface. Although they are designed to identify
errors, black box tests are also used to demonstrate that
software functions are operational: that inputs are
correctly accepted and output is correctly produced. A
black box test considers elements of the system with little
interest in the internal logical arrangement of the
software. White box testing of software involves a closer
examination of procedural detail. Logical paths through the
software are considered by providing test cases that
exercise particular sets of conditions and/or loops. The
status of the system can be examined at diverse points to
establish whether the expected status matches the actual
status.

1.2 White Box Testing

White box testing is a test case design approach that
employs the control structure of the procedural design to
produce test cases. Using white box testing approaches, the
software engineer can produce test cases that (1) guarantee
that all independent paths in a module have been exercised
at least once, (2) exercise all logical decisions on their
true and false sides, (3) execute all loops at their
boundaries and within their operational bounds, and (4)
exercise internal data structures to ensure their validity.

1.3 Basis Path Testing

Basis path testing is a white box testing technique that
allows the test case designer to produce a logical
complexity measure of a procedural design and use this
measure as a guide for defining a basis set of execution
paths. Test cases are produced to exercise each statement
in the program at least once during testing.

1.3.1 Flow Graphs

The flow graph can be used to represent the logical control
flow and therefore all the execution paths that need
testing. To illustrate the use of flow graphs, consider the
procedural design depicted in the flow chart below. This is
mapped into the flow graph below, where the circles are
nodes that represent one or more procedural statements and
the arrows, called edges, represent the flow of control.
Each node that contains a condition is known as a predicate
node, and has two or more edges coming from it.

[Figure: Flow chart]

[Figure: Flow graph]


1.3.2 Cyclomatic Complexity

As we have seen before, McCabe's cyclomatic complexity is a
software metric that offers an indication of the logical
complexity of a program. When used in the context of the
basis path testing approach, the value determined for
cyclomatic complexity defines the number of independent
paths in the basis set of a program and offers an upper
bound for the number of tests that ensure all statements
have been executed at least once. An independent path is
any path through the program that introduces at least one
new group of processing statements or a new condition. A
set of independent paths for the example flow graph is:

Path 1: 1-11
Path 2: 1-2-3-4-5-10-1-11
Path 3: 1-2-3-6-8-9-10-11
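
The computation behind this measure can be sketched directly
from an edge list. Below is a minimal illustration (the edge
list is hypothetical, not taken from the figure) of McCabe's
formula V(G) = E - N + 2:

```python
# Illustrative flow graph (hypothetical edges): node 2 is a loop
# predicate, node 3 an if-else inside the loop, node 6 the exit.
edges = [(1, 2), (2, 3), (2, 6), (3, 4), (3, 5), (4, 2), (5, 2)]
nodes = {n for edge in edges for n in edge}

# McCabe: V(G) = E - N + 2 for a connected flow graph.
cyclomatic_complexity = len(edges) - len(nodes) + 2
print(cyclomatic_complexity)
```

The result bounds the number of basis paths a tester must
exercise for this graph.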

1.3.3 Deriving Test Cases

The basis path testing method can be applied to a detailed
procedural design or to source code. Basis path testing can
be seen as a set of steps.

Using the design or code as the basis, draw an appropriate
flow graph.
Determine the cyclomatic complexity of the resultant flow graph.
Determine a basis set of linearly independent paths.
Prepare test cases that will force execution of each path in
the basis set.
Data should be selected so that conditions at the predicate
nodes are tested. Each test case is executed and compared
with the expected result. Once all test cases have been
completed, the tester can be sure that all statements in the
program have been executed at least once.

1.3.4 Graph Matrices

The procedure involved in producing the flow graph and
establishing a set of basis paths can be mechanized. To
produce a software tool that helps in basis path testing, a
data structure called a graph matrix can be quite helpful.
A graph matrix is a square matrix whose size equals the
number of identified nodes, and matrix entries correspond to
the edges between nodes. A basic flow graph and its
associated graph matrix are shown below.

[Figure: Flow graph]




              Connection to node
  Node     1     2     3     4     5
   1             a
   2                   b
   3                         d,c   f
   4
   5             e           g

Graph Matrix

In the graph and matrix each node is represented by a
number and each edge by a letter. A letter is entered into
the matrix corresponding to a connection between two nodes.
By adding a link weight for each matrix entry, the graph
matrix can be used to examine program control structure
during testing. In its basic form the link weight is 1 or
0. The link weights can be given more interesting
characteristics:

The probability that a link will be executed.
The processing time expended during traversal of a link.
The memory required during traversal of a link.

Represented in this form the graph matrix is called a
connection matrix.


              Connection to node
  Node     1     2     3     4     5    Connections
   1             1                      1-1=0
   2                   1                1-1=0
   3                         1,1   1    3-1=2
   4                                    0
   5             1           1          2-1=1

Cyclomatic complexity is 2+1=3

Connection matrix
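
As a sketch of how the tabulation above can be mechanized,
the connection matrix can be stored as a list of rows and
the connection counts derived by summing each row; the
final figure V(G) = 2 + 1 = 3 follows the text's
calculation:

```python
# Connection matrix from the example; entry [i][j] counts the edges
# from node i+1 to node j+1 (node 3 -> 4 has parallel edges d, c).
matrix = [
    [0, 1, 0, 0, 0],  # node 1
    [0, 0, 1, 0, 0],  # node 2
    [0, 0, 0, 2, 1],  # node 3
    [0, 0, 0, 0, 0],  # node 4
    [0, 1, 0, 1, 0],  # node 5
]

# Connections per row: (row sum - 1) for rows that have entries.
connections = [max(sum(row) - 1, 0) for row in matrix]
cyclomatic_complexity = sum(connections)
print(connections, cyclomatic_complexity)
```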



1.4 Control Structure Testing

Although basis path testing is simple and highly effective,
it is not enough in itself. Next we consider variations on
control structure testing that broaden testing coverage and
improve the quality of white box testing.

1.4.1 Condition Testing

Condition testing is a test case design approach that
exercises the logical conditions contained in a program
module. A simple condition is a Boolean variable or a
relational expression, possibly with one NOT operator. A
relational expression takes the form

E1 <relational-operator> E2

where E1 and E2 are arithmetic expressions and the
relational operator is one of the following: <, <=, =, !=
(nonequality), >, or >=. A compound condition is made up of
two or more simple conditions, Boolean operators, and
parentheses. We assume that the Boolean operators allowed
in a compound condition include OR, AND and NOT.

The condition testing method concentrates on testing each
condition in a program. The purpose of condition testing is
to determine not only errors in the conditions of a program
but also other errors in the program. A number of condition
testing approaches have been identified. Branch testing is
the most basic. For a compound condition, C, the true and
false branches of C and each simple condition in C must be
executed at least once.

Domain testing requires three or four tests to be produced
for a relational expression. For a relational expression of
the form

E1 <relational-operator> E2

three tests are required, making the value of E1 greater
than, equal to, and less than the value of E2, respectively.
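
A minimal sketch of the domain-testing idea, using a
hypothetical helper that builds the three required cases for
the expression E1 < E2:

```python
# Hypothetical domain tests for the relational expression E1 < E2:
# make E1 less than, equal to, and greater than E2 to probe errors
# in the relational operator or at the expression boundary.
def domain_tests(e2):
    """Return (e1, e2) pairs with e1 < e2, e1 == e2, e1 > e2."""
    return [(e2 - 1, e2), (e2, e2), (e2 + 1, e2)]

results = [e1 < e2 for e1, e2 in domain_tests(10)]
print(results)
```

Only the first case should make the condition true; the
other two expose off-by-one or wrong-operator errors.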

1.4.2 Data Flow Testing

The data flow testing method chooses test paths of a
program based on the locations of definitions and uses of
variables in the program. Various data flow testing
approaches have been examined. For data flow testing, each
statement in the program is allocated a unique statement
number, and it is assumed that each function does not alter
its parameters or global variables. For a statement with S
as its statement number,

DEF(S) = {X| statement S contains a definition of X}

USE(S) = {X| statement S contains a use of X}

If statement S is an if or loop statement, its DEF set is
left empty and its USE set is founded on the condition of
statement S. The definition of a variable X at statement S
is live at statement S’ if there exists a path from
statement S to S’ which does not contain any other
definition of X.

A definition-use chain (or DU chain) of variable X is of the
form [X, S, S’] where S and S’ are statement numbers, X is
in DEF(S) and USE(S’), and the definition of X in statement
S is live at statement S’.

One basic data flow testing strategy is that each DU chain
be covered at least once. Data flow testing strategies are
helpful for choosing test paths of a program including
nested if and loop statements.
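
The DU-chain definition above can be sketched in a few
lines. The DEF/USE sets below describe a hypothetical
four-statement program, and the liveness check assumes
straight-line statement ordering:

```python
# Hypothetical DEF/USE sets keyed by statement number.
DEF = {1: {"x"}, 2: {"y"}, 3: {"x"}}
USE = {2: {"x"}, 3: {"x", "y"}, 4: {"x", "y"}}

du_chains = []
for s, defined in DEF.items():
    for var in defined:
        for s2, used in USE.items():
            if s2 > s and var in used:
                # Live only if var is not redefined strictly
                # between s and s2 (straight-line assumption).
                killed = any(var in DEF.get(k, set())
                             for k in range(s + 1, s2))
                if not killed:
                    du_chains.append((var, s, s2))

print(sorted(du_chains))
```

Each tuple (X, S, S') is a DU chain that a test path should
cover at least once.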

1.4.3 Loop Testing

Loops are the basis of most algorithms implemented in
software. However, we often do not consider them when
conducting testing. Loop testing is a white box testing
approach that concentrates on the validity of loop
constructs. Four classes of loops can be defined: simple
loops, concatenated loops, nested loops, and unstructured
loops.

Simple loops: The following group of tests should be used
on simple loops, where n is the maximum number of allowable
passes through the loop:

Skip the loop entirely.
Only one pass through the loop.
Two passes through the loop.
m passes through the loop, where m < n.
n-1, n, n+1 passes through the loop.
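
The pass counts above can be generated mechanically; this is
a small sketch, with m and n supplied by the tester:

```python
# Pass counts exercising a simple loop with at most n iterations,
# following the guidelines above (m is any value with 2 < m < n-1).
def simple_loop_tests(n, m):
    return [0, 1, 2, m, n - 1, n, n + 1]

print(simple_loop_tests(10, 5))
```

Each value is then used as the number of iterations a test
case drives the loop through.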

[Figure: Simple loop]

Nested loops: For nested loops the number of possible tests
increases as the level of nesting grows, which would result
in an impractical number of tests. An approach that helps
to limit the number of tests:

Start at the innermost loop. Set all other loops to minimum
values.
Conduct simple loop tests for the innermost loop while
holding the outer loops at their minimum iteration parameter
values.
Work outward, performing tests for the next loop, but
keeping all other outer loops at minimum values and other
nested loops to “typical” values.
Continue until all loops have been tested.

[Figure: Nested loops]

Concatenated loops: Concatenated loops can be tested using
the techniques outlined for simple loops, if each of the
loops is independent of the other. When the loops are not
independent the approach applied to nested loops is recommended.

[Figure: Concatenated loops]
Unstructured loops: This class of loop should be redesigned
to reflect the use of the structured programming constructs.

1.5 Black Box Testing

Black box testing approaches concentrate on the functional
requirements of the software. Black box testing allows the
software engineer to produce groups of input situations that
will fully exercise all functional requirements for a
program. Black box testing is not an alternative to white
box techniques; it is a complementary approach that is
likely to uncover a different class of errors than white box
approaches.

Black box testing tries to find errors in the following
categories:
(1) incorrect or missing functions, (2) interface errors,
(3) errors in data structures or external database access,
(4) performance errors, and (5) initialization and
termination errors.

By applying black box approaches we produce a set of test
cases that satisfy the following requirements: (1) test
cases that reduce, by a count greater than one, the number
of additional test cases needed to achieve reasonable
testing, and (2) test cases that tell us something about the
presence or absence of classes of errors.

1.5.1 Equivalence Partitioning

Equivalence partitioning is a black box testing approach
that splits the input domain of a program into classes of
data from which test cases can be produced. An ideal test
case single-handedly uncovers a class of errors that might
otherwise require many test cases to be executed before the
general error is observed. Equivalence partitioning tries
to outline a test case that identifies classes of errors.

Test case design for equivalent partitioning is founded on
an evaluation of equivalence classes for an input condition.
An equivalence class depicts a set of valid or invalid
states for the input condition. Equivalence classes can be
defined based on the following:

If an input condition specifies a range, one valid and two
invalid equivalence classes are defined.
If an input condition needs a specific value, one valid and
two invalid equivalence classes are defined.
If an input condition specifies a member of a set, one valid
and one invalid equivalence class is defined.
If an input condition is Boolean, one valid and one invalid
class are defined.
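
As an illustration of the first rule (a range yields one
valid and two invalid classes), consider a hypothetical
input that must lie in 1..12; one representative value per
class suffices:

```python
# Equivalence classes for a hypothetical "month" field that must
# be an integer in the range 1..12: one valid class and two
# invalid classes (below and above the range).
def equivalence_class(month):
    if month < 1:
        return "invalid: below range"
    if month > 12:
        return "invalid: above range"
    return "valid"

# One representative test value per equivalence class.
for value in (6, 0, 13):
    print(equivalence_class(value))
```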

1.5.2 Boundary Value Analysis

A great many errors happen at the boundaries of the input
domain, and for this reason boundary value analysis was
developed. Boundary value analysis is a test case design
approach that complements equivalence partitioning. BVA
also produces test cases from the output domain.
Guidelines for BVA are close to those for equivalence
partitioning:

If an input condition specifies a range bounded by values a
and b, test cases should be produced with values a and b,
just above and just below a and b, respectively.
If an input condition specifies various values, test cases
should be produced to exercise the minimum and maximum numbers.
Apply guidelines above to output conditions.
If internal program data structures have prescribed
boundaries, produce test cases to exercise that data
structure at its boundary.
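
A sketch of the first guideline for an integer range bounded
by a and b: test at each boundary and just beyond it.

```python
# Boundary value analysis for a range bounded by a and b,
# assuming integer input: values at, just below, and just
# above each boundary.
def bva_values(a, b):
    return [a - 1, a, a + 1, b - 1, b, b + 1]

print(bva_values(1, 12))
```

For a 1..12 range this yields the two invalid neighbours 0
and 13 alongside the boundary values themselves.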

1.5.3 Cause-Effect Graphing Techniques

In too many instances, an attempt to translate a policy or
procedure stated in a natural language into software causes
frustration and problems. Cause-effect graphing is a test
case design approach that offers a concise depiction of
logical conditions and associated actions. The approach has
four stages:

Causes (input conditions) and effects (actions) are listed
for a module and an identifier is allocated to each.
A cause-effect graph is created.
The graph is converted into a decision table.
Decision table rules are converted to test cases.

A simplified version of cause-effect graph symbology is
shown below. The left-hand column of the figure gives the
various logical associations among causes and effects.
The dashed notation in the right-hand column indicates
potential constraining associations that might apply to
either causes or effects.

[Figure: Cause-effect graphing symbology and constraints]

1.5.4 Comparison Testing

Under certain situations the reliability of the software is
critical. In these situations redundant software and
hardware is often used to ensure continuing functionality.
When redundant software is produced, separate software
engineering teams produce independent versions of an
application using the same specification. In this context
each version can be tested with the same test data to ensure
they all produce the same output. These independent
versions are the basis of a black box testing technique
known as comparison testing. Other black box testing
techniques are performed on the separate versions, and if
they produce the same output they are assumed to be
identical. If this is not the case, they are examined
further.

1.6 Testing for Real-Time Systems

The specific characteristics of real-time systems make them
a major challenge when testing. The time-dependent nature
of real-time applications adds a new and difficult element
to testing. Not only does the developer have to look at
black and white box testing, but also the timing of the data
and the parallelism of the tasks. In many situations test
data for a real-time system may produce errors when the
system is in one state but not in others. Comprehensive
test case design methods for real-time systems have not yet
evolved. However, a four-stage approach can be put forward:

Task testing: The first stage is to test each task of the
real-time software independently.
Behavioural testing: Using system models produced with CASE
tools, simulate the behaviour of the real-time system and
examine its actions as a result of external events.
Intertask testing: Once errors in individual tasks and in
system behaviour have been isolated, testing passes to
time-related external events.
Systems testing: Software and hardware are integrated and a
full set of system tests is introduced to uncover errors at
the software and hardware interface.

1.7 Automated Testing Tools

As testing can consume 40% of all effort expended on the
software development process, tools that reduce the time
involved are useful. In response, various researchers have
produced sets of testing tools.

Miller described various categories for test tools:

Static analyzers: These program-analysis systems support
“proving” of static assertions - weak statements about
program architecture and format.
Code auditors: These special-purpose filters are used to
examine the quality of software to ensure that it meets the
minimum coding standards.
Assertion processors: These systems tell whether
programmer-supplied assertions about the program are
actually met.
Test data generators: These processors assist the user with
selecting the appropriate test data.
Output comparators: This tool allows us to contrast one set
of outputs from a program with another set to determine the
difference among them.

Dunn also identified additional categories of automated
tools including:

Symbolic execution systems: This tool performs program
testing using algebraic input, instead of numeric data values.
Environmental simulators: This tool is a specialized
computer-based system that allows the tester to model the
external environment of real-time software and simulate
operating conditions.
Data flow analyzers: This tool tracks the flow of data
through the system and tries to identify data related errors.

2. Software Testing Strategies

A strategy for software testing integrates software test
case design techniques into a well-planned series of steps
that result in the successful construction of software. A
software test strategy provides a road map for the software
developer, the quality assurance organization, and the
customer. Any testing strategy needs to include test
planning, test case design, test execution, and the
collection and evaluation of the resultant data. A software
test strategy should be flexible enough to promote the
creativity and customization that are required to adequately
test all large software-based systems.

2.1 A Strategic Approach to Software Testing

Testing is a group of activities that can be planned in
advance and performed systematically. For this reason a
template for testing - a set of steps into which we can
place particular test case design techniques and test
approaches - should be developed for the software
engineering process. A number of testing strategies have
been identified; all have the following features:

Testing starts at the modular level and works outward
towards the integration of the complete system.
Diverse testing techniques are appropriate at diverse points
in time.
Testing is performed by the developer of the software and an
independent test group.
Testing and debugging are different activities, but
debugging must be included in any testing strategy.

A strategy for testing must include low-level tests that
verify that a small source code segment has been implemented
correctly as well as high-level tests that validate major
system functions based on customer requirements.

2.1.1 Verification and Validation

Software testing is one element of a broader domain that is
known as verification and validation (V&V). Verification
refers to the set of activities that ensure the software
correctly implements a particular function. Validation
refers to a different set of activities that ensure the
software that has been produced is traceable to customer
requirements.

2.1.2 Organizing for Software Testing

For each software project, there is an inherent conflict of
interest that happens as testing starts. The people who
produce the software are asked to test the software.
Unfortunately, these developers have an interest in showing
that the program is error free, that it matches the
customer’s needs, and that it was completed on time and
within budget. The role of an independent test group (ITG)
is to remove the inherent difficulty associated with
allowing the builder to test the things that have been
built. The ITG works with the developer throughout the
project to ensure that the testing carried out is at the
correct level. The ITG is part of the software development
process in that it becomes involved during the specification
stage and stays throughout the project.

2.1.3 A Software Testing Strategy

The software engineering procedure can be seen as a spiral.
Initially systems engineering defines the role of the
software and leads to software requirements analysis, where
the information domain, function, behaviour, performance and
validation criteria for the software are identified. Moving
inwards along the spiral, we come to design and finally
coding.

A strategy for software testing may be to move upward along
the spiral. Unit testing happens at the vertex of the
spiral and concentrates on each unit of the software as
implemented by the source code. Testing progresses upwards
along the spiral to integration testing, where the focus is
on design and the construction of the software architecture.
Finally we perform system testing, where software and other
system elements are tested together.

2.1.4 Criteria for Completion of Testing

A fundamental question in software testing is how we know
when testing is complete. Software engineers need rigorous
criteria for establishing when testing is complete. Musa
and Ackerman put forward an approach based on statistical
modeling that states that we can predict how long a program
will go before failing with a stated probability using a
certain model. Using statistical modeling and software
reliability theory, models of software failure as a function
of execution time can be produced. A version of the failure
model, known as the logarithmic Poisson execution-time
model, takes the form

f(t) = (1/p) ln(l0 p t + 1)

where f(t) = the cumulative number of failures that are
anticipated to happen once the software has been tested for
a particular amount of execution time t

l0 = the initial failure intensity at the start of testing

p = the exponential reduction in failure intensity as errors
are discovered and repairs made.

The instantaneous failure intensity, l(t), can be derived by
taking the derivative of f(t):

l(t) = l0 / (l0 p t + 1)     (a)

Using the relationship noted in equation (a), testers can
estimate the drop-off of errors as testing progresses. The
actual error intensity can be plotted against the estimated
curve. If the actual data gained during testing and the
logarithmic Poisson execution-time model are reasonably
close to one another over a number of data points, the model
can be used to estimate the total test time required to
produce an acceptably low failure intensity.
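
A small sketch of the model, with illustrative (assumed)
parameter values; l0 is the initial failure intensity and p
the exponential reduction in failure intensity:

```python
import math

# Logarithmic Poisson execution-time model (Musa/Ackerman).
# Parameter values below are purely illustrative.
def cumulative_failures(t, l0, p):
    # f(t) = (1/p) ln(l0 p t + 1)
    return (1.0 / p) * math.log(l0 * p * t + 1.0)

def failure_intensity(t, l0, p):
    # l(t) = l0 / (l0 p t + 1), the derivative of f(t)
    return l0 / (l0 * p * t + 1.0)

l0, p = 10.0, 0.05
print(failure_intensity(0, l0, p))    # equals l0 at the start of testing
print(failure_intensity(100, l0, p))  # intensity falls as testing proceeds
```

Fitting l0 and p to observed failure data lets the tester
extrapolate the test time needed to reach a target
intensity.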

2.2 Unit Testing

Unit testing concentrates verification on the smallest
element of the program - the module. Using the detailed
design description, important control paths are tested to
establish errors within the bounds of the module.

2.2.1 Unit test considerations

The tests that are performed as part of unit testing are
shown in the figure below. The module interface is tested
to ensure that information properly flows into and out of
the program unit being tested. The local data structure is
examined to ensure that data stored temporarily maintains
its integrity during all stages of an algorithm’s execution.
Boundary conditions are tested to ensure that the module
performs correctly at boundaries created to limit or
restrict processing. All independent paths through the
control structure are exercised to ensure that all
statements in the module have been executed at least once.
Finally, all error-handling paths are examined.

[Figure: Unit test]

2.2.2 Unit test procedures

Unit testing is typically seen as an adjunct to the coding
step. Once source code has been produced, reviewed, and
verified for correct syntax, unit test case design can
start. A review of design information offers assistance in
determining test cases that should uncover errors. Each
test case should be linked with a set of anticipated
results. As a module is not a stand-alone program, driver
and/or stub software must be produced for each unit test.
In most situations a driver is a “main program” that
receives test case data, passes it to the module being
tested, and prints the results. Stubs act as the
sub-modules called by the module being tested. Unit testing
is made easier if a module has high cohesion.
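
A minimal sketch of a driver and stub, with hypothetical
names: compute_total stands in for the module under test,
and lookup_price_stub replaces a sub-module it would
normally call.

```python
# Stub: returns canned data instead of invoking the real sub-module.
def lookup_price_stub(item):
    prices = {"apple": 2, "pear": 3}
    return prices[item]

# Module under test; the sub-module is passed in so a stub can
# stand in for it during unit testing.
def compute_total(items, lookup_price):
    return sum(lookup_price(item) for item in items)

# Driver: feeds test case data to the module and reports whether
# the actual result matches the anticipated result.
result = compute_total(["apple", "pear", "apple"], lookup_price_stub)
print(result == 7)
```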

2.3 Integration Testing

Once all the individual units have been tested, there is a
need to test how they are put together to ensure that no
data is lost across interfaces, that one module does not
have an adverse impact on another, and that functions are
performed correctly. Integration testing is a systematic
approach that produces the program structure while at the
same time producing tests to identify errors associated
with interfacing.

2.3.1 Top-Down Integration

Top-down integration is an incremental approach to the
construction of program structure. Modules are integrated
by moving downwards through the control hierarchy, starting
with the main control module. Modules subordinate to the
main control module are incorporated into the structure in
either a depth-first or breadth-first manner. Referring to
the figure below, depth-first integration would integrate
the modules on a major control path of the structure.
Selection of a major path is arbitrary and relies on
application-specific features. For instance, selecting the
left-hand path, modules M1, M2 and M5 would be integrated
first. Next M8 or M6 would be integrated. Then the central
and right-hand control paths are built. Breadth-first
integration incorporates all modules directly subordinate at
each level, moving across the structure horizontally. From
the figure, modules M2, M3 and M4 would be integrated first.
The next control level, M5, M6 etc., follows.

[Figure: Top-down integration program structure]
The integration process is performed in a series of five
stages:

1. The main control module is used as a test driver and
stubs are substituted for all modules directly subordinate
to the main control module.
2. Depending on the integration technique chosen,
subordinate stubs are replaced one at a time with actual
modules.
3. Tests are conducted as each module is integrated.
4. On the completion of each group of tests, another stub is
replaced with the real module.
5. Regression testing may be performed to ensure that no new
errors have been introduced.

2.3.2 Bottom-up Integration

Bottom-up integration testing begins testing with the
modules at the lowest level (atomic modules). As modules
are integrated bottom up, processing required for modules
subordinate to a given level is always available and the
need for stubs is eliminated.

A bottom-up integration strategy may be implemented with
the following steps:

1. Low-level modules are combined into clusters that perform
a particular software subfunction.
2. A driver is written to coordinate test case input and
output.
3. The cluster is tested.
4. Drivers are removed and clusters are combined, moving
upward in the program structure.

2.3.3 Comments on Integration Testing

There has been much discussion on the advantages and
disadvantages of bottom-up and top-down integration testing.
Typically, a disadvantage of one is an advantage of the
other approach. The major disadvantage of top-down
approaches is the need for stubs and the difficulties that
are linked with them. Problems linked with stubs may be
offset by the advantage of testing major control functions
early. The major drawback of bottom-up integration is that
the program does not exist as an entity until the last
module is added.

2.4 Validation Testing

As a culmination of testing, software is completely
assembled as a package, interfacing errors have been
identified and corrected, and a final series of software
tests - validation testing - begins. Validation can be
defined in various ways, but a basic one is that validation
succeeds when the software functions in a fashion that can
reasonably be expected by the customer.

2.4.1 Validation test criteria

Software validation is achieved through a series of black
box tests that show conformity with requirements. A test
plan provides the classes of tests to be performed and a
test procedure sets out particular test cases that are to be
used to show conformity with requirements.

2.4.2 Configuration review

An important element of the validation process is a
configuration review. The role of the review is to ensure
that all the components of the software configuration have
been properly developed, are catalogued and have the
required detail to support the maintenance phase of the
software lifecycle.

2.4.3 Alpha and Beta testing

It is virtually impossible for a developer to determine how
the customer will actually use the program. When custom
software is produced for a customer, a set of acceptance
tests is performed to allow the user to check all
requirements. Conducted by the end user instead of the
developer, an acceptance test can range from an informal
test drive to a rigorous set of tests. Most developers use
alpha and beta testing to identify errors that only users
seem to be able to find. Alpha testing is performed at the
developer’s site, with the developer checking over the
customer’s shoulder as they use the system to determine
errors. Beta testing is conducted at one or more customer
locations with the developer not being present. The
customer reports any problems they encounter to allow the
developer to modify the system.

2.5 System Testing

Ultimately, software is incorporated with other system
components and a set of system validation and integration
tests is performed. Steps performed during software design
and testing can greatly improve the probability of
successful software integration in the larger system.
System testing is a series of different tests whose main aim
is to fully exercise the computer-based system. Although
each test has a different role, all work to verify that all
system elements have been properly integrated and perform
allocated functions. Below we consider various system tests
for computer-based systems.

2.5.1 Recovery Testing

Many computer-based systems need to recover from faults and
resume processing within a particular time. In certain
cases, a system needs to be fault-tolerant. In other cases,
a system failure must be corrected within a specified period
of time or severe economic damage will result. Recovery
testing is a system test that forces the software to fail in
a variety of ways and verifies that recovery is properly
performed.
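A minimal sketch of the idea, assuming a hypothetical service wrapper with explicit crash and restart hooks (all names are invented for illustration): the test injects a fault, triggers recovery, and verifies both that the service came back and that it did so within the specified time.

```python
import time

# Hypothetical service under test: fails on demand, recovers by restarting.
class Service:
    def __init__(self):
        self.alive = True

    def crash(self):      # fault-injection point used by the test
        self.alive = False

    def restart(self):    # recovery action whose correctness we verify
        self.alive = True

def recovery_test(service, max_recovery_seconds=1.0):
    """Force a failure, trigger recovery, and check it completes in time."""
    service.crash()
    start = time.monotonic()
    service.restart()
    elapsed = time.monotonic() - start
    return service.alive and elapsed <= max_recovery_seconds
```

In a real system the crash would be induced externally (killing a process, disconnecting a network link) and recovery might be automatic, but the pass criterion is the same: correct state plus bounded recovery time.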

2.5.2 Security Testing

Any computer-based system that manages sensitive information
or causes actions that can improperly harm individuals is a
target for improper or illegal penetration. Security
testing tries to verify that the protection mechanisms built
into a system will protect it from improper penetration.
During security testing, the tester plays the role of the
individual who wants to break into the system. The tester
may try to acquire passwords through external clerical
means, attack the system with custom software, or purposely
provoke errors in the hope of finding a key to system entry.
The role of the designer is to make penetration cost more
than the value of whatever could be gained.
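One concrete protection mechanism that security testing commonly probes is account lockout after repeated failed logins. The sketch below is a hypothetical, simplified login interface (not any real authentication API); the test plays the attacker, exhausts the attempt budget, and then verifies that even the correct password is refused:

```python
# Hypothetical login with a lockout policy: after three failed attempts,
# the account must refuse all further logins, even with the right password.
class Login:
    def __init__(self, password, max_attempts=3):
        self._password = password
        self._failures = 0
        self._max = max_attempts

    def attempt(self, guess):
        if self._failures >= self._max:
            return False              # locked out: reject unconditionally
        if guess == self._password:
            return True
        self._failures += 1
        return False

def lockout_test():
    """Play the attacker: burn the attempt budget, then verify lockout."""
    login = Login("s3cret")
    for guess in ("aaa", "bbb", "ccc"):   # three wrong guesses
        assert login.attempt(guess) is False
    # Even the correct password must now be rejected.
    return login.attempt("s3cret") is False
```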

2.5.3 Stress Testing

Stress testing executes a system in a manner that demands
resources in abnormal quantity, frequency, or volume. A
variation of stress testing is an approach called
sensitivity testing: in some situations a very small range
of data, contained within the bounds of valid data for a
program, may cause extreme and even erroneous processing or
profound performance degradation.
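As a minimal illustration of driving a component with abnormal volume (the bounded queue here is a stand-in for any real subsystem), a stress test checks that the system survives a load far beyond its capacity and still honors its stated bound rather than crashing:

```python
from collections import deque

# Hypothetical stress test: flood a bounded queue with a volume far beyond
# its capacity and confirm it degrades gracefully instead of failing.
def bounded_queue_stress(capacity=1000, load=100_000):
    q = deque(maxlen=capacity)    # silently drops oldest items when full
    for i in range(load):         # abnormal volume relative to capacity
        q.append(i)
    # The system must survive and still respect its stated bound.
    return len(q) == capacity
```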

2.6 The Art of Debugging

Debugging happens as a result of testing. When a test case
uncovers an error, debugging is the process that causes the
removal of that error.

2.6.1 The Debugging Process

Debugging is not testing, but always occurs as a consequence
of testing. The debugging process will have one of two
outcomes: (1) The cause will be found, corrected and
removed, or (2) the cause will not be found. Why is
debugging difficult?

The symptom and the cause are geographically remote.
The symptom may disappear when another error is corrected.
The symptom may actually be the result of non-errors (e.g.
round-off inaccuracies).
The symptom may be caused by a human error that is not easy
to find.
The symptom may be intermittent.
The symptom may be due to causes that are distributed
across a number of tasks running on different processors.

2.6.2 Psychological Considerations

There is some evidence that debugging ability is an innate
human trait: some people are good at it and others are not.
Although experimental evidence on debugging is open to many
interpretations, large variations in debugging ability have
been identified in software engineers with the same
experience.

2.6.3 Debugging Approaches

Regardless of the approach that is used, debugging has one
main aim: to determine and correct errors. The aim is
achieved by using systematic evaluation, intuition, and good
fortune. In general, three kinds of debugging approaches
have been put forward: brute force, backtracking, and cause
elimination.

Brute force is probably the most popular despite being the
least successful. We apply brute force debugging methods
when all else fails. Using a “let the computer find the
error” technique, memory dumps are taken, run-time traces
are invoked, and the program is loaded with WRITE
statements. Backtracking is a common debugging method that
can be used successfully in small programs. Beginning at
the site where a symptom has been uncovered, the source code
is traced backwards until the error is found. In cause
elimination, a list of possible causes of an error is
identified, and tests are conducted until each one is
eliminated.
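Cause elimination can be sketched as a bisection over an ordered list of candidate causes, the idea behind tools such as git bisect (the change list and failure oracle below are hypothetical): each test halves the candidate set until one cause remains.

```python
# Cause elimination by bisection: given an ordered list of changes, one of
# which introduced a bug, repeatedly halve the candidates until one remains.
def bisect_cause(changes, is_broken):
    """is_broken(i) reports whether the system fails after change i."""
    lo, hi = 0, len(changes) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if is_broken(mid):
            hi = mid          # the cause is at mid or earlier
        else:
            lo = mid + 1      # the cause is after mid
    return changes[lo]

# Usage: suppose change 6 of 10 introduced the error.
changes = list(range(10))
culprit = bisect_cause(changes, lambda i: i >= 6)
```

Where brute force might require inspecting every change, bisection isolates the cause in a logarithmic number of tests.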

2.7 Conclusion

Software testing accounts for a large percentage of effort
in the software development process, but we have only
recently begun to understand the subtleties of systematic
planning, execution and control.
