1. Test Strategy Identifier
The unique identifier for this Test Strategy is: <Test
Strategy ID>
2. Introduction
<Your audience may not know of the service that you have put this strategy in place for, so provide a brief narrative introduction to the product or service offering. Consider including product history, reasons for the introduction or changes, the expected outcome of the changes, who might use it, and the benefits of using the new or enhanced product>
2.1. Purpose
The purpose of this Test Strategy is to define the overall
approach that will be taken by the QA Team when delivering
testing services to all of the projects within the business.
The document helps to clarify the testing activities, roles and responsibilities, and the processes and practices to be used across successive projects.
Where a project’s testing needs deviate from what is covered by this Test Strategy, the exceptions will be detailed in the Test Plan.
3. Test Items
For each Release the QA Engineer will create a table of
Test Items that will be in scope of the testing being
planned. These will be identified from the Scope Items in a
given Release and include interrelated modules and
components of the service that will be affected by the
Scope Items.
In addition, the QA Engineer will record any Test Items that cannot be tested by the test team. The Test Plan will therefore contain both In-Scope and Out-of-Scope Test Items.
4. Test Objectives
Describe the objective of testing. Testing should ensure that future business processes, together with the enabling technology, provide the expected business benefits.
Testing objectives could include:
• Verify products against their requirements (i.e. was the product built right?)
• Validate that the product performs as expected (i.e. was the right product built?)
• Ensure system components and business processes work end-to-end
• Build a test model that can be used on an ongoing basis
• Identify and resolve issues and risks.
5. Identify Test Types
Describe the types of tests to be conducted to verify that
requirements have been met and to validate that the
system performs satisfactorily. Consider the types of tests
in the table below:
Type of Test | Definition
Unit Testing | Testing conducted to verify the implementation of the design for one software element (e.g., a unit or module)
Integration Testing | An orderly progression of testing in which software elements, hardware elements, or both are combined and tested until the entire system is integrated and tested
System Testing | The process of testing an integrated hardware and software system to verify that the system meets its specified requirements
Acceptance Testing | Formal testing conducted to determine whether or not a system satisfies its acceptance criteria and to enable the customer to decide whether or not to accept the system
Performance Testing | Performed to confirm that the system meets performance goals such as turnaround times, maximum delays, and peak performance
Volume Testing | Tests the system to verify that it can handle an expected volume profile
Stress Testing | Tests the entire system to find the limits of performance
Configuration Testing | Tests the product over all the possible configurations on which it is supposed to run
Operational Readiness Testing | Tests the system to find defects that would prevent installation and deployment by the users
Data Conversion and Load Testing | Performed to verify the correctness of automated or manual conversions and/or loads of data in preparation for implementing the new system
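As an illustration of the first row, here is a minimal unit test sketch in Python's built-in unittest framework. The calculate_discount function and its business rule are hypothetical, invented purely to show one software element being verified against its design.

```python
import unittest

def calculate_discount(order_total):
    """Hypothetical unit under test: 10% discount on orders over 100."""
    if order_total < 0:
        raise ValueError("order total cannot be negative")
    return order_total * 0.9 if order_total > 100 else order_total

class CalculateDiscountTest(unittest.TestCase):
    def test_discount_applied_above_threshold(self):
        self.assertAlmostEqual(calculate_discount(200), 180.0)

    def test_no_discount_at_or_below_threshold(self):
        self.assertEqual(calculate_discount(100), 100)

    def test_negative_total_rejected(self):
        with self.assertRaises(ValueError):
            calculate_discount(-1)

if __name__ == "__main__":
    unittest.main()
```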
6. Scope of Testing
Describe the scope of testing. Consider the following when
defining scope:
• Test both business processes and the technical
solution
• Specify regions and sub-regions included in testing
• Identify interfaces with other projects
• Identify interfaces with external entities such as dealers, suppliers, and joint ventures
7. Test Preparation and Execution Process
7.1 Test Preparation
Describe the steps for preparing for testing. The purpose of Test Preparation is to verify that requirements are understood and to prepare for Test Execution. Steps for Test Preparation may include the following (a sketch of one way to record the outputs appears after the list):
• Identify test cases
• Identify test cycles
• Identify test data
• Develop expected results
• Develop test schedule (may be done as part of the Test Plan)
• Obtain signoff
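One lightweight way to record the outputs of these steps is a table-driven structure that pairs each identified test case with its test data, expected result and assigned cycle. A minimal sketch in Python follows; the case IDs, the login feature and all field values are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class PreparedTestCase:
    case_id: str      # hypothetical ID scheme, e.g. "TC-LOGIN-001"
    description: str
    test_data: dict   # inputs identified during Test Preparation
    expected: str     # expected result developed before execution
    cycle: int = 1    # test cycle the case is assigned to

# Hypothetical prepared cases for a login feature
prepared_cases = [
    PreparedTestCase("TC-LOGIN-001", "Valid credentials log the user in",
                     {"user": "alice", "password": "correct"}, "dashboard shown"),
    PreparedTestCase("TC-LOGIN-002", "Wrong password is rejected",
                     {"user": "alice", "password": "wrong"}, "error message shown"),
]

for case in prepared_cases:
    print(f"{case.case_id} (cycle {case.cycle}): expect '{case.expected}'")
```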
7.2 Test Execution
Describe the steps for executing tests. The purpose of Test Execution is to execute the test cycles and test cases created during the Test Preparation activity, compare actual results to expected results, and resolve any discrepancies. Steps for Test Execution may include the following (a sketch appears after the list):
• Verify entry criteria
• Conduct tests
• Compare actual results to expected results
• Investigate and resolve discrepancies
• Conduct regression test
• Verify exit criteria
• Obtain signoff
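The compare-and-record loop at the heart of these steps can be made mechanical. Below is a sketch, assuming each case is a dict with id, data and expected keys; run_step and the fake system under test are hypothetical stand-ins for whatever actually drives the product (manual entry, Selenium, an API client).

```python
def execute_cycle(cases, run_step):
    """Execute prepared cases and compare actual to expected results."""
    results, discrepancies = {}, []
    for case in cases:
        try:
            actual = run_step(case["data"])
        except Exception as exc:               # environment or setup failure
            results[case["id"]] = "Blocked"
            discrepancies.append((case["id"], f"blocked: {exc}"))
            continue
        if actual == case["expected"]:
            results[case["id"]] = "Passed"
        else:
            results[case["id"]] = "Failed"
            discrepancies.append(
                (case["id"], f"expected {case['expected']!r}, got {actual!r}"))
    return results, discrepancies

# Hypothetical usage with a fake system under test
def fake_system(data):
    return "dashboard shown" if data["password"] == "correct" else "error shown"

cases = [
    {"id": "TC-LOGIN-001", "data": {"password": "correct"}, "expected": "dashboard shown"},
    {"id": "TC-LOGIN-002", "data": {"password": "wrong"}, "expected": "error shown"},
]
results, discrepancies = execute_cycle(cases, fake_system)
print(results, discrepancies)
```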
8. Test Data Management
Describe the approach for identifying and managing test data. Consider the following guidelines (a sketch of the first two appears after the list):
• System and user acceptance tests – a subset of production data could be used to initialize the test environment. Because the focus of these tests is to simulate the production environment and validate business transactions, data integrity is extremely critical.
• Performance/volume/stress test – full-size production files should be used to test the performance and volume aspects of the test. Additional ‘dummy’ data will be created to stress the system. Data integrity is not critical, as the test focuses on performance rather than the ability to conduct business transactions.
• Operational readiness test – a copy of system/user
acceptance test data could be used for the operational
readiness test. Since the focus of the test is on
operational procedures, a low number of transactions will
be required and data integrity is not critical.
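The first two guidelines lend themselves to simple tooling: draw a referentially intact subset of production rows for system/user acceptance testing, and pad with generated 'dummy' records for volume and stress runs. A sketch follows; the customers/orders tables and all column names are hypothetical.

```python
import random
import string

def subset_production(customers, orders, sample_size):
    """Sample customers plus all of their orders, keeping referential
    integrity intact (critical for system and user acceptance tests)."""
    sampled = random.sample(customers, min(sample_size, len(customers)))
    ids = {c["id"] for c in sampled}
    return sampled, [o for o in orders if o["customer_id"] in ids]

def generate_dummy_customers(count, start_id):
    """Generate throwaway records to stress volume; the data itself need
    not be meaningful for performance-style tests."""
    return [{"id": start_id + i,
             "name": "".join(random.choices(string.ascii_lowercase, k=8))}
            for i in range(count)]

# Hypothetical usage
customers = [{"id": i, "name": f"cust{i}"} for i in range(1000)]
orders = [{"order_id": i, "customer_id": random.randrange(1000)}
          for i in range(5000)]
uat_customers, uat_orders = subset_production(customers, orders, 50)
stress_set = customers + generate_dummy_customers(100_000, start_id=1000)
print(len(uat_customers), len(uat_orders), len(stress_set))
```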
9. Features to be tested
The QA Engineer will use the Test Breakdown worksheet
(ref#) to record all of the features to be tested for each
of the Test Items in scope.
The Test Breakdowns will include details of the Test
Scenarios from which the Test Cases will be derived.
10. Features not to be tested
<What features would we usually not test in a project?
Security, Accessibility?>
Where it is not possible for the team to test features of a Test Item that would otherwise be expected to fall under the scope of testing shown in section 15 (Testing Tasks), this will be recorded in section 5 of the Test Plan.
11. Approach
All testing tasks will be conducted in line with the
Software Test Life Cycle (STLC) and in support of the
Software Development Life Cycle (SDLC). The documents used
within the SDLC will be completed both by the QA Team and
the project participants that are responsible for providing
information and deliverables to QA.
It should be decided at the start of the project whether there will be a Post Implementation Review after project delivery; if so, it should be conducted within two weeks of project completion.
<Narrative description of the high level Strategy>
<A description of the test methodology that will be
followed>
<Include a diagram where possible – V-Model, Scrum,
Waterfall>
<Touch on risks, discuss the use of test equipment or data, etc. Use this paragraph to tone down the formality of the rest of the document and set the scene for the rest of the plan; look to add value for the audience in the approach and their support of it. Touch on anything that has no clear area for inclusion in the Test Plan>
11.1. Analysis & Planning Phase Entry Criteria
For all projects the following criteria need to be met
before the Test Items are accepted into the Analysis &
Planning Phase:
• Release scope item list is locked and prioritised
• Documentation defining the scope items is approved and at release status
• All documents are under change control processes
11.2. Analysis & Planning Phase Exit Criteria
For the Analysis & Planning phase to be completed and allow
items to move into the Test Phase the following criteria
need to be achieved:
• Test Breakdowns and Test Cases are written and peer
reviewed
• Knowledge Share document has been completed and
reviewed by the QA Engineers
• Walkthrough and sign-off completed for the Test
Plan and Test Breakdowns
• Defined Test Estimate has been published and agreed
• The list of features in the Test Breakdown has been prioritised.
11.3. Test Phase Entry Criteria
Before Test Items are made available for QA to test, it is expected that:
• The Test Item Transmittal Report will be completed
• All test tools and test infrastructure are available for use during testing
• All Test Items are development complete
• The correct versions of the code have been deployed to the correct test environments
• Sanity and Unit tests have been completed successfully to demonstrate readiness for test
• All Test Cases have been prepared and reviewed
• The test environment has been established
• The build has been received from Development.
11.4. Test Phase Exit Criteria
For the Test Items to exit testing the following conditions
will have to be met:
• The Test Summary Report will be completed.
• All planned testing activities have been completed to agreed levels.
• All high-priority bugs have been fixed, retested and passed.
• No defects remain in an open, unresolved state.
11.5. Change Management
The Build Manager will ensure that once testing begins no
changes or modifications are made to the code used to
create the build of the product under test. The Build
Manager will inform QA of the version against which testing will begin and confirm the location within [VSS/Progress/Perforce/Subversion] from which the build is to be taken.
If changes or modifications are necessary through bug
resolution or for any other reason the Build Manager will
inform QA prior to the changes being made.
11.6. Notification / Escalation Procedures
The following diagram shows the notification and escalation
paths to be followed for the duration of the project Test
Phase.
11.7. Measures and Metrics
At the Initiation Phase of the project the QA Team will publish a set of measures and metrics related to the test activities of the Planning & Analysis and Execution phases. The Test Plan also defines the milestone dates for key deliverables (such as the Test Plan itself), and these are captured as metrics for ongoing statistical process analysis across successive projects. A sketch of how some of these figures might be derived appears after the lists below.
Test Preparation
• Number of Test Scenarios vs. Number of Test Cases
• Number of Test Cases Planned vs. Ready for Execution
• Total Time Spent on Preparation vs. Planned Time
Test Execution and Progress
• Number of Test Cases Executed vs. Test Cases Planned
• Number of Test Cases Passed, Failed and Blocked
• Total Number of Test Cases Passed by Test Item / Test Requirements
• Total Time Spent on Execution vs. Planned Time
Bug Analysis
• Total Number of Bugs Raised and Closed per Test Run
• Total Number of Bugs Closed vs. Total Number of Bugs Re-Opened
• Bug Distribution Totals by Severity per Test Run
• Bug Distribution Totals by Test Item by Severity per Test Run
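To show how the execution-progress figures above might be derived from raw results, here is a small sketch; the status values and the results mapping are hypothetical stand-ins for whatever the test case management tool exports.

```python
from collections import Counter

def execution_metrics(results, planned_total):
    """Summarise progress from a mapping of test case ID -> status."""
    counts = Counter(results.values())
    executed = counts["Passed"] + counts["Failed"] + counts["Blocked"]
    pass_rate = round(100 * counts["Passed"] / executed, 1) if executed else 0.0
    return {
        "executed_vs_planned": f"{executed}/{planned_total}",
        "passed": counts["Passed"],
        "failed": counts["Failed"],
        "blocked": counts["Blocked"],
        "pass_rate_pct": pass_rate,
    }

print(execution_metrics(
    {"TC-001": "Passed", "TC-002": "Failed",
     "TC-003": "Passed", "TC-004": "Blocked"},
    planned_total=10))
```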
12. ‘Pass/Fail’ Criteria
Each Test Item will be assigned a Pass or Fail state dependent on two criteria:
• The total number and severity of bugs in an Open & Unresolved state within Bugzilla/Bug Tracker.
• The level of successfully executed test requirements.
The combination of both criteria will be used to determine whether the Test Item can be declared Test Complete. However, as this represents the minimum level of quality believed achievable, it is recommended that, where project timescales allow, further testing and development be conducted to raise the overall quality level.
Table of Issue Severity

Severity | Definition | Maximum Allowable
S1 | Crash/Legal: system crash, data loss, no workaround, legal issue, ship killer | 0
S2 | Major: operational error, wrong result | <Set by PM>
S3 | Minor: minor problems | <Set by PM>
S4 | Incidental: cosmetic problems | <Set by PM>
S5 | N/A: not applicable; used for feature requests and Development Tasks | Reference Only

'Maximum Allowable' is the total MAXIMUM number of issues recorded in Bugzilla / Bug Tracker that can remain in an Open & Unresolved state for the Test Item and still be acceptable for release.
Table of Test Scenario Priority

Priority | Definition | Minimum Pass Rate
P1 – Critical | Essential to the Product | 100%
P2 – Important | Necessary to the Product | <Set by PM>
P3 – Desirable | Preferred, but not essential to the Product | <Set by PM>

'Minimum Pass Rate' defines the MINIMUM set of Test Scenarios that must pass before the Test Item can be considered for release.
Unforeseen issues arising during the Test Phase may impact
the agreed ‘Pass/Fail’ Criteria for the Test Item. Issues
can be managed through review with the QA Team and the
project authorities.
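The two criteria combine naturally into a mechanical release gate. The sketch below is illustrative only: the severity caps and minimum pass rates are hypothetical stand-ins for the <Set by PM> values in the tables above.

```python
# Illustrative stand-ins for the <Set by PM> values in the tables above;
# S5 is reference only and therefore carries no cap.
MAX_OPEN_BUGS = {"S1": 0, "S2": 2, "S3": 10, "S4": 20}
MIN_PASS_RATE = {"P1": 100.0, "P2": 90.0, "P3": 50.0}

def test_item_gate(open_bugs_by_severity, pass_rate_by_priority):
    """Return (passed, reasons) for a Test Item against both criteria."""
    reasons = []
    for sev, cap in MAX_OPEN_BUGS.items():
        open_count = open_bugs_by_severity.get(sev, 0)
        if open_count > cap:
            reasons.append(f"{sev}: {open_count} open bugs exceeds cap of {cap}")
    for pri, minimum in MIN_PASS_RATE.items():
        rate = pass_rate_by_priority.get(pri, 0.0)
        if rate < minimum:
            reasons.append(f"{pri}: pass rate {rate}% below minimum {minimum}%")
    return (not reasons), reasons

print(test_item_gate({"S1": 0, "S2": 1, "S3": 4},
                     {"P1": 100.0, "P2": 95.0, "P3": 60.0}))
```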
13. Suspension Criteria & Resumption Requirements
Testing of Test Items will be suspended if there are problems in the test environment, a show-stopper is detected in the build, or too many defects remain open. The specific criteria, and the requirements for resuming testing, are:
1a) Suspension criteria:
A Severity 1 issue is logged and requires fixing before
further testing can take place (a Blocking Issue)
1b) Resumption requirement:
The issue will need to be fixed before the Test Item is
returned to QA for testing.
2a) Suspension criteria:
Significant differences exist between the observed behaviour of the Test Item and that described in the Test Scenario or Test Case, or that expected from the previous version of the technology.
2b) Resumption requirement:
Development, QA and PM must agree on how to resolve the issue and on a definition of the expected behaviour.
3a) Suspension criteria:
A Test Item sent for testing fails more than 20% of
Developer Unit Tests.
3b) Resumption requirement:
The Test Item must be fixed, or the Unit Tests refactored if out of date, and then demonstrated to pass with a failure rate below 20%.
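Criterion 3's 20% threshold is easy to check mechanically. A sketch, assuming the developer unit-test pass/fail counts are reported alongside the build:

```python
def unit_test_gate(passed, failed, threshold=0.20):
    """Suspend intake if more than `threshold` of developer unit tests fail."""
    total = passed + failed
    if total == 0:
        return False, "no unit test results supplied"
    rate = failed / total
    if rate > threshold:
        return False, f"{rate:.0%} failure rate exceeds {threshold:.0%}: suspend"
    return True, f"{rate:.0%} failure rate within limit: accept for test"

print(unit_test_gate(passed=75, failed=25))   # 25% failures -> suspend
print(unit_test_gate(passed=90, failed=10))   # 10% failures -> accept
```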
14. Test Deliverables
The following artefacts will be produced during the testing
phase:
• Test Plan
Prescribes the scope, approach, resources, and schedule of the testing activities; identifies the items being tested, the features to be tested, the testing tasks to be performed, the personnel responsible for each task, and the risks associated with the plan.
• Test Schedule
This describes the tasks, time, sequence, duration and
assigned staff.
• Test Breakdown
Includes the Test Scenarios, their priority and the related number of Test Cases, along with the defined estimates of the time needed to write and execute the Test Cases.
• Test Cases
Detail the pre-conditions, test steps, and expected and actual outcomes of the tests. There will be both positive and negative test cases.
• Periodic progress and metric update reports
• Bug Reporting
• Test Summary Reports
15. Testing Tasks
The Testing Tasks that the QA Team will deliver cover the
following scope:
• Fully In Scope: Functional and Regression Testing
• Partially in Scope: Cross Browser Compatibility,
Integration in the Large.
• Out of Scope: Performance Testing, Automated Regression, all other forms of Non-Functional Testing, Accessibility Compliance Testing, Security Testing, User Documentation Review.
16. Environmental and Infrastructure Needs
The following details the environmental and infrastructure needs required for the testing of lastminute.com Test Items and the execution of Regression Testing.
Hardware
• Integration Environment:
• QA-A: http://.....
• QA-B: http://....
• Pre-live Staging:
Software
• <Name of Bug Tracking Tool>: http://...
• <Name of Test Case Management Tool>: http://
Infrastructure
• Network connections are available on all Test
Systems as required.
Test Repository
• http://...
17. Responsibility Matrix
The table below outlines the main responsibilities for test activities:

Activity | Product Manager | Development Manager | Test Manager | Test Engineer
Provision of Technical Documents | X | X | |
Test Planning and Estimation | | | X | X
Review and Sign off Test Plan | X | X | X |
Testing Documentation | | | X | X
Test Preparation and Execution | | | | X
Test Environment Set-up | | | | X
Change Control of Test Environments | | X | X |
Provision of Unit Tested Test Items | | X | |
Bug fixes and return to QA for re-test | | X | |
Product Change Control | X | X | X |
Ongoing Test Reporting | | | X | X
Test Summary Reporting | | | X |
18. Staffing and Training Needs
Staffing
Staffing levels for the test activities will be:
• 1 x Test Manager for the duration of test planning
at 50% effort against plan.
• The required number of QA Engineers for the
duration of test execution at 100% effort against plan.
Training
For each project the training needs will be assessed and
defined in the Test Plan.
19. Schedules and Resource Plans
Team Plan
The QA Team will maintain a Team Plan which records
individual assignment to testing tasks against assignable
days. It will also record time planned and delivered against the tasks, which will be used to update relevant Project Schedules and in periodic reporting.
Test Schedule
The Test Schedule for the Release will be located within
<Document Store Name> at: http://
20. Risks and Contingencies
# | Risk | Mitigation Strategy | Impact
1 | Delays in delivering completed Test Items from Development would impact test timescales and final Release quality | Product Management and Development to advise of any delays and adjust Release Scope or Resources to allow the test activities to be performed | High
2 | Delays in the turnaround time for fixing critical bugs, which would require re-testing, could impact the project dates | Strong management of bug resolution would be required from Development to ensure bugs are fixed and available for re-testing in the scheduled time | High
3 | The QA, Development or PM teams require domain guidance from one another and are not available, delaying project activities | QA, Development and PM teams to ensure they are available at critical points or contactable during the project activities | Medium
4 | Features of Test Items will not be testable | QA will record untested features and request the PM to assess business risk in support of the release of untested features | Low
5 | Unexpected dependencies between Test Items and service components are encountered that require revision of Test Scenarios and related Test Cases | Information about dependencies is updated and communicated promptly to allow timely revision of Test Scenarios and Test Cases | Low
21. Approvals
The following people are required to approve the Test Strategy:

Approval By | Approval
Test Manager |
QA Department Manager |
Product Owner |
Development Manager |
Project Manager |