What is a test strategy?
Answers were Sorted based on User's Feedback
Answer / murali
It is a company-level document developed by quality
assurance managers or quality analysts. It
defines the testing approach needed to reach the
organisation's standards. While preparing the test strategy
document, QA people concentrate on the factors below:
1. Scope and objectives
2. Budget control
3. Testing approach
4. Test deliverables
5. Roles and responsibilities
6. Communication and status reporting
7. Automation tools (if needed)
8. Testing measurements
9. Risks and mitigations
10. Change and configuration management
11. Training plan
| Is This Answer Correct ? | 94 Yes | 18 No |
Answer / punith
A test strategy is a project-level document which provides
various test plan details with reference to cost, customer
requirements, organisational goals and time. The strategy
also specifies at which stage each inspection takes place.
| Is This Answer Correct ? | 15 Yes | 6 No |
Answer / varun
“How we plan to cover the product so as to
develop an adequate assessment of quality.”
A good test strategy is:
– Specific
– Practical
– Justified
The purpose of a test strategy is to clarify
the major tasks and challenges of the test
project.
Inputs:
1. A description of the required hardware and software
components, test tools, test environment and test tool data.
2. Testing methods, types, models and techniques.
3. Functional and technical requirements of the application.
This information comes from the requirements, change
requests and functional design documents.
4. A description of the roles and responsibilities of
resources. This information is gathered from man-hour
estimates and schedules.
Outputs:
1. An approved and signed-off test strategy document and
test plan, including test cases.
2. Testing issues requiring solutions.
| Is This Answer Correct ? | 7 Yes | 2 No |
Answer / priyanka agrawal
It is created to inform project managers, testers, and
developers about some key issues of the testing process.
This includes the testing objective, methods of testing new
functions, total time and resources required for the
project, and the testing environment. Test strategies
describe how the product risks of the stakeholders are
mitigated at the test level, which types of test are to be
performed, and which entry and exit criteria apply. They are
created based on development design documents.
| Is This Answer Correct ? | 0 Yes | 0 No |
Answer / saranya
A test strategy is a project-level document which is written
during the requirement phase of the SDLC. It contains the
test environment, how many people are involved, risk
management, and more.
| Is This Answer Correct ? | 5 Yes | 9 No |
Answer / sirish
1. Test Strategy Identifier
The unique identifier for this Test Strategy is: <Test
Strategy ID>
2. Introduction
<Your audience may not know of the Service that you have
put this strategy in place for, provide a brief narrative
introduction to the product or service offering. Consider
including product history, reasons for introduction or
changes, expected outcome of the changes, who might use it
and the benefits of them using the new or enhanced product>
2.1. Purpose
The purpose of this Test Strategy is to define the overall
approach that will be taken by the QA Team when delivering
testing services to all of the projects within the business.
The document helps to clarify the testing activities, roles
and responsibilities, processes and practice to be used
across successive projects.
Where a project’s testing needs deviate from what is
covered by this Test Strategy the exceptions will be
detailed in the Test Plan.
3. Test Items
For each Release the QA Engineer will create a table of
Test Items that will be in scope of the testing being
planned. These will be identified from the Scope Items in a
given Release and include interrelated modules and
components of the service that will be affected by the
Scope Items.
In addition the QA Engineer will record any Test Items that
cannot be tested by the test team. The Test Plan will
contain Test Items that are In-Scope and Out-of-Scope.
4. Test Objectives
Describe the objectives of testing. Testing should ensure
that future business processes, together with the enabling
technology, provide the expected business benefits.
Testing objectives could include:
• Verify products against their requirements (i.e.
was the product built right?)
• Validate that the product performs as expected
(i.e. was the right product built?)
• Ensure system components and business processes
work end-to-end
• Build a test model that can be used on an ongoing
basis
• Identify and resolve issues and risks.
5. Identify Test Types:
Describe the types of tests to be conducted to verify that
requirements have been met and to validate that the
system performs satisfactorily. Consider the types of tests
in the table below:
Type of test – Definition
Unit Testing – Testing conducted to verify the
implementation of the design for one software element
(e.g. a unit or module).
Integration Testing – An orderly progression of testing
in which software elements, hardware elements, or both are
combined and tested until the entire system is integrated
and tested.
System Testing – The process of testing an integrated
hardware and software system to verify that the system
meets its specified requirements.
Acceptance Testing – Formal testing conducted to
determine whether or not a system satisfies its acceptance
criteria and to enable the customer to determine whether or
not to accept the system.
Performance Testing – Performed to confirm that the
system meets performance goals such as turnaround times,
maximum delays, peak performance, etc.
Volume Testing – Tests the system to verify that it
can handle an expected volume profile.
Stress Testing – Tests the entire system to find the limits
of performance.
Configuration Testing – Tests the product over all the
possible configurations on which it is supposed to run.
Operational Readiness Testing – Tests the system to find
defects that would prevent installation and deployment by
the users.
Data Conversion and Load Testing – Performed to verify
the correctness of automated or manual conversions and/or
loads of data in preparation for implementing the new
system.
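To make the first row of the table concrete: a unit test exercises one software element in isolation. A minimal sketch in Python's standard unittest framework (the function under test, calc_discount, and its 10% discount rule are hypothetical examples, not part of the template):

```python
import unittest

def calc_discount(total):
    """Hypothetical unit under test: 10% discount on orders over 100."""
    if total < 0:
        raise ValueError("total must be non-negative")
    return total * 0.9 if total > 100 else total

class CalcDiscountTest(unittest.TestCase):
    def test_no_discount_at_or_below_threshold(self):
        self.assertEqual(calc_discount(100), 100)

    def test_discount_above_threshold(self):
        self.assertEqual(calc_discount(200), 180.0)

    def test_negative_total_rejected(self):
        # Error handling is part of the unit's design and is verified too.
        with self.assertRaises(ValueError):
            calc_discount(-1)
```

Such a suite would normally be run with `python -m unittest` as part of the developer's build, before the item reaches the QA team.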
6. Scope of Testing
Describe the scope of testing. Consider the following when
defining scope:
• Test both business processes and the technical
solution
• Specify regions and sub-regions included in testing
• Identify interfaces with other projects
• Identify interfaces with external entities such as
dealers, suppliers, and joint ventures
7. Test preparation and execution process
7.1 Test Preparation
Describe the steps for preparing for testing. The purpose
of Test Preparation is to verify that requirements are
understood and prepare for Test Execution. Steps for Test
Preparation may include:
• Identify test cases
• Identify test cycles
• Identify test data
• Develop expected results
• Develop test schedule (may be done as part of the Test
Plan)
• Obtain signoff
7.2 Test Execution
Describe the steps for executing tests. The purpose of Test
Execution is to execute the test cycles and test cases
created during the Test Preparation activity, compare
actual results to expected results, and resolve any
discrepancies. Steps for Test Execution may include:
• Verify entry criteria
• Conduct tests
• Compare actual results to expected results
• Investigate and resolve discrepancies
• Conduct regression test
• Verify exit criteria
• Obtain signoff
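The execute/compare/record cycle described above can be sketched as a small harness. The test-case records and the stand-in system_under_test function are illustrative assumptions, not part of the template:

```python
# Minimal sketch of test execution: conduct tests, compare actual
# results to expected results, and record discrepancies to investigate.

def system_under_test(x):
    """Stand-in for the real system: doubles its input."""
    return x * 2

test_cases = [
    {"id": "TC-1", "input": 2, "expected": 4},
    {"id": "TC-2", "input": 5, "expected": 10},
    {"id": "TC-3", "input": 0, "expected": 1},  # deliberately wrong expectation
]

def execute(cases):
    results, discrepancies = [], []
    for case in cases:
        actual = system_under_test(case["input"])   # conduct the test
        passed = actual == case["expected"]          # compare actual vs expected
        results.append({"id": case["id"], "actual": actual, "passed": passed})
        if not passed:                               # record for investigation
            discrepancies.append(case["id"])
    return results, discrepancies

results, discrepancies = execute(test_cases)
print(discrepancies)  # TC-3 must be investigated and resolved
```

In a real project the discrepancy list would feed the bug tracker, and any fixed cases would be re-run as part of the regression step.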
8. Test Data Management
Describe the approach for identifying and managing test
data. Consider the following guidelines:
• System and user acceptance tests – a subset of
production data could be used to initialize the test
environment. Because the focus of these tests is to
simulate the production environment and validate business
transactions, data integrity is extremely critical.
• Performance/volume/stress test – full size
production files should be used to test the performance and
volume aspects of the test. Additional ‘dummy’ data will be
created to stress the system. Data integrity is not
critical, as the test focuses on performance rather than
the ability to conduct business transactions.
• Operational readiness test – a copy of system/user
acceptance test data could be used for the operational
readiness test. Since the focus of the test is on
operational procedures, a low number of transactions will
be required and data integrity is not critical.
9. Features to be tested
The QA Engineer will use the Test Breakdown worksheet
(ref#) to record all of the features to be tested for each
of the Test Items in scope.
The Test Breakdowns will include details of the Test
Scenarios from which the Test Cases will be derived.
10. Features not to be tested
<What features would we usually not test in a project?
Security, Accessibility?>
Where it is not possible for the team to test features of a
Test Item that would have been expected, or that would fall
under the scope of testing shown in section 15, Testing
Tasks, this will be recorded in section 5 of the Test Plan.
11. Approach
All testing tasks will be conducted in line with the
Software Test Life Cycle (STLC) and in support of the
Software Development Life Cycle (SDLC). The documents used
within the SDLC will be completed both by the QA Team and
the project participants that are responsible for providing
information and deliverables to QA.
It should be decided at the start of the project if there
will be a Post Implementation Review after project delivery
and this should be conducted within two weeks of project
completion.
<Narrative description of the high level Strategy>
<A description of the test methodology that will be
followed>
<Include a diagram where possible – V-Model, Scrum,
Waterfall>
<Touch on risks, discuss the use of test equipment or data,
etc. – use this paragraph to tone-down the formality of the
rest of the document and set the scene for the rest of the
plan, look to add value for the audience in the approach
and their support of it. Touch on anything that has no
clear area for inclusion in the Test Plan>
11.1. Analysis & Planning Phase Entry Criteria
For all projects the following criteria need to be met
before the Test Items are accepted into the Analysis &
Planning Phase:
• Release scope item list is locked and prioritised
• Documentation defining the scope items is approved
and at release status
• All documents are under change control processes
11.2. Analysis & Planning Phase Exit Criteria
For the Analysis & Planning phase to be completed and allow
items to move into the Test Phase the following criteria
need to be achieved:
• Test Breakdowns and Test Cases are written and peer
reviewed
• Knowledge Share document has been completed and
reviewed by the QA Engineers
• Walkthrough and sign-off completed for the Test
Plan and Test Breakdowns
• Defined Test Estimate has been published and agreed
• The list of features in the Test Breakdown has
been prioritised.
11.3. Test Phase Entry Criteria
Before Test Items are made available for QA to test it’s
expected that:
• The Test Item Transmittal Report will be completed
• All test tools and test infrastructure are
available for use during testing
• All Test Items are development complete
• The correct versions of the code have been deployed
to the correct test environments
• Sanity and unit tests have been completed
successfully to demonstrate readiness for test
• All Test Cases have been prepared and reviewed
• The test environment has been established
• The build has been received from Development.
11.4. Test Phase Exit Criteria
For the Test Items to exit testing the following conditions
will have to be met:
• The Test Summary Report will be completed.
• All planned testing activities have been completed
to agreed levels.
• All high-priority bugs have been fixed, retested
and passed.
• No defects may be left in an open, unresolved
status.
11.5. Change Management
The Build Manager will ensure that once testing begins no
changes or modifications are made to the code used to
create the build of the product under test. The Build
Manager will inform QA against which version testing will
begin and confirm the location within
[VSS/Progress/Perforce/Subversion] the build is to be taken
from.
If changes or modifications are necessary through bug
resolution or for any other reason the Build Manager will
inform QA prior to the changes being made.
11.6. Notification / Escalation Procedures
The following diagram shows the notification and escalation
paths to be followed for the duration of the project Test
Phase.
11.7. Measures and Metrics
At the Initiation Phase of the project the QA Team will
publish a set of measures and metrics related to the test
activities of their Planning & Analysis and Execution
phases. The Test Plan also defines the milestone dates for
key deliverables such as the Test Plan and these are
metrics captured for ongoing statistical process analysis
across successive projects.
Test Preparation
• Number of Test Scenarios v. Number of Test Cases
• Number of Test Cases Planned v. Ready for Execution
• Total time spent on Preparation v. Planned time
Test Execution and Progress
• Number of Tests Cases Executed v. Test Cases Planned
• Number of Test Cases Passed, Failed and Blocked
• Total Number of Test Cases Passed by Test Item /
Test Requirements
• Total Time Spent on Execution vs Planned Time
Bug Analysis
• Total Number of Bugs Raised and Closed per Test Run
• Total Number of Bugs Closed v. Total Number of Bugs
Re-Opened
• Bug Distribution Totals by Severity per Test Run
• Bug Distribution Totals by Test Item by Severity
per Test Run
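The execution-and-progress metrics above can be derived mechanically from test-run records. A sketch, where the record format and the planned count are assumptions for illustration:

```python
from collections import Counter

# Hypothetical test-run records: one status per executed test case.
run = [
    {"case": "TC-1", "status": "passed"},
    {"case": "TC-2", "status": "failed"},
    {"case": "TC-3", "status": "passed"},
    {"case": "TC-4", "status": "blocked"},
]
planned = 6  # test cases planned for this run

counts = Counter(r["status"] for r in run)  # tally passed/failed/blocked
executed = len(run)

print(f"Executed vs planned: {executed}/{planned}")
print(f"Passed: {counts['passed']}, Failed: {counts['failed']}, "
      f"Blocked: {counts['blocked']}")
print(f"Pass rate of executed cases: {counts['passed'] / executed:.0%}")
```

Publishing these figures per test run gives the ongoing statistical picture across successive projects that this section calls for.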
12. ‘Pass/Fail’ Criteria
Each Test Item will be assigned a Pass or Fail state
dependent on two criteria:
• The total number and severity of bugs in an Open &
Unresolved state within Bugzilla/Bug Tracker.
• The level of successfully executed test
requirements.
The combination of both criteria will be used to decide
whether the Test Item can be declared Test Complete.
However, as this is the minimum level of quality believed
achievable, it is recommended that, where project
timescales allow, further testing and development be
conducted to raise the overall quality level.
Table of Issue Severity
Severity | Definition | Maximum Allowable
S1 | Crash/Legal: system crash, data loss, no workaround, legal issue, ship killer | 0
S2 | Major: operational error, wrong result | <Set by PM>
S3 | Minor: minor problems | <Set by PM>
S4 | Incidental: cosmetic problems | <Set by PM>
S5 | N/A: not applicable; used for feature requests and development tasks | Reference only
The MAXIMUM number of issues recorded in Bugzilla / Bug
Tracker that can remain in an Open & Unresolved state for
the Test Item and still be acceptable for release.
Table of Test Scenario Priority
Priority | Definition | Minimum Pass Rate
P1 - Critical | Essential to the product | 100%
P2 - Important | Necessary to the product | <Set by PM>
P3 - Desirable | Preferred, but not essential to the product | <Set by PM>
The MINIMUM set of Test Scenarios that must pass before the
Test Item can be considered for release.
Unforeseen issues arising during the Test Phase may impact
the agreed ‘Pass/Fail’ Criteria for the Test Item. Issues
can be managed through review with the QA Team and the
project authorities.
13. Suspension Criteria & Resumption Requirements
Testing of Test Items will be suspended if there are
problems in the test environment, a show-stopper is
detected in the build, or too many defects are pending:
1a) Suspension criteria:
A Severity 1 issue is logged and requires fixing before
further testing can take place (a Blocking Issue)
1b) Resumption requirement:
The issue will need to be fixed before the Test Item is
returned to QA for testing.
2a) Suspension criteria:
Significant differences exist between observed behaviour of
the Test Item and that shown in Test Scenario, Test Case or
as expected from the previous version of the technology.
2b) Resumption requirement:
Development, QA and PM must come to a conclusion on
resolving the issue and agreeing a definition of the
expected behaviour.
3a) Suspension criteria:
A Test Item sent for testing fails more than 20% of
Developer Unit Tests.
3b) Resumption requirement:
The Test Item must be fixed or Unit Tests refactored if out
of date and then demonstrated to pass with <20% failure
rate.
14. Test Deliverables
The following artefacts will be produced during the testing
phase:
• Test Plan
Used to prescribe the scope, approach, resources, and
schedule of the testing activities. To identify the items
being tested, the features to be tested, the testing tasks
to be performed, the personnel responsible for each task,
and the risks associated with this plan.
• Test Schedule
This describes the tasks, time, sequence, duration and
assigned staff.
• Test Breakdown
This includes the Test Scenarios, their priority and the
related number of Test Cases, along with the defined
estimates of the time to write and execute the Test Cases.
• Test Cases
Detail the pre-conditions, test steps and expected and
actual outcome of the tests. There will be positive and
negative test cases.
• Periodic progress and metric update reports
• Bug Reporting
• Test Summary Reports
15. Testing Tasks
The Testing Tasks that the QA Team will deliver cover the
following scope:
• Fully In Scope: Functional and Regression Testing
• Partially in Scope: Cross Browser Compatibility,
Integration in the Large.
• Out of Scope: Performance testing, Automated
Regression, all forms of Non-Functional,
Accessibility Compliance Testing, Security
Testing, User Documentation Review.
16. Environmental and Infrastructure Needs
The following details the environmental and infrastructure
needs required for the testing of lastminute.com Test Items
and the execution of Regression Testing.
Hardware.
• Integration Environment:
• QA-A: http://.....
• QA-B: http://....
• Pre-live Staging:
Software
• <Name of Bug Tracking Tool>: http://...
• <Name of Test Case Management Tool>: http://
Infrastructure
• Network connections are available on all Test
Systems as required.
Test Repository
• http://...
17. Responsibility Matrix
The table below outlines the main responsibilities in brief
for test activities:
Activity Product Manager Development
Manager Test Manager Test Engineer
Provision of Technical Documents X X
Test Planning and Estimation X X
Review and Sign off Test Plan X X X
Testing Documentation X X
Test Preparation and Execution X
Test Environment Set-up X
Change Control of Test Environments X
X
Provision of Unit Tested Test Items X
Bug fixes and return to QA for re-test X
Product Change Control X X X
Ongoing Test Reporting X X
Test Summary Reporting X
18. Staffing and Training Needs
Staffing
Staffing levels for the test activities will be:
• 1 x Test Manager for the duration of test planning
at 50% effort against plan.
• The required number of QA Engineers for the
duration of test execution at 100% effort against plan.
Training
For each project the training needs will be assessed and
defined in the Test Plan.
19. Schedules and Resource Plans
Team Plan
The QA Team will maintain a Team Plan which records
individual assignment to testing tasks against assignable
days. This will also record time planned and delivered
against the tasks which will be used to update relevant
Project Schedules and be used in periodic reporting.
Test Schedule
The Test Schedule for the Release will be located within
<Document Store Name> at: http://
20. Risks and Contingencies
1. Risk: Delays in delivering completed Test Items from
Development would impact test timescales and final Release
quality.
Mitigation: Product Management and Development to advise of
any delays and adjust Release Scope or Resources to allow
the test activities to be performed.
Impact: High
2. Risk: Delays in the turnaround time for fixing critical
bugs, which would require re-testing, could impact the
project dates.
Mitigation: Strong management of bug resolution will be
required from Development to ensure bugs are fixed and
available for re-testing in the scheduled time.
Impact: High
3. Risk: The QA, Development or PM teams require domain
guidance from one another and the right people are not
available, delaying project activities.
Mitigation: QA, Development and PM teams to ensure they are
available at critical points or contactable during the
project activities.
Impact: Medium
4. Risk: Features of Test Items will not be testable.
Mitigation: QA will record untested features and request
the PM to assess the business risk of releasing untested
features.
Impact: Low
5. Risk: Unexpected dependencies between Test Items and
service components are encountered that require revision of
Test Scenarios and related Test Cases.
Mitigation: Information about dependencies is updated and
communicated promptly to allow timely revision of Test
Scenarios and Test Cases.
Impact: Low
21. Approvals
The following people are required to approve the Test
Strategy
Approval By Approval
Test Manager
QA Department Manager
Product Owner
Development Manager
Project Manager
| Is This Answer Correct ? | 7 Yes | 16 No |
Answer / rajeshwar rao
A test strategy is a company-level document. It is
developed by the QA analyst.
The test strategy defines the testing activities followed
by the testing team.
The components include: 1. business issues, 2. defect
tracking, 3. test environment.
| Is This Answer Correct ? | 10 Yes | 22 No |
Answer / subba rao. nunna
A test strategy is a document prepared by the PM. It
contains:
1. The testing approach
2. What type of testing methodologies are to be followed
3. Which modules are to be tested.
| Is This Answer Correct ? | 10 Yes | 24 No |
Answer / srikanth.m
Test strategy is one of the contents of the Test Plan
document (which is a strategic document to carry out the
testing process).
| Is This Answer Correct ? | 7 Yes | 22 No |