Manual Testing Interview Question Answer Part 1

Risk-based testing is the term used for an approach to creating a test strategy that is based on prioritizing tests by risk. The basis of the approach is a detailed risk analysis and prioritizing of risks by risk level. Tests to address each risk are then specified, starting with the highest risk first.
Preventative tests are designed early; reactive tests are designed after the software has been produced.
The purpose of exit criteria is to define when a test level is completed.
The likelihood of an adverse event and the impact of the event determine the level of risk.
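The likelihood-times-impact idea can be sketched in a few lines. This is an illustrative example only: the 1-5 scales, the feature names, and the simple multiplication are assumptions, not a prescribed ISTQB formula.

```python
# Hypothetical risk scoring: risk level = likelihood x impact,
# both on an assumed 1-5 scale. Used to order tests, highest risk first.

def risk_level(likelihood, impact):
    """Return a numeric risk score (higher = riskier)."""
    return likelihood * impact

# Illustrative product areas with assumed likelihood/impact ratings.
risks = {
    "payment processing": risk_level(likelihood=4, impact=5),  # 20
    "login":              risk_level(likelihood=2, impact=5),  # 10
    "report formatting":  risk_level(likelihood=3, impact=2),  # 6
}

# Test the highest-risk areas first.
ordered = sorted(risks, key=risks.get, reverse=True)
print(ordered)  # ['payment processing', 'login', 'report formatting']
```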
Decision table testing is used for testing systems for which the specification takes the form of rules or cause-effect combinations. In a decision table the inputs (conditions) are listed in a column, with the outputs (actions) in the same column but below the inputs. The remainder of the table explores combinations of inputs to define the outputs produced.
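A small decision table might look like the sketch below. The loan-approval rules here are invented for illustration; each column of the table becomes one test case.

```python
# Illustrative decision table for a hypothetical loan-approval rule set.
#
#                      Rule 1  Rule 2  Rule 3  Rule 4
# Conditions
#   credit OK            Y       Y       N       N
#   income OK            Y       N       Y       N
# Actions
#   approve loan         Y       N       N       N

def approve_loan(credit_ok, income_ok):
    """System under test: approve only when both conditions hold."""
    return credit_ok and income_ok

# One test case per column exercises every rule in the table.
table = [
    (True,  True,  True),   # Rule 1
    (True,  False, False),  # Rule 2
    (False, True,  False),  # Rule 3
    (False, False, False),  # Rule 4
]
for credit_ok, income_ok, expected in table:
    assert approve_loan(credit_ok, income_ok) == expected
print("all decision table rules covered")
```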
To identify defects in any software work product.
It avoids author bias in defining effective tests.
The exit criteria are determined on the basis of 'Test Planning'.
Testing performed by potential customers at their own locations.
Rapid Application Development (RAD) is formally a parallel development of functions and subsequent integration. Components/functions are developed in parallel as if they were mini projects; the developments are time-boxed, delivered, and then assembled into a working prototype. This can very quickly give the customer something to see and use and provide feedback regarding the delivery and their requirements. Rapid change and development of the product is possible using this methodology. However, the product specification will need to be developed for the product at some point, and the project will need to be placed under more formal controls prior to going into production.
It helps prevent defects from being introduced into the code.
Testing Technique: –
Is a process for ensuring that some aspect of an application system or unit functions properly. There may be few techniques but many tools.

Testing Tools: –
Is a vehicle for performing a test process. The tool is a resource to the tester, but by itself is insufficient to conduct testing.
Component testing, also known as unit, module and program testing, searches for defects in, and verifies the functioning of software (e.g. modules, programs, objects, classes, etc.) that are separately testable. Component testing may be done in isolation from the rest of the system depending on the context of the development life cycle and the system. Most often stubs and drivers are used to replace the missing software and simulate the interface between the software components in a simple manner. A stub is called from the software component to be tested; a driver calls a component to be tested.
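The stub/driver relationship can be shown in a minimal sketch. The `order_total` component and its tax-service collaborator are hypothetical names invented for this example.

```python
# Component testing sketch: the component under test calls a collaborator
# (a tax service) that is not yet available, so a stub stands in for it,
# and a driver calls the component and checks the result.

def order_total(subtotal, tax_service):
    """Component under test: adds tax obtained from a collaborator."""
    return subtotal + tax_service(subtotal)

# Stub: called FROM the component under test; returns a canned value.
def tax_service_stub(subtotal):
    return round(subtotal * 0.10, 2)  # assumed fixed 10% rate for the test

# Driver: CALLS the component under test in isolation and verifies it.
def test_driver():
    result = order_total(100.00, tax_service_stub)
    assert result == 110.00
    print("component test passed")

test_driver()
```

The stub replaces software the component depends on; the driver replaces software that would normally call the component.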
Testing the end-to-end functionality of the system as a whole is defined as functional system testing.
Independent testers are unbiased and identify different defects at the same time.
The bulk of the test design work begins after the software or system has been produced.
There are currently seven different agile methodologies that I am aware of:

1.Extreme Programming (XP)
2.Scrum
3.Lean Software Development
4.Feature-Driven Development
5.Agile Unified Process
6.Crystal
7.Dynamic Systems Development Model (DSDM)
A 'Test Analysis' and 'Design' includes evaluation of the testability of the requirements and system.
Random testing is often known as monkey testing. In this type of testing, data is generated randomly, often using a tool or automated mechanism. The system is tested with this randomly generated input and the results are analysed accordingly. Such testing is less reliable; hence it is normally used by beginners and to see whether the system will hold up under adverse inputs.
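A minimal monkey-testing loop might look like this. The `parse_age` function is a hypothetical system under test; the invariant checked is simply that it never crashes and never returns an out-of-range value.

```python
# Random ("monkey") testing sketch: feed randomly generated strings to a
# hypothetical parser and check it neither raises nor breaks its contract.
import random
import string

def parse_age(text):
    """System under test: returns an int age in 0-150, or None for bad input."""
    try:
        age = int(text)
    except ValueError:
        return None
    return age if 0 <= age <= 150 else None

random.seed(42)  # fixed seed so the run is reproducible
for _ in range(1000):
    junk = "".join(random.choices(string.printable, k=random.randint(0, 8)))
    result = parse_age(junk)  # must never raise
    assert result is None or 0 <= result <= 150
print("monkey test survived 1000 random inputs")
```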
1.Provide developers and other parties with feedback about the problem to enable identification, isolation and correction as necessary.
2.Provide ideas for test process improvement.
3.Provide a vehicle for assessing tester competence.
4.Provide testers with a means of tracking the quality of the system under test.
Because they share the aim of identifying defects but differ in the types of defect they find.
In contrast to informal reviews, formal reviews follow a formal process. A typical formal review process consists of six main steps:
1.Planning
2.Kick-off
3.Preparation
4.Review meeting
5.Rework
6.Follow-up.
The moderator (or review leader) leads the review process. He or she determines, in co-operation with the author, the type of review, approach and the composition of the review team. The moderator performs the entry check and the follow-up on the rework, in order to control the quality of the input and output of the review process. The moderator also schedules the meeting, disseminates documents before the meeting, coaches other team members, paces the meeting, leads possible discussions and stores the data that is collected.
An input or output range of values such that only one value in the range becomes a test case.
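Equivalence partitioning can be sketched with one representative value per partition. The ticket-pricing rule and its age boundaries below are invented for illustration.

```python
# Equivalence partitioning sketch for a hypothetical discount rule:
# ages 0-12 -> child rate, 13-64 -> full rate, 65+ -> senior rate.
# One representative value from each partition is enough for a test case.

def ticket_price(age):
    if age < 0:
        raise ValueError("invalid age")
    if age <= 12:
        return 5.0
    if age <= 64:
        return 10.0
    return 7.0

# One test value drawn from each equivalence partition.
partitions = {
    "child (0-12)":  (8,  5.0),
    "adult (13-64)": (30, 10.0),
    "senior (65+)":  (70, 7.0),
}
for name, (age, expected) in partitions.items():
    assert ticket_price(age) == expected
print("all partitions covered")
```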