SQA
Black box testing
not based on any knowledge of internal design or code. Tests are based on requirements and functionality.
White box testing
based on knowledge of the internal logic of an application’s code. Tests are based on coverage of code statements, branches, paths, conditions.
Unit testing
the most ‘micro’ scale of testing; to test particular functions or code modules. Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. Not always easily done unless the application has a well-designed architecture with tight code; may require developing test driver modules or test harnesses.
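As a minimal sketch (the function and its behaviour are hypothetical, not from any particular project), a unit test written with Python's built-in unittest framework might look like this:

```python
import unittest

# Hypothetical unit under test: a single small function.
def apply_discount(price, percent):
    """Return price reduced by the given percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()
```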
Incremental integration testing
continuous testing of an application as new functionality is added; requires that various aspects of an application’s functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed; done by programmers or by testers.
Integration testing
testing of combined parts of an application to determine if they function together correctly. The ‘parts’ can be code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.
Functional testing
black-box type testing geared to functional requirements of an application; this type of testing should be done by testers. This doesn't mean that the programmers shouldn't check that their code works before releasing it (which of course applies to any stage of testing).
System testing
black box type testing that is based on overall requirement specifications; covers all combined parts of a system.
End-to-end testing
similar to system testing; the ‘macro’ end of the test scale; involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.
Sanity testing
typically an initial testing effort to determine if a new software version is performing well enough to accept it for a major testing effort. For example, if the new software is crashing systems every 5 minutes, bogging down systems to a crawl, or destroying databases, the software may not be in a ’sane’ enough condition to warrant further testing in its current state.
Regression testing
re-testing after fixes or modifications of the software or its environment. It can be difficult to determine how much re-testing is needed, especially near the end of the development cycle. Automated testing tools can be especially useful for this type of testing.
Acceptance testing
final testing based on specifications of the end-user or customer, or based on use by end-users/customers over some limited period of time.
Load testing
testing an application under heavy loads, such as testing of a web site under a range of loads to determine at what point the system's response time degrades or fails.
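A minimal sketch of the idea using only the Python standard library; the URL and load levels are placeholders, and a real load test would normally use a dedicated tool:

```python
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

URL = "http://localhost:8000/"  # hypothetical site under test

def timed_request(_):
    # Issue one request and return its wall-clock duration in seconds.
    start = time.perf_counter()
    urlopen(URL, timeout=10).read()
    return time.perf_counter() - start

# Ramp the number of concurrent users and watch where response time degrades.
for users in (1, 5, 25, 100):
    with ThreadPoolExecutor(max_workers=users) as pool:
        latencies = list(pool.map(timed_request, range(users)))
    print(f"{users:>3} users: mean {sum(latencies)/len(latencies):.3f}s, "
          f"worst {max(latencies):.3f}s")
```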
Stress testing
term often used interchangeably with ‘load’ and ‘performance’ testing. Also used to describe such tests as system functional testing while under unusually heavy loads, heavy repetition of certain actions or inputs, input of large numerical values, large complex queries to a database system, etc.
Performance testing
term often used interchangeably with ’stress’ and ‘load’ testing. Ideally ‘performance’ testing (and any other ‘type’ of testing) is defined in requirements documentation or QA or Test Plans.
Usability testing
testing for ‘user-friendliness’. Clearly this is subjective, and will depend on the targeted end-user or customer. User interviews, surveys, video recording of user sessions, and other techniques can be used. Programmers and testers are usually not appropriate as usability testers.
Install/uninstall testing
testing of full, partial, or upgrade install/uninstall processes.
Recovery testing
testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.
Security testing
testing how well the system protects against unauthorized internal or external access, willful damage, etc; may require sophisticated testing techniques.
Compatibility testing
testing how well software performs in a particular hardware/software/operating system/network/etc. environment.
Exploratory testing
often taken to mean a creative, informal software test that is not based on formal test plans or test cases; testers may be learning the software as they test it.
Ad-hoc testing
similar to exploratory testing, but often taken to mean that the testers have significant understanding of the software before testing it.
User acceptance testing
determining if software is satisfactory to an end-user or customer.
Comparison testing
comparing software weaknesses and strengths to competing products.
Alpha testing
testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. Typically done by end-users or others, not by programmers or testers.
Beta testing
testing when development and testing are essentially completed and final bugs and problems need to be found before final release. Typically done by end-users or others, not by programmers or testers.
Mutation testing
a method for determining if a set of test data or test cases is useful, by deliberately introducing various code changes (’bugs’) and retesting with the original test data/cases to determine if the ‘bugs’ are detected. Proper implementation requires large computational resources.
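A hand-rolled sketch of the idea (real mutation-testing tools automate the introduction and scoring of mutants); the function under test and the test data are hypothetical:

```python
# Hand-rolled illustration of the mutation-testing idea.
def is_adult(age):
    return age >= 18          # original code

def is_adult_mutant(age):
    return age > 18           # deliberately introduced 'bug': >= became >

test_data = [(17, False), (21, True)]        # suite with no boundary value
for age, expected in test_data:
    assert is_adult(age) == expected          # original passes
    assert is_adult_mutant(age) == expected   # mutant also passes...

# ...so the mutant "survives", showing the test data is weak. Adding the
# boundary case (18, True) would kill the mutant, proving the improved
# test data is more useful.
print("Mutant survived: the suite never exercises the boundary age 18.")
```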
Q. What is software quality assurance?
SQA consists of planning, coordinating, and other strategic activities associated with measuring product quality against external requirements and specifications (process-related activities).
Q. Describe the components of a typical test plan.
Test Plan: a high-level document that defines a testing project so that it can be properly measured and controlled. It defines the test strategy and the organized elements of the test life cycle, including resource requirements, the project schedule, and test requirements.
Q. What is the difference between QC and QA?
Quality assurance is the process whereby the documents for the product to be tested are verified against the actual requirements of the customers. It includes inspection, auditing, code review, meetings, etc.
Quality control is the process whereby the product is actually executed and the expected behaviour is verified by comparing it with the actual behaviour of the software under test. All the testing types, such as black-box testing and white-box testing, come under quality control.
Quality assurance is done before quality control.
Q. What are the contents of a test plan?
1. Objective and scope of the test plan.
2. Functionalities that need to be tested and not to be tested.
3. Resource planning.
4. Scheduling.
5. Test strategy.
6. Test deliverables.
7. Testing terminology and metrics definition.
8. Entry criteria and exit criteria.
9. Functionalities that need to be automated and not to be automated.
10. Risks and contingency plan.
Simple definitions of QA and QC:
QA: assurance of process control. We follow certain quality standards and strive for process improvement; we do not deal directly with the product. The intention is that by following good quality standards we will automatically produce a better product.
That means we follow a prevention mechanism, as in the proverb "prevention is better than cure": rather than ignoring the standards, releasing a poor product, and then fixing it, it is better to follow the standards from the start.
QC: control of the product. After finding defects we rectify them; it is quite the reverse of QA, in that we are curing the defects.
A commonly cited analogy for QA and QC is traffic rules.
Q. What are the stages of the testing life cycle?
The testing life cycle has the following stages:
1. System study
2. Test plan
3. Write test cases
4. Traceability matrix
5. Execute test cases
6. Defect tracking
7. Test execution report
8. Retrospective
V Model: a software development life cycle model in which each development phase has a corresponding test phase planned in parallel.
A traceability matrix is created by associating requirements with the work products that satisfy them. Tests are associated with the requirements on which they are based and the product tested to meet the requirement. [Figure: traceability concept]
Above is a simple traceability matrix structure. There can be more things included in a traceability matrix than shown. In traceability, the relationship of driver to satisfier can be one-to-one, one-to-many, many-to-one, or many-to-many.
Traceability requires unique identifiers for each requirement and product. Numbers for products are established in a configuration management (CM) plan.
Traceability ensures completeness: that all lower-level requirements come from higher-level requirements, and that all higher-level requirements are allocated to lower-level requirements. Traceability is also used to manage change and provides the basis for test planning.
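As a small illustration (all requirement and test IDs are hypothetical), a traceability matrix can be held as a simple mapping from requirements to the test cases that cover them, which makes the completeness check mechanical:

```python
# Illustrative traceability matrix: requirement ID -> covering test cases.
matrix = {
    "REQ-01": ["TC-01", "TC-02"],
    "REQ-02": ["TC-03"],
    "REQ-03": [],          # gap: a requirement with no covering test
}

# Completeness check: every requirement must trace to at least one test.
uncovered = [req for req, tests in matrix.items() if not tests]
print("Requirements with no test coverage:", uncovered)  # ['REQ-03']
```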
Q. What is meant by entrance and exit criteria? Could you give some details about entrance and exit criteria?
Ans. Entry and exit criteria apply not only to testing but to all phases of the life cycle.
Entry criteria: all the prerequisites for entering a phase of the life cycle. For example, in testing the entry criteria would be the use-case documents, the test plan, etc.
Exit criteria: all the deliverables from a phase of the SDLC. For example, in testing the exit criteria would be the test report, the traceability matrix, etc.
Entry criteria are those factors that must be present, at a minimum, before the activities can start. A good example is integration testing: before the modules can be interfaced, they should compile cleanly and should have passed unit testing.
In short, entry criteria say when to start the testing, and exit criteria say when to stop the testing.
Q. What is CMM?
CMM (Capability Maturity Model) is an industry-standard model for defining and measuring the "maturity" of a software company's development process and for providing direction on what it can do to improve its software quality. It was developed by the software development community along with the Software Engineering Institute (SEI).
CMM software Maturity Levels:
Level 1: Initial: The software development processes at this level are ad hoc and often chaotic. The project's success depends on heroes and luck. There are no general practices for planning, monitoring, or controlling the process. It is impossible to predict the time and cost to develop the software. The test process is just as ad hoc as the rest of the process.
Level 2: Repeatable: This maturity level is best described as project-level thinking. Basic project management processes are in place to track the cost, schedule, functionality, and quality of the product. Lessons learned from previous similar projects are applied. There is a sense of discipline. Basic software testing practices, such as test plans and test cases, are used.
Level 3: Defined: Organizational, not just project-specific, thinking comes into play at this level. Common management and engineering activities are standardized and documented. These standards are adapted and approved for use on different projects. The rules are not thrown out when things get stressful. Test documents and plans are reviewed and approved before testing begins. The test group is independent from the developers. The test results are used to determine when the software is ready.
Level 4: Managed: At this maturity level, the organization's process is under statistical control. Product quality is specified quantitatively beforehand (for example, this product won't release until it has fewer than 0.5 defects per 1,000 lines of code) and the software isn't released until that goal is met. Details of the development process and the software quality are collected over the project's development, and adjustments are made to correct deviations and to keep the project on plan.
Level 5: Optimizing: This level is called "optimizing" (not "optimized") because it is continually improving from Level 4. New technologies and processes are attempted, the results are measured, and both incremental and revolutionary changes are instituted to achieve even better quality levels. Just when everyone thinks the best has been obtained, the crank is turned one more time, and the quality improves yet again.
Q. What is pairwise testing?
Ans. Pairwise (a.k.a. all-pairs) testing is an effective test case generation technique based on the observation that most faults are caused by interactions of at most two factors. Pairwise-generated test suites cover all combinations of two factors and are therefore much smaller than exhaustive suites, yet still very effective in finding defects.
Another definition:
Pairwise testing is a specification-based testing criterion which requires that, for each pair of input parameters of a system, every combination of valid values of those two parameters be covered by at least one test case.
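A rough sketch of one way to generate such a suite, using a greedy strategy over the full Cartesian product (fine for small parameter sets, not how production pairwise tools work); the parameter names and values are made up:

```python
from itertools import combinations, product

def all_pairs(parameters):
    """Greedy pairwise suite generation; illustrative, not size-optimal."""
    names = list(parameters)
    # Every pair of parameters, with every combination of their values,
    # must be covered by at least one test case.
    uncovered = {((a, va), (b, vb))
                 for a, b in combinations(names, 2)
                 for va, vb in product(parameters[a], parameters[b])}
    suite = []
    while uncovered:
        # Pick the full assignment that covers the most uncovered pairs.
        best_case, best_hits = None, set()
        for values in product(*(parameters[n] for n in names)):
            case = dict(zip(names, values))
            hits = {((a, va), (b, vb)) for ((a, va), (b, vb)) in uncovered
                    if case[a] == va and case[b] == vb}
            if len(hits) > len(best_hits):
                best_case, best_hits = case, hits
        suite.append(best_case)
        uncovered -= best_hits
    return suite

params = {"browser": ["Chrome", "Firefox"],
          "os": ["Windows", "Linux", "macOS"],
          "db": ["MySQL", "Oracle"]}
suite = all_pairs(params)
print("exhaustive:", 2 * 3 * 2, "cases; pairwise:", len(suite), "cases")
for case in suite:
    print(case)
```

For these three parameters the exhaustive suite has 12 cases, while the pairwise suite typically needs only 6 or 7.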
Q. What is the difference between a use case and a test case?
Ans.
A use case is a high-level scenario that specifies the functionality of the application from a business perspective.
A test case is the implementation of that high-level scenario (the use case): a detailed, step-by-step account of the procedure to test a particular functionality of the application.
Q. What is Test Methodology?
A test methodology describes the overall approach to checking that the application programs work properly, without bugs, when exercised by our test cases.
Test methodology means the different methods carried out in order to test an application. The following are the test methodologies followed in testing an application or a product:
Black-Box Testing
In this strategy, the tester views the program as a black box and does not see the program's code. Techniques include equivalence partitioning, boundary-value analysis, and error guessing.
White-Box Testing
In this strategy, the tester examines the internal structure of the program. Techniques include statement coverage, decision coverage, condition coverage, decision/condition coverage, and multiple-condition coverage.
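For example, a hedged sketch of decision coverage on a made-up one-branch function: two test cases, one per outcome of the decision, give 100% decision coverage:

```python
# Hypothetical unit: one if/else, so decision coverage needs two tests,
# one driving the condition true and one driving it false.
def classify(temp_c):
    if temp_c >= 30:
        return "hot"
    else:
        return "mild"

assert classify(35) == "hot"    # condition-true branch
assert classify(20) == "mild"   # condition-false branch
# Both outcomes of the decision are exercised: 100% decision coverage.
```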
Gray-Box Testing
In this strategy, black-box testing is combined with knowledge of the internals, such as using SQL for database queries and adding/loading data sets to confirm functions, as well as querying the database to confirm expected results.
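A small sketch of the gray-box idea using Python's built-in sqlite3: drive a stubbed, hypothetical application function through its public interface, then confirm the resulting database state directly with SQL:

```python
import sqlite3

def signup(conn, email):
    # Stand-in for the application code under test.
    conn.execute("INSERT INTO users (email, status) VALUES (?, 'active')",
                 (email,))

# Black-box step: call the public function. Gray-box step: verify via SQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (email TEXT PRIMARY KEY, status TEXT)")
signup(conn, "alice@example.com")
row = conn.execute("SELECT status FROM users WHERE email = ?",
                   ("alice@example.com",)).fetchone()
assert row == ("active",), "expected the signup to insert an active user"
print("gray-box check passed")
```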
Test Script
A type of test file: a set of instructions run automatically by a software or hardware test tool.
Suite
A collection of test cases or scripts.
Q. What type of methodology do you adopt to conduct black-box testing?
Ans.
The various methodologies used in black-box testing (BBT) are listed below; a short boundary-value sketch follows the list.
1. Equivalence Partitioning
2. Boundary Value Analysis
3. Cause-Effect Graphing
4. Error Guessing
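As promised above, a short sketch of equivalence partitioning and boundary-value analysis against a made-up rule (a field that accepts ages 18 to 60 inclusive):

```python
# Hypothetical input rule: a field accepts ages 18..60 inclusive.
def accepts_age(age):
    return 18 <= age <= 60

# Equivalence partitioning: one representative value per class.
partitions = {"below": 10, "valid": 35, "above": 70}
# Boundary-value analysis: values at and just beyond each edge.
boundaries = [17, 18, 19, 59, 60, 61]

assert not accepts_age(partitions["below"])
assert accepts_age(partitions["valid"])
assert not accepts_age(partitions["above"])
assert [accepts_age(a) for a in boundaries] == [False, True, True,
                                                True, True, False]
```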
Q. What is the difference between an error, a bug, and a defect?
Ans.
Error: a mistake made by a developer in the development environment is called an error.
Bug: if that defect is found by a tester in the testing environment, it is called a bug.
Defect: if the same error or bug is found by the end user or customer in their environment, it is called a defect. A defect is also defined as a "deviation from the desired product attribute".
Test Plan: A document describing the scope, approach, resources, and schedule of intended testing activities. It identifies test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency planning.
Q. What is the exact difference between CMM and CMMI?
Ans.
CMM describes the criteria for measuring the maturity of a software development organisation, whereas
CMMI (Capability Maturity Model Integration) is the result of the combined efforts of industry, research, and the academic community. It involves performance, product quality, and employee productivity.
The Capability Maturity Model (CMM) was the first capability maturity model, a way to develop and refine an organization's processes. The first CMM was for the purpose of developing and refining software development processes.
There were initially 18 KPAs (Key Process Areas) in the CMM model. The Capability Maturity Model for Software has been retired, and CMMI replaces it. The SEI no longer maintains the SW-CMM model, its associated appraisal methods, or training materials, nor does the SEI offer SW-CMM training.
Capability Maturity Model Integration (CMMI) is a process improvement approach that provides organizations with the essential elements of effective processes. It can be used to guide process improvement across a project, a division, or an entire organization.
The capability or maturity levels for the company depend on the type of representation you are following:
Comparison of Capability and Maturity Levels
Continuous Representation Capability Levels
Level 0 - Incomplete
Level 1 - Performed
Level 2 - Managed
Level 3 - Defined
Level 4 - Quantitatively Managed
Level 5 - Optimizing
Staged Representation Maturity Levels
Level 0 - N/A
Level 1 - Initial
Level 2 - Managed
Level 3 - Defined
Level 4 - Quantitatively Managed
Level 5 - Optimizing
Q. Can anyone tell me the difference between stress and load testing? And the difference between integration testing and system testing?
Ans.
Stress testing --- testing to evaluate a system or component at or beyond the limits of its specified requirements.
Load testing --- a test type concerned with the behaviour of a system or component under increasing load.
Load Testing :- Load Testing is testing an application under heavy loads, such as the testing of a website under a range of loads to determine at what point the system response time will degrade or fail.
Stress Testing :- Testing done at or beyond the specified limits of performance, to check whether the s/w fails. The process of performing stress testing is the same as that of performance testing, but under higher load conditions.