Testing – Software testing is a process used to identify the correctness, completeness and quality of software. Testing is also done to make sure that the final product satisfies the client's requirements and is defect free.
Quality – Meeting the customer's requirements not just the first time, but every time.
Positive Testing – Proving that the application is working correctly.
Negative Testing – Proving that the application doesn’t work correctly.
Error: The deviation between an actual value and the expected value.
Bug: It is found in the development environment before the product is shipped to the respective customer.
Defect: It is found in the product itself after it is shipped to the respective customer.
Test Scope – Defines the scope of your testing by identifying what you will test and what you will not. For example, you might limit your testing of client computer hardware to the minimum supported configurations or to the standard configurations.
Test Objective – The requirements that are to be verified by the testing effort.
Test strategy – Process of identifying the areas that are to be focused while testing. It can also be defined as the process of identifying the area which is to be given higher importance while testing.
Test Plan – A test plan is a document that describes the objectives, scope, approach and focus of a software testing effort.
Traceability Matrix – A document used for tracking requirements, test cases and defects. It is prepared to satisfy the client that test coverage is complete end to end. It typically contains the Requirement/Baseline document reference number, the Test case/Condition, and the Defect/Bug ID; using this document one can trace a requirement from its defect ID.
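A traceability matrix can be kept as a simple table of linked IDs. The sketch below is illustrative only (all IDs and field names are made up), showing requirement, test case and defect columns and how a requirement can be traced back from a defect ID:

```python
# Illustrative traceability matrix rows; the IDs and field names are hypothetical.
traceability_matrix = [
    {"requirement_id": "REQ-001", "test_case_id": "TC-101", "defect_id": None},
    {"requirement_id": "REQ-002", "test_case_id": "TC-102", "defect_id": "BUG-17"},
]

def requirement_for_defect(defect_id):
    """Trace a requirement back from a defect ID."""
    for row in traceability_matrix:
        if row["defect_id"] == defect_id:
            return row["requirement_id"]
    return None

print(requirement_for_defect("BUG-17"))  # -> REQ-002
```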
Test Case – A set of procedures carried out in order to identify deviations from expected behaviour.
Test Suite – a test suite (more formally known as a validation suite) is a collection of test cases that are intended to be used as input to a software program to show that it has some specified set of behaviors (i.e., the behaviors listed in its specification).
Testware – Testware includes test cases, test plans, test reports, etc. Testware has significant value because it can be reused.
Test Bed – The environment (hardware, software, network and data configuration) in which an application is tested.
Metrics – Software metric is a measure of some property of a piece of software or its specifications.
Types of metrics – Process metrics – Measuring the efficiency of the development or testing process.
Product metrics – Measuring the performance and quality of the software.
Reliability metrics – Measuring how reliable the application is.
Cyclomatic complexity – A measure of software complexity; it counts the number of linearly independent paths through a program module.
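As an illustrative sketch (not from the source notes), a small Python function with two decision points has cyclomatic complexity 2 + 1 = 3, i.e. three linearly independent paths, each of which a test case can exercise:

```python
def classify(amount):
    """Two decision points (the two ifs), so V(G) = 2 + 1 = 3."""
    if amount < 0:           # independent path 1: negative input
        return "invalid"
    if amount > 1000:        # independent path 2: large amount
        return "review"
    return "ok"              # independent path 3: normal amount

# One test per linearly independent path:
assert classify(-5) == "invalid"
assert classify(5000) == "review"
assert classify(200) == "ok"
```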
Severity – Impact of a bug on an application.
Priority - Importance given for fixing a bug.
Types of Severity – Critical: The software crashes, hangs, or causes you to lose data.
Major: A major feature is broken.
Minor: Minor loss of function, and there's an easy work around.
Types of priority - low, medium, high
Types of Testing:
Ad-hoc Testing – It is the least formal testing approach carried out randomly to check the functionality of an application. It is done to identify the defects which are not found through normal testing approach.
Sanity Testing – A cursory testing process carried out to confirm that the application is functioning according to the specifications. The whole application is tested during this process.
Smoke Testing – Testing a whole application to confirm its basic working condition and stability.
Gorilla Testing – Testing each and every functionality of a particular module in detail.
Monkey Testing – Randomly testing an application here and there to prove that it does not crash. Monkey testing is usually carried out using automation tools.
End-to-End Testing – Testing an application completely in its real-time environment (interacting with the database, hardware, software, network and other systems).
Exhaustive Testing – Testing an application with all possible combinations of inputs and conditions.
Exploratory Testing – It is an approach in software testing with simultaneous learning, test design and test execution.
It is carried out if requirements and specifications are incomplete, or if there is lack of time.
Advantages: Less preparation is needed, important bugs are found fast.
Disadvantages: Tests can't be reviewed in advance, and when exploratory tests are repeated they will not be performed in exactly the same manner.
Regression Testing – Testing carried out to confirm that changes made while resolving bugs do not have any negative impact on the functionality or performance of the application.
Regression Testing types – Full regression – Testing entire application
Regional regression – Testing the particular module which had error.
Retesting – Testing the functionality of an application repeatedly with different inputs.
Redundant testing – Repeatedly testing an application with same inputs.
Reconstruction – Process of developing software from the beginning.
Re-engineering – Re-Engineering is the process of making alterations in an existing software.
Alpha Testing – Testing conducted at the developer's site, in a controlled environment, by the end user of the software.
Beta Testing – Testing an application after installing it at the client's site.
Accessibility Testing – Testing whether the application is easily accessible to persons with disabilities.
Usability Testing – Testing the user-friendliness of an application.
User acceptance testing - User acceptance testing is the final testing process carried out for determining whether the software is satisfactory to an end-user or customer.
Performance Testing – Testing how an application reacts or performs in a particular situation or condition (Load & Stress).
Load Testing – Testing the performance of an application by applying the load which is specified in the SRS.
Stress Testing – Performance testing carried out to check the stability and tolerance level of an application by giving heavy load which is beyond the limit specified in the SRS or executing the application in an unfavorable environment.
Scalability Testing – Testing the performance of an application by increasing the load gradually.
Soak Testing – Testing the performance of an application by running it under heavy load for a prolonged time.
Volume Testing – Testing in which the system is subjected to a large volume of data.
Endurance Testing - Checks for problems that may occur with prolonged execution.
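A minimal sketch of how a load test can apply concurrent requests and record response times; handle_request below is a hypothetical stand-in for the operation under load, not a real API:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(n):
    """Hypothetical stand-in for the operation under load."""
    time.sleep(0.01)                     # simulate some work
    return n * 2

def timed_call(n):
    start = time.perf_counter()
    handle_request(n)
    return time.perf_counter() - start   # response time for one request

# Apply a load of 100 requests with 20 concurrent workers and report timings.
with ThreadPoolExecutor(max_workers=20) as pool:
    durations = list(pool.map(timed_call, range(100)))

print(f"avg {sum(durations)/len(durations):.3f}s, worst {max(durations):.3f}s")
```

A stress or soak variant of the same sketch would simply raise the worker count beyond the specified limit, or keep the loop running for hours.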
Parallel Testing – Carried out to ensure that the functionality of a new version is consistent with the functionality of the previous version.
Mutation Testing – Testing carried out in order to check the completeness or effectiveness of the test cases or test data in finding error. It is carried out by making certain changes in the code (Bug) and re-executing the same test cases or test data.
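For instance (an illustrative sketch, not from the source notes), seeding a deliberate bug into a copy of a function and re-running the same tests shows whether the test data is strong enough to detect it:

```python
def is_eligible(age):
    return age >= 18               # original code

def is_eligible_mutant(age):
    return age > 18                # mutant: ">=" deliberately changed to ">"

tests = [(17, False), (18, True), (25, True)]

for age, expected in tests:
    assert is_eligible(age) == expected     # all cases pass on the original

# A strong test set "kills" the mutant: at least one case fails against it.
killed = any(is_eligible_mutant(age) != expected for age, expected in tests)
print("mutant killed" if killed else "mutant survived - test data is too weak")
```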
Integration Testing – It is the testing process in which individual software modules are combined and tested as a group.
Types of integration Testing – Unit integration testing, Module integration testing & system integration testing.
Thread testing – Testing the interaction between groups of modules in a sequential way.
Loop Testing – A white-box testing technique in which each and every loop in a program is executed.
Path Testing – Testing in which every path in the program source code is tested at least once.
Stub – temporary replacement of a unit or module.
Driver – A temporary piece of code that invokes the module under test and passes test data to it (used when higher-level modules are not yet available).
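An illustrative sketch (the module and class names are invented): the stub stands in for a lower-level module that is not ready yet, while the driver is throwaway code that calls the module under test:

```python
# Module under test: depends on a payment gateway that is not built yet.
def place_order(amount, gateway):
    if gateway.charge(amount):
        return "order confirmed"
    return "payment failed"

# Stub: temporary replacement for the real payment-gateway module.
class PaymentGatewayStub:
    def charge(self, amount):
        return amount <= 500       # canned behaviour instead of real logic

# Driver: temporary code that invokes the module under test and checks results.
def driver():
    stub = PaymentGatewayStub()
    assert place_order(100, stub) == "order confirmed"
    assert place_order(900, stub) == "payment failed"
    print("place_order works against the stubbed gateway")

driver()
```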
Decremental Testing – Testing from higher modules to lower modules.
Software Configuration management (SCM) – It is the set of procedures followed to manage an evolving software.
Version control – It is the process of managing and maintaining multiple revisions of the same file in an archive by accurately tracking and recording changes to project files.
Compatibility Testing – Testing how an application works along with the other elements (software, hardware, etc.) in a system (suitability).
Concurrency Testing – Testing how an application performs when multiple users try to access the same application simultaneously.
Conformance Testing – Testing an application to prove that it satisfies all the requirements and meets the standards.
Conversion Testing - Testing of programs or procedures used to convert data from existing systems for use in replacement systems.
Comparison Testing – Comparing the software's weaknesses and strengths to those of competing products.
Dependency Testing – Testing an application's dependencies on other software, initial states and configuration for the proper functioning of the software.
Depth Testing – Testing each and every feature of an application in depth.
Component testing – Testing each and every component in an application individually (Unit testing).
Dynamic Testing – Testing by executing an application.
Static Testing – Testing without executing (Review of application).
Reliability Testing – Continuous running of the product for 2-3 days and computation of reliability indicators such as MTBF (Mean Time Between Failures), MTTF (Mean Time To Failure) and MTTR (Mean Time To Repair/Recover).
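As a quick worked illustration (the figures are made up), the indicators relate roughly as MTBF = MTTF + MTTR, and availability can be estimated from them:

```python
# Hypothetical figures from a multi-day reliability run.
mttf_hours = 70.0    # Mean Time To Failure (average uptime between failures)
mttr_hours = 2.0     # Mean Time To Repair/Recover

mtbf_hours = mttf_hours + mttr_hours      # Mean Time Between Failures
availability = mttf_hours / mtbf_hours    # fraction of time the system is up

print(f"MTBF = {mtbf_hours} h, availability = {availability:.2%}")  # ~97.22%
```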
Installation Testing – Testing an application by installing, Uninstalling and upgrading it in real time environment.
Recovery Testing – Checking the efficiency of an application to recover from expected and unexpected events.
GUI Audit – A GUI audit shall be conducted before the start of testing and after finalization of the screens. It shall focus on usability, navigation between screens, data validations, data integrity conditions, field tests for date fields, numeric fields and text fields, and consistency among screens.
Installation/Uninstallation Testing – Installation/uninstallation testing shall cover full, partial and upgrade installation/uninstallation processes.
User Manual Based Testing – Testing performed against the User Manual supplied by the customer.
Security Testing – Testing carried out to confirm that the program prevents unauthorized persons from accessing it and allows authorized personnel to access only the functions available to their security level.
Structural testing – It is a testing technique where in the test case selection is based on the analysis of the internal structure of the component under test.
Advantages of test automation:
Speed
Reliability
Repeatability
Reusability
Test case format – Modification log, Table of contents, Objective, Validation, Messages (Warning, Error, Confirmation).
Test case content – Test case ID, Description, Expected and actual value, Status, Tester name, Last modified.
Bootstrapping - It is the process of using a special process to perform a task that one would be unable to do in general.
Bug life cycle – Various stages through which a bug pass from its birth to death (New, Assigned, Opened, Fixed, Reopen, Closed, Rejected, Pending, Postponed and Deferred - Refer to the printed notes).
Quality assurance - QA is defined as a procedure or set of procedures intended to ensure that a product or service under development (before work is complete, as opposed to afterwards) meets specified requirements.
Quality Control – The process of comparing the manufactured product with the existing standards to prove that the product meets the customer's requirements completely. If there is any deviation, the necessary steps are taken to correct it.
Differentiate between verification and validation?
1. Verification is a static testing procedure. Validation is a dynamic testing procedure.
2. Verification involves reviewing the requirements, detailed design documents, test plans, and the walkthroughs and inspections of the various documents produced during the development and testing process. Validation involves actual testing of the product as per the test plan (unit test, integration test, system test, acceptance test, etc.).
3. Verification is a preventive procedure. Validation is a corrective procedure.
4. Verification asks "Are we building the product RIGHT?" Validation asks "Are we building the RIGHT product?"
5. Verification involves two, three or more persons and is a group activity. Validation involves the testers and sometimes the users.
6. Verification is also called human testing, since it involves finding errors by persons participating in a review or walkthrough. Validation is also called computer testing, since errors are found by executing the software on a computer.
7. Verification occurs on requirements, design and code. Validation occurs only on code and the executable application.
8. Verification is performed on both executable and non-executable forms of a work product. Validation is performed only on executable forms of a work product.
9. Verification finds errors early in the requirements and design phases and hence reduces the cost of errors. Validation finds errors only during the testing stage, so the cost saving is less than with verification.
10. An effective tool for verification is a checklist. Various manual and automated test tools are available for validation.
11. Verification requires cooperation and the scheduling of meetings and discussions. Validation checks that the product satisfies the requirements and is accepted by the user.
12. Verification tasks include: 1) Planning 2) Execution. Validation tasks include: 1) Planning 2) Testware development 3) Test execution 4) Testware maintenance.
13. Verification activities include: 1) Requirements verification 2) Functional design verification 3) Internal design verification 4) Code verification. Validation activities include: 1) Unit testing 2) Usability testing 3) Function testing 4) System testing 5) Acceptance testing.
14. Verification deliverables (work products) are: 1) Verification test plan 2) Inspection report 3) Verification test report. Validation deliverables are: 1) Test plan 2) Test design specification 3) Test case specification 4) Test procedure specification 5) Test log 6) Test incident report.
Types of Reviews:
1) In-Process Reviews
• Assess progress towards requirements
• During a specific period of the development cycle – like design period
• Limited to a segment of the product
• Used to find defects in the work product and the work process
• Catches defects early – where they are less costly to correct.
2) Phase-end Reviews (Milestone or Decision-point Reviews)
• Review of products and processes near the completion of each phase of development
• Decisions for proceeding with development are based on cost, schedule, risk, progress and readiness for the next phase
• Also referred to as Milestone Reviews
• Include the Software Requirements, Critical Design, Test Readiness and Test Completion Reviews
Software Requirements Review
• Requirements documented
• Baseline established
• Analysis areas identified
• Software development plan
• Test plan
• Configuration management plan derived
Critical Design Review
• Baselines the detailed design specification
• Test cases are reviewed and approved
• Usually, coding will begin at the close of this phase.
Test Readiness Reviews
• Performed when the appropriate modules are near completion
• Determines whether or not testing should progress based on a review of entrance and exit criteria
• Determines the readiness of the application/project for system and acceptance testing
Test Completion Reviews
• Determine the state of the software product
3) Post Implementation Reviews
• Also known as “Postmortems”
• Review/evaluation of the product that includes planned vs. actual development results and compliance with requirements
• Used for process improvement of software development
• Can be held up to three to six months after implementation
• Conducted in a formal format
Classes of Reviews:
• Informal (Also called peer reviews)
o Generally a one-on-one meeting between the author of a work product and a peer
o Initiated as a request for input
o No agenda
o Results are not formally reported
o Occur as needed throughout each phase
• Semiformal (Also called walkthroughs)
o Facilitated by the author
o Presentation is made with comments at the end or with comments made throughout
o Issues raised are captured and published in a report distributed to participants
o Possible solutions for defects are not discussed
o Occur one or more times during a phase
• Formal
o Facilitated by a moderator (not author)
o The moderator is assisted by a recorder
o Defects are recorded and assigned
o Meeting is planned
o Materials are distributed beforehand
o Participants are prepared- their preparedness dictates the effectiveness of the review
o Full participation by all members of the reviewing team is required
o A formal report captures issues raised and is distributed to participants and management
o Defects found are tracked through the defect tracking system and followed through to resolution
o Formal reviews may be held at any time
White box testing:
Testing carried out by developers who have knowledge of the code and the internal logic of the application.
White box testing Techniques
a) Statement Coverage
Statement/Line/Segment/Basic block coverage is a test case design technique in which test cases are designed to execute every statement in the unit under test at least once.
b) Decision Coverage
Decision/Branch coverage is a test case design technique in which test cases are designed to execute all the outcomes of every decision.
c) Condition Coverage
Condition coverage is a test case design technique in which test cases are designed so that each condition within every decision takes both its true and false outcomes.
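A small illustrative example (not from the source notes) showing how the three coverage levels differ for the same function:

```python
def discount_price(is_member, total):
    """Apply a 10% member discount on orders over 100 (illustrative only)."""
    rate = 0
    if is_member and total > 100:    # one decision made of two conditions
        rate = 10
    return total - total * rate // 100

# Statement coverage: one test taking the true branch executes every statement.
assert discount_price(True, 200) == 180

# Decision coverage: additionally exercise the false outcome of the decision.
assert discount_price(False, 200) == 200

# Condition coverage: each condition (is_member, total > 100) takes both
# true and false outcomes across the test set.
assert discount_price(True, 50) == 50
assert discount_price(False, 50) == 50
```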
Black Box testing:
It is the process of testing an application without knowing the internal structure and logic of the program.
Black Box Techniques:
A) Equivalence Partitioning
o A subset of data that is representative of a larger class
o For example, a program which edits credit limits within a given range ($10,000 - $15,000) would have 3 equivalence classes (see the sketch below):
o Less than $10,000 (invalid)
o Between $10,000 and $15,000 (valid)
o Greater than $15,000 (invalid)
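One representative value from each class is enough to cover the partition; a minimal sketch in Python of the credit-limit example (the validator is an invented stand-in):

```python
def credit_limit_is_valid(limit):
    """Hypothetical stand-in for the routine that edits credit limits."""
    return 10_000 <= limit <= 15_000

# One representative test value per equivalence class:
assert credit_limit_is_valid(5_000) is False     # class 1: below $10,000 (invalid)
assert credit_limit_is_valid(12_000) is True     # class 2: $10,000-$15,000 (valid)
assert credit_limit_is_valid(20_000) is False    # class 3: above $15,000 (invalid)
```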
B) Boundary Analysis
A technique that consists of developing test cases and data that focus on the input and output boundaries of a given function.
In the same credit limit example, boundary analysis would test:
o Low boundary plus or minus one ($9,999 and $10,001)
o On the boundary ($10,000 and $15,000)
o Upper boundary plus or minus one ($14,999 and $15,001)
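The boundary values above translate directly into test data; a minimal sketch (same invented validator as in the equivalence-partitioning example):

```python
def credit_limit_is_valid(limit):
    """Hypothetical stand-in for the routine that edits credit limits."""
    return 10_000 <= limit <= 15_000

# Boundary-value tests around $10,000 and $15,000:
assert credit_limit_is_valid(9_999) is False     # just below the lower boundary
assert credit_limit_is_valid(10_000) is True     # on the lower boundary
assert credit_limit_is_valid(10_001) is True     # just above the lower boundary
assert credit_limit_is_valid(14_999) is True     # just below the upper boundary
assert credit_limit_is_valid(15_000) is True     # on the upper boundary
assert credit_limit_is_valid(15_001) is False    # just above the upper boundary
```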
C) Error Guessing
Based on the theory that test cases can be developed from the intuition and experience of the test engineer.
For example, in an example where one of the inputs is the date, a test engineer might try February 29, 2000 or 9/9/99
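A short illustrative sketch of error-guessing inputs for a date field, using Python's standard datetime parser as a stand-in for the routine under test:

```python
from datetime import datetime

def parse_date(text):
    """Hypothetical stand-in for the date-input routine under test."""
    return datetime.strptime(text, "%m/%d/%Y")

# Inputs a tester might guess at based on experience:
guesses = ["02/29/2000",   # leap day in a century leap year
           "02/29/1999",   # leap day in a non-leap year
           "13/01/2020",   # month out of range
           "09/09/99"]     # two-digit year

for guess in guesses:
    try:
        parse_date(guess)
        print(f"{guess}: accepted")
    except ValueError:
        print(f"{guess}: rejected")
```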
TEST TYPES
Test types are tests that a test stage could include.
Functional testing – Testing of functional requirements, as specified in the Functional specification.
Process testing – Testing of a sequence of functions that make up a work process.
Non-functional testing – Testing of non-functional requirements.
Performance testing – Testing conducted to evaluate the compliance of a system, module or unit with specified performance requirements. Could also be included as part of non-functional testing.
Stress testing – Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements.
Interface testing – Testing of the system's interfaces to other systems, as specified in the Functional specification.
Integrity testing – Testing of data integrity, including testing of related functionality not changed by the project, to verify that it still works as expected after new functionality is implemented.
Other test types – Projects are free to define other test types as well.
TEST APPROACHES
Test approaches are different ways of carrying out a test stage. Test approaches can be combined.
Requirements based testing – All requirements should be tested.
Regression testing – Selective retesting of a system, module or unit to verify that modifications have not caused unintended effects and that the system, module or unit still complies with its specified requirements.
Criticality based testing – Selective testing based on criticality.
Risk based testing – Selective testing based on criticality and risk (risk = consequence * probability).
Iterative testing – The test is executed in iterations, e.g. test execution is restarted approximately every week and should be completed, including the correction of errors, within the week. The length of an iteration may vary based on the test type and the errors found.
Other test approaches – Projects are free to define other test approaches as well.
Test Case ID: It is unique number given to test case in order to be identified.
Test description: The description of the test case you are going to execute.
Revision history: Each test case has to have its revision history in order to know when and by whom it is created or modified.
Function to be tested: The name of function to be tested.
Environment: The environment in which you are testing.
Test Setup: Anything you need to set up outside of your application for example printers, network and so on.
Test Execution: A detailed description of every step of the execution.
Expected Results: The description of what you expect the function to do.
Actual Results: Pass or Fail. If it passes, record what actually happened when you ran the test; if it fails, put in a description of what you observed.
Sample Test case
Here is a simple test case for applying bold formatting to a text.
- Test case ID: B 001
- Test Description: verify B - bold formatting to the text
- Revision History: 3/23/00 1.0 - Valerie - Created
- Function to be tested: B - bold formatting to the text
- Environment: Win 98
- Test setup: N/A
- Test Execution:
- Open program
- Open new document
- Type any text
- Select the text to make bold.
- Click Bold
- Expected Result: Applies bold formatting to the text
- Actual Result: pass
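Purely as a hypothetical sketch (the Editor class and its methods below are invented for illustration and are not part of any real application), the same test case could be automated roughly like this:

```python
class Editor:
    """Invented minimal text-editor model used only to illustrate automation."""
    def __init__(self):
        self.text = ""
        self.selection = ""
        self.bold = False

    def type_text(self, text):
        self.text = text

    def select_all(self):
        self.selection = self.text

    def click_bold(self):
        self.bold = True

def test_b001_bold_formatting():
    editor = Editor()               # open program / open new document
    editor.type_text("any text")    # type any text
    editor.select_all()             # select the text to make bold
    editor.click_bold()             # click Bold
    assert editor.bold is True      # expected result: bold formatting applied

test_b001_bold_formatting()
print("Test case B 001: pass")
```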
What are the different types of bugs we normally see in a project? Include the severity as well.
1. User Interface Defects -------------------------------- Low
2. Boundary Related Defects ------------------------------- Medium
3. Error Handling Defects --------------------------------- Medium
4. Calculation Defects ------------------------------------ High
5. Improper Service Levels (Control flow defects) --------- High
6. Interpreting Data Defects ------------------------------ High
7. Race Conditions (Compatibility and Intersystem defects)- High
8. Load Conditions (Memory Leakages under load) ----------- High
9. Hardware Failures:-------------------------------------- High