Saturday, May 3, 2014

Basic Manual Testing Concepts

                    Manual Testing Concepts


Technical Factors:
· Meets customer requirements in terms of functionality.
· Meets customer expectations in terms of performance, usability, security, etc.
Non-Technical Factors:
· Reasonable cost to purchase.
· Timely release.

3.1 Software Quality Assurance (SQA):

Monitoring and measuring the strength of the development process is called Software Quality Assurance. Ex: Life Cycle Testing.

3.2 Software Quality Control (SQC):

The validation of the final product before releasing it to the customer is called Software Quality Control (SQC).

BRS: The Business Requirement Specification defines the requirements of the customer to be developed as software. This document is also known as the Customer Requirement Specification (CRS) or User Requirement Specification (URS).
SRS: The Software Requirement Specification defines the functional requirements to be developed and the system requirements to be used (hardware and software).
Example: the BRS defines addition (the customer requirement); the SRS defines how to solve that customer requirement.
Review: It is a static testing technique. In this review responsible people will estimate completeness and correctness of corresponding documents.
HLD: High Level Design document defines the overall architecture of the system from root functionalities to leaf functionalities. This HLD is also known as Architectural Design or External Design.
LLD: Low Level Design document defines the internal logic of corresponding module (or) functionality. The LLD is also known as Internal Logic Design document.
Prototype: A sample model of an application without functionality is called prototype.
Program : A set of executable statements is called a Program.
Module : A set of programs is called as a Module or Unit.
Build : The set of modules is called as Software Build or Product.
White Box Testing: It is a coding-level testing technique used to verify the completeness and correctness of the program structure. Programmers follow this technique. It is also known as Glass Box Testing (or) Clear Box Testing (or) Open Box Testing.
Black Box Testing: It is a build-level testing technique. In this testing, test engineers validate every feature of the build through its external interface.
Software Testing: The Verification and Validation of a software application is called software testing.
Verification: Are we building the product right?
Validation: Are we building the right product?

3.4 Unit Testing

After completion of design and their reviews, programmers will concentrate on coding to construct software physically. In this phase programmers will test every program through a set of white box testing techniques w.r.t LLD.
1. Basis path testing.
2. Control structure testing.
3. Program technique testing (Time).
4. Mutation Testing

3.4.1 Basis Path Testing

In this coverage, programmers verify that the program executes without any syntax or run-time errors. Programmers execute the program more than once, with different inputs, so that all areas of the program code are covered while running.
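To make this concrete, here is a minimal sketch in Python (the discount() function and its inputs are hypothetical, not taken from any project): the same program is executed more than once so that every path through its code is exercised.

    # A hypothetical function with two independent paths through its code.
    def discount(amount):
        if amount > 1000:        # path 1: large order gets a discount
            return amount * 0.9
        else:                    # path 2: small order is unchanged
            return amount

    # Executing the program more than once, with different inputs,
    # so that both paths run without syntax or run-time errors.
    assert discount(2000) == 1800.0   # exercises the "if" path
    assert discount(500) == 500       # exercises the "else" path
    print("both basis paths executed successfully")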

3.4.2 Control Structure Testing

In this coverage, programmers concentrate on the correctness of the program's functionality. They check every statement in the program, including variable declarations, IF conditions, loops, etc.

3.4.3 Program Technique Coverage

In this coverage, programmers verify the execution time of the program to improve processing speed. If the execution time is not reasonable, programmers change the structure of the program without disturbing its functionality.

3.4.4 Mutation Testing

After completing a program's testing, the corresponding programmer reviews the completeness and correctness of that testing. Mutation means a change in the coding of the program. In mutation testing, programmers make changes in various areas of the program and repeat the previously completed tests. If all the tests still pass on the changed program, the earlier testing was incomplete, and the programmer continues testing that program. If any one of the tests fails on the changed program, the earlier testing was complete, and the programmer concentrates on further coding.
Note: among the white box testing techniques, the first three test the program code, while mutation testing estimates the completeness and correctness of the tests performed on the program.
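As an illustration, the sketch below (in Python, with a hypothetical discount() function and a deliberately weak set of tests) shows how a surviving mutant signals that the earlier testing was incomplete.

    def discount(amount):
        # original program
        return amount * 0.9 if amount > 1000 else amount

    def discount_mutant(amount):
        # mutation: the ">" operator has been changed to ">="
        return amount * 0.9 if amount >= 1000 else amount

    previously_completed_tests = [(2000, 1800.0), (500, 500)]

    # Repeat the previously completed tests on the changed program.
    all_pass = all(discount_mutant(inp) == exp for inp, exp in previously_completed_tests)
    if all_pass:
        print("mutant survived: the earlier tests are incomplete (the 1000 boundary was never tested)")
    else:
        print("mutant killed: the earlier tests detected the change")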

3.5 Integration Testing

After completion of dependent program development and unit testing, programmers interconnect the programs to construct a complete software build. In this stage programmers verify the integration of programs using one of four approaches.
a. Top Down Approach.
b. Bottom Up Approach.
c. Hybrid Approach.
d. System Approach.

3.5.1 Top Down Approach

In this approach, programmers interconnect the main module with some of the sub modules; in place of the remaining (under-construction) sub modules, programmers use temporary programs called stubs.

3.5.2 Bottom up Approach

In this approach, programmers interconnect the sub modules without connecting them to the main module. In place of the main module, programmers use a temporary program called a driver.
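The sketch below illustrates both ideas in Python; the billing main module, the tax sub module, the stub and the driver are all hypothetical names used only for this example.

    def tax_stub(amount):
        # Stub (top-down): temporary stand-in for the unfinished tax sub module.
        return 0.0

    def billing_main(amount, tax_fn):
        # Main module, wired to whichever tax implementation is available.
        return amount + tax_fn(amount)

    # Top-down: the main module is tested with the stub in place of the sub module.
    assert billing_main(100.0, tax_stub) == 100.0

    def tax_module(amount):
        # Finished sub module (bottom-up integration starts from here).
        return round(amount * 0.18, 2)

    def driver():
        # Driver (bottom-up): temporary caller used in place of the main module.
        return tax_module(100.0)

    assert driver() == 18.0
    print("stub-based and driver-based integration checks passed")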

3.5.3 Hybrid Approach

It is a combined approach of Top Down and Bottom Up approaches. This approach is also known as Sandwich approach.

3.5.4 System Approach

It is also known as Final Integration (or) Big Bang approach. In this approach, programmers interconnect the programs only after completion of total development.
Note: In general, programmers interconnect programs through any one of the above methods depending on circumstances.

3.6 System Testing

After completion of integration testing and receiving the build from the development team, the testing team concentrates on system testing, conducted using black box testing techniques.
System Testing is divided into 3 sub stages.
1. Usability Testing.
2. Functional Testing
3. Non-Functional Testing.

3.6.1 Usability Testing

After receiving the software build from the development team, the testing team conducts usability testing. In this test the testing team estimates the user-friendliness of all screens in the software build. There are two sub tests.

3.6.1.1 User Interface Testing or UI Testing

In this test, the testing team applies the three factors below to every screen of the software build.
· Ease of use: to estimate the understandability of the screen.
· Look and feel: to estimate the attractiveness of the screen.
· Speed in interface: to estimate that navigation is kept short.

3.6.1.2 Manual Support Testing

During this test the testing team will validate the correctness and completeness of help documents. These help documents are also known as User Manuals.


3.6.2 Functional Testing

It is a mandatory testing level in the testing team's responsibilities. During this test, the testing team concentrates on meeting customer requirements through the sub tests below.
a. Requirement Testing.
b. Sanitation Testing.

3.6.2.1 Requirements Testing

It is also known as Functionality Testing. During this test the responsible testing team will apply different coverage techniques as discussed below on the functionalities of software build.
· GUI Coverage / Behavioral Coverage: Changes in properties of objects in screens while operating.
· Error Handling Coverage: To prevent wrong operation on screens.
· Input Domain Coverage: Testing correct type and size of input values
· Manipulations Coverage: Returning correct output values.
· Back End Coverage: Valid impact of screens operations on back end data base tables.
· Functionalities Order Coverage: The arrangements of screens in the software build with respect to order of functionalities.

3.6.2.2 Sanitation Testing

During this test the testing team will concentrate on extra functionalities with respect to requirements of the customer. This testing is also known as garbage testing.

3.6.3 Non-Functional Testing

After completion of user interface and functional testing, the testing team concentrates on non-functional testing to validate the quality characteristics of the software build, such as security and performance.

3.6.3.1 Recovery Testing

This testing is also known as Reliability Testing. During this test, the testing team validates whether the software build recovers from an abnormal state to a normal state.

3.6.3.2 Compatibility Testing

It is also known as Portability Testing. During this test, the testing team validates whether the software build runs on the customer-expected platforms or not.

3.6.3.3 Configuration Testing

It is also known as hardware compatibility testing. During this test, the testing team validates whether the software build supports hardware devices of different technologies or not.
Example: Different technology printers.
Different topology networks, etc….

3.6.3.4 Inter Systems Testing

It is also known as End-to-End Testing. During this test, the testing team validates whether the software build co-exists with other software applications to share common resources.
Example: Sharing data, sharing hardware devices, printers, speakers, sharing memory, etc….

3.6.3.5 Installation Testing

During this test, the testing team establishes a customer-site-like configured environment. The testing team then practices installation of the software build into that environment.

3.6.3.6 Load Testing

The execution of the software build under the customer-expected configuration and the customer-expected load, to estimate the speed of processing, is called load testing. Here, load means the number of concurrent users working on the software. This is also known as scalability testing.
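A minimal sketch of the idea in Python is shown below; process_request() is only a stand-in for the real application, and the figure of 10 concurrent users is an assumed customer-expected load.

    import time
    from concurrent.futures import ThreadPoolExecutor

    def process_request(user_id):
        time.sleep(0.05)              # stand-in for real server-side work
        return f"ok-{user_id}"

    expected_load = 10                # assumed customer-expected number of concurrent users
    start = time.time()
    with ThreadPoolExecutor(max_workers=expected_load) as pool:
        results = list(pool.map(process_request, range(expected_load)))
    elapsed = time.time() - start

    print(f"{len(results)} concurrent users served in {elapsed:.2f} seconds")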

3.6.3.7 Stress Testing

The execution of the software build under the customer-expected configuration and various load levels, from low to peak, is called stress testing. In this testing, the testing team concentrates on how the software build handles the load.

3.6.3.8 Storage Testing

Testing whether the system meets its specified storage objectives.
Testing the data of different formats and in different devices. Verifying the efficiency of data storage in devices and proper retrieval of the data.

3.6.3.9 Data Volume Testing

Volume testing refers to testing a software application with a certain amount of data. This amount can, in generic terms, be the database size or it could also be the size of an interface file that is the subject of volume testing. For example, if you want to volume test your application with a specific database size, you will expand your database to that size and then test the application’s performance on it.
Example: MS Access technology supports a maximum database size of 2 GB.

3.6.3.10 Parallel Testing

It is also known as Comparative Testing. During this test, the testing team compares the software build with other competitive software in the market, or with an old version of the same software build, to estimate completeness. This is applicable only to software products, not to software applications.

3.7 User Acceptance Testing (UAT)

After completion of all possible functional and non-functional tests, the project manager concentrates on user acceptance testing to collect feedback from customer-site people. There are two approaches to conduct UAT: Alpha testing and Beta testing.
Alpha Test: 1. For software applications. 2. Conducted at the development site. 3. Conducted by real customers.
Beta Test: 1. For software products. 2. Conducted in a customer-site-like environment. 3. Conducted by customer-site-like people.
In both approaches, the goal is to collect feedback.

3.8 Testing during maintenance

After completion of user acceptance testing and the resulting modifications, project management concentrates on forming a release team with a few developers, a few testers and a few hardware engineers. This release team goes to the customer site and conducts port testing.
During port testing, the release team concentrates on the factors below at the customer site.
· Compact installation
· Overall functionality
· Input devices handling
· Output devices handling
· Secondary storage devices handling
· Co-existence with other software to share common resources
· Operating system error handling
After completion of port testing, the release team provides training sessions to customer-site people.
During utilization of the software, customer-site people send change requests to our organization. There are two types of change request to be solved.

Monkey Testing
A Test Engineer conducting a test on application build through the coverage of main activities only is called monkey testing or chimpanzee testing.
Exploratory Testing
A tester conducts testing on an application build by covering activities level by level.
Ad-Hoc Testing
A tester conducts a test on an application build with respect to predetermined ideas; this is called ad-hoc testing.
Bigbang Testing
An organization conducting a single stage of testing after completion of entire modules development is called big bang testing or informal testing.
Incremental Testing
An organization follows the multiple stages of testing process from document level to system level is called incremental testing or formal testing.
Example: LCT (life cycle testing).
Sanity Testing
Whether the build released by the development team is stable for complete testing to be applied or not?
This observation is called sanity testing or tester acceptance testing (TAT) or build verification testing (BVT).
Smoke Testing
An extra shake-up over sanity testing is called smoke testing. In this phase the test engineer tries to find the reason why the build is not working, before testing starts.
Static versus Dynamic Testing
A tester conducting a test on the application build without running it is called static testing.
Example: usability, alignment, font, style, etc.
A tester conducting a test through the execution of the application build is called dynamic testing.
Example: functional, performance and security testing.
Manual Vs Automation Testing
A test engineer conducts a test on an application build without using any third-party testing tool; this is called manual testing.
A tester conducts a test on an application build with the help of a testing tool; this is called automation testing.

Test impact indicates test repetition with multiple test data.
Example: functionality testing.
Test criticality indicates the complexity of executing the test manually.
Example: load testing.
Re-Testing
The re-execution of a test on the same application build with multiple test data is called re-testing.
Ex: a multiplication functionality, tested with multiple combinations of input values, where
Expected: Result = Input 1 * Input 2
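A small sketch of re-testing in Python (the multiply() function is a stand-in for the real functionality): the same test is re-executed on the same build with multiple sets of test data.

    def multiply(a, b):
        # stand-in for the functionality under test
        return a * b

    test_data = [(2, 3, 6), (0, 5, 0), (-4, 2, -8), (10, 10, 100)]
    for input1, input2, expected in test_data:
        actual = multiply(input1, input2)
        status = "Passed" if actual == expected else "Failed"
        print(f"{input1} * {input2}: expected {expected}, actual {actual} -> {status}")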
Regression Testing
The re-execution of selected tests on a modified build, to ensure that the bug fixes work and that no side effects have occurred, is called regression testing.

Error : A mistake in coding is called an error.
Defect: A mismatch found by a test engineer during testing, due to mistakes in coding, is called a defect or issue.
Bug : A defect accepted by developers to be solved is called a bug.

5.1 Test Policy

It is a company-level document developed by quality control people (management level). This document defines the testing objectives to be achieved.
A typical test policy document contains:
· Address of the company
· Location of the company
· Testing definition: Verification + Validation
· Testing process: proper planning before testing starts
· Testing standard: 1 defect per 250 LOC / 1 defect per 10 FP
· Testing measurements: QAM, TMM, PCM
· Signature of the C.E.O.
LOC – Lines of code.
FP – Function point.
Example: number of screens / number of forms / number of reports / number of inputs / number of outputs / number of queries.
Using function points we can estimate the size of a project.
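As a rough worked example of the testing standard above, assuming a hypothetical module of 5,000 lines of code and 40 function points, the standard would allow at most:

    loc, fp = 5000, 40
    max_defects_by_loc = loc / 250   # 1 defect per 250 LOC -> 20 defects
    max_defects_by_fp = fp / 10      # 1 defect per 10 FP   -> 4 defects
    print(max_defects_by_loc, max_defects_by_fp)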
QAM - Quality Assessment Measurements.
TMM - Test Management Measurements.
PCM - Process Capability Measurements.

5.2 Test Strategy

It is also a company-level document, developed by quality analyst people (project manager level). This strategy document defines the testing approach to be followed by the testing team.
Scope and objective: the purpose of testing in our organization.
Business issues: budget control for testing.
Testing approach: the mapping between development stages and testing issues (Ex: V-model). This mapping is documented as a matrix whose columns are the development stages (Information Gathering & Analysis, Design, Coding, System Testing, Maintenance) and whose rows are the testing issues (Ease of Use, Authorization, etc.); an 'X' marks the stage in which an issue is tested, and entries in the Maintenance column depend on change requests. This matrix is called the Test Responsibility Matrix (TRM) or Test Matrix (TM).

Test deliverables: names of the testing documents to be prepared by the testing team during every project's testing.
Roles and responsibilities: names of the jobs in the testing team and the responsibilities of every job during testing.
Communication and status reporting: the required negotiation between every two consecutive jobs in the testing team.
Test automation and testing tool: the purpose of automation and the availability of testing tools in the organization.
Defect reporting and tracking: the required negotiation between the testing team and the development team when testers find mismatches during testing.
Testing measurements and metrics: QAM, TMM, PCM.
Risks and mitigations: expected failures raised during testing and solutions to overcome them.
Change and configuration management: how to handle sudden changes in customer requirements during testing.
Training plan: the required number of sessions for the testing team to understand customer requirements.
TEST FACTORS OR TESTING ISSUES
To define quality software, software engineering people use 15 factors/issues.
Authorization: whether a user is valid or not to connect to the application.
Access control: whether a valid user has permission to use a specific service or not.
Audit trail: maintains metadata about operations.
Continuity of processing: integration of modules.
Correctness: meets customer requirements in terms of functionality.
Coupling: co-existence with other software applications to share common resources.
Ease of use: user-friendliness of screens.
Ease of operation: installation, uninstallation, dumping, downloading, uploading, etc.
File integrity: creation of backups during operations.
Reliability: recovery from abnormal situations.
Portable: runs on different platforms.
Performance: speed of processing.
Service levels: order of functionalities.
Maintainable: whether the application build is serviceable to customer-site people for a long time or not.
Methodology: whether the testing team is following standards or not (during testing).


TEST FACTORS VS TESTING TECHNIQUES
Authorization: security testing (separate testing team); functionality/requirement testing (common testing team).
Access control: security testing (separate testing team); functionality/requirement testing (common testing team).
Audit trail: functionality/requirement testing.
Continuity of processing: integration testing (top down / bottom up / hybrid).
Correctness: functionality testing.
Coupling: inter-systems testing.
Ease of use: user interface testing, manual support testing.
Ease of operation: installation testing.
File integrity: functionality/requirement testing, recovery testing.
Reliability: recovery testing (1 user level), stress testing (peak load).
Portable: compatibility testing, configuration testing.
Performance: load testing, stress testing, storage testing, data volume testing.
Service levels: functionality/requirement testing (1 user level), stress testing (peak load level).
Maintainable: compliance testing.
Methodology: compliance testing.
Compliance testing: whether the testing team is following standards or not during testing is called compliance testing. Compliance here means following the complete plan.

5.3 Test Methodology

It is a project-level document. This document defines the required testing approach for the corresponding project's testing. Project-manager-level people develop the test methodology depending on the company-level test strategy. For this reason, the test methodology is also known as a refined form of the test strategy.
To develop the test methodology, the project manager / quality analyst follows the approach below before starting every project's testing.
Step 1 :- collect test strategy.
Step 2 :- identify current project type.
The project type (Traditional, Outsourcing, or Maintenance) is mapped against the development stages (Information Gathering & Analysis, Design, Coding, System Testing, Maintenance) to see which stages apply to the current project.
Note: depending on the project type, the project manager deletes some of the columns from the TRM (Test Responsibility Matrix) for this project's testing.
Step 3: study the project requirements.
Note: depending on the requirements in the project, the PM deletes unwanted factors (rows) from the TRM for this project's testing.
Step 4: determine the scope of the project requirements.
Note: depending on expected future enhancements, the PM adds back some of the previously deleted factors to the TRM for this project's testing.
Step 5: identify tactical risks.
Note: depending on the analyzed risks, the PM deletes some of the factors from the selected TRM for this project's testing.
CASE STUDY:
15  ← test factors
-3  ← deleted w.r.t. requirements
----
12
+1  ← added w.r.t. scope of requirements
----
13
-4  ← deleted w.r.t. risks
----
 9  ← finalized factors to be applied to this project
Step 6: finalize the TRM for the current project's testing.
Step 7: prepare the system test plan.
Step 8: prepare the module test plans.

TESTING PROCESS:

--------------------------------------------------------------------------------

PET PROCESS
This process was developed by HCL, Chennai. It is also a refined form of the V-model. This process defines the mapping between the development process and the testing process. In this process model, organizations maintain a separate testing team for functional and system testing; the remaining stages of testing are done by developers.
PET stands for Process Experts Tools and Technology.

5.4 Test Plan

After completion of test initiation and testing process finalization, test-lead-category people concentrate on test plan document preparation, covering "what to test?", "how to test?", "when to test?" and "who will test?". In test plan preparation the test lead follows the approach below.

Testing team formation: in general, the test planning process starts with testing team formation. In this stage the test lead depends on the factors below (these three factors are inter-dependent):
· Availability of test engineers
· Test duration
· Availability of test environment resources
Case study:
Test duration: client/server, web and ERP projects → 3 to 5 months of system testing; system software → 7 to 9 months; machine-critical software → 12 to 15 months.
Team size: developers : testers = 3 : 1
Identify tactical risks: after forming the testing team, the test lead analyzes risks at the level of the selected team. This risk analysis is also known as root cause analysis.
Ex: Risk 1 : lack of knowledge of testing team on that domain.
Risk 2 : lack of budget (time).
Risk 3 : lack of resources (testing tools not available)
Risk 4 : lack of test data (improper documents)
Risk 5 : delays in delivery
Risk 6 : lack of development process rigor
Risk 7 : lack of communication (between the testing team and the development team)
Prepare test plan: after completion of testing team formation and risk analysis, the test lead concentrates on test plan document preparation in the IEEE format (IEEE: Institute of Electrical and Electronics Engineers).
Format:
Test plan ID: unique number / name.
Introduction: about project
Test items: names of all modules in that project ex: website.
Features to be tested: new module names for test design. (What to test)
Features not to be tested: which ones and why not? (Copy test cases from server)
Approach: the list of testing techniques selected by the project manager to be applied to the above modules (the finalized TRM).
Testing tasks: necessary operations to perform before starting each module's testing.
Suspension criteria: possible problems that may be raised during the above modules' testing.
Ex: exception handling.
Feature pass/fail criteria: when a module passes and when a module fails.
Test environment: required hardware and software to conduct testing on the above modules. Ex: WinRunner.
Test deliverables: names of the testing documents to be prepared during the above modules' testing.
Ex: test cases, test procedures, test scripts, test logs and defect reports for every module.
Staff and training needs: names of selected test engineers for this project testing
Responsibilities: mapping between names of test engineers and names of modules.
(Work allocation)
Schedule: dates and times.
Risks and mitigations: raised problems during testing and solutions to overcome.
Approvers: signatures of project manager and test lead.
Review test plan
After completing the first copy of the test plan document, the test lead conducts a review of that document for completeness and correctness. In this review meeting the test lead concentrates on coverage analysis.
Coverage analysis:
· Business requirement based coverage (what to test?)
· TRM based coverage (how to test?)
· Risks based coverage (when & who will test?)
After finalization of the test plan, the test lead provides training sessions to the selected testing team on the project requirements.

5.5 Test Design

After finalization of the test plan and completion of the training sessions, test engineers concentrate on test case development for their responsible modules. There are three methods to prepare test cases:
· Business logic based test case design (depending on the SRS)
· Input domain based test case design (depending on design documents)
· User interface based test case design
Business logic based test case design
In general, test engineers prepare the maximum number of test cases depending on the use cases in the SRS. Every use case describes a functionality in terms of inputs, process and outputs.

From this model, test engineers prepare test cases depending on the use case. Every use case is also known as a functional specification. Every test case describes a testable condition to be applied to the build.
To study use cases, test engineers follow the approach below.
Step 1: collect the required use cases for the responsible modules.
Step 2: select a use case and its dependencies from the above collected list of use cases.
Step 2.1: identify the entry condition (base state)
Step 2.2: identify the required inputs (test data)
Step 2.3: identify the output and outcome (expected results)
Step 2.4: study the normal flow (navigation)
Step 2.5: study the end condition (end state)
Step 2.6: study the alternative flows and exceptions
Step 3: prepare test cases depending on the above study of the use case.
Step 4: review the test cases for completeness and correctness.
Step 5: go to Step 2 until the study of all use cases is complete.
Test case format
During test design, test engineers prepare test cases in the IEEE format. Through this format, test engineers document every test case.
Format:
Test case id: unique number/name.
Test case name: the name of the test condition.
Feature to be tested: the corresponding module or function name.
Test suite id: the id of the corresponding batch, of which this case is a member.
Priority: the importance of the test case in terms of functionality.
Ex: P0—basic functionality (requirements)
P1—general functionality (recovery, compatibility, inter-systems, load, etc.)
P2—cosmetic functionality (user interface)
Test environment: required hardware and software, including the testing tool, to execute this test case.
Test effort: (person-hours) the time to execute this test case.
Ex: 20 minutes on average.
Test duration: date and time.
Test setup: necessary tasks to do before starting this case's execution.
Test procedure: the step-by-step procedure from base state to end state.

Test case pass/fail criteria: when this case passes and when this case fails.
NOTE: in general, test engineers do not maintain the complete format for every test case, but they do maintain the test procedure as mandatory for every test case.
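For illustration, a single test case filled into this format might look like the sketch below (the "Login" module and all field values are hypothetical):

    test_case = {
        "test_case_id": "TC_Login_001",
        "test_case_name": "Verify login with a valid user id and password",
        "feature_to_be_tested": "Login",
        "test_suite_id": "TS_Login",
        "priority": "P0",                                  # basic functionality
        "test_environment": "Windows, Chrome browser",
        "test_effort": "20 minutes",
        "test_setup": "A valid user account already exists",
        "test_procedure": [
            "Open the login screen (base state)",
            "Enter a valid user id and password",
            "Click the Login button",
            "Verify that the home page is displayed (end state)",
        ],
        "pass_fail_criteria": "Pass if the home page opens; fail otherwise",
    }
    print(test_case["test_case_id"], "-", test_case["test_case_name"])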
Input domain based test case design
In general, test engineers prepare test cases depending on the use cases or functional specifications in the SRS. Sometimes they also depend on design documents, because use cases do not provide complete information about the size and type of input objects. For this reason, test engineers study the data models in the design documents.
Ex: ER diagrams (entity relationship diagrams)
In this study, test engineers follow the approach below.
Step 1: collect the data models of the responsible modules from the design documents. Ex: ER diagrams.
Step 2: study every input attribute in terms of size and type, with constraints.
Step 3: prepare BVA (boundary value analysis) and ECP (equivalence class partitioning) entries for every input attribute in the format below.
I/P Attribute | ECP (Valid) | ECP (Invalid) | BVA Size/Range (Min) | BVA Size/Range (Max)
This table is called a DATA MATRIX. This table provides information about every input object.
Step 4: identify the critical and non-critical inputs in the above list.
Critical inputs are involved in internal manipulations; non-critical inputs are used for printing purposes.
NOTE: If a test case covers an operation, test engineers prepare a step-by-step procedure from base state to end state. If a test case covers an object, test engineers prepare a data matrix.
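The sketch below shows a data matrix and boundary value checks for a hypothetical "age" field that is assumed to accept whole numbers from 18 to 60:

    data_matrix = {
        "attribute": "age",
        "ecp_valid": ["integers from 18 to 60"],
        "ecp_invalid": ["integers below 18", "integers above 60", "non-numeric values"],
        "bva_min": 18,
        "bva_max": 60,
    }

    def accepts_age(value):
        # stand-in for the real input validation under test
        return isinstance(value, int) and 18 <= value <= 60

    # Boundary value analysis: test at and around the minimum and maximum.
    for value in (17, 18, 19, 59, 60, 61):
        print(value, "->", "accepted" if accepts_age(value) else "rejected")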
User interface based test case design
To conduct usability testing, test engineers prepare test cases depending on global user interface conventions, our organization's rules, and the interests of customer-site people.
Example test cases:
Test case 1: spelling check
Test case 2: graphics check (alignment, font, style, color and the other Microsoft six rules)
Test case 3: meaningful error messages
Test case 4: accuracy of data display
Test case 5: accuracy of data in the database as a result of user input
Test case 6: accuracy of data in the database as a result of external factors
Ex: file attachments, export files, import files, etc.
Test case 7: meaningful help messages
NOTE: test cases 1 to 6 indicate user interface testing and test case 7 indicates manual support testing.
Test design review
Before receiving the build from the development team to start test execution, the test lead analyzes the completeness and correctness of the test cases prepared by the test engineers through a review meeting. In this review the test lead depends on coverage analysis:
—Business requirement based coverage
—Use cases based coverage
—Data model based coverage
—User interface based coverage
—Test responsibility matrix based coverage
At the end of this review, the test lead prepares the requirements traceability matrix (RTM). This matrix defines the mapping between customer requirements and the prepared test cases. This matrix is also known as the requirements validation matrix (RVM).

5.6 Test Execution

After completion of test design and its reviews, the testing team receives the initial build from the development team to start test execution.
Levels of test execution:


Level-0: testing on the initial build
Level-1: testing on the stable build
Level-2: testing on the modified stable build
Level-3: testing on the master build (ready to release)
Levels of Test Execution Vs Test Cases:
Level-0 → Initial build → all P0 test cases (basic functionality).
Level-1 → Stable build → all P0, P1 and P2 test cases as test batches.
Level-2 → Modified build → selected P0, P1 and P2 test cases w.r.t. the modifications.
Level-3 → Master build → selected P0, P1 and P2 test cases w.r.t. bug density.
Build version control
In general, the testing team receives builds from the development team with the help of existing network protocols.

From the above model, the testing team receives builds from the development team through FTP, accessing the softbase on the network server. The softbase on the server consists of old builds and modified builds; development people assign a unique version number to each build. This version numbering system is understandable to the testing team. For build version control, development people use version control tools.
Ex: VSS (Visual SourceSafe)
Test harness: test harness means readiness to start test execution.

Level-0 (sanity testing): after receiving the initial build from the development team, the testing team concentrates on sanity testing to estimate the stability of that build before complete testing is applied. In this preliminary testing, the testing team concentrates on the basic functionality of the build, covering the factors below:
· Understandability
· Operability
· Observability
· Controllability
· Consistency
· Simplicity
· Maintainability
· Automatability
Because of the above 8 factors, sanity testing is also known as testability testing or octangle testing, tester acceptance testing and build verification testing.
Test automation if possible: after receiving the stable build from the development team, test engineers create automated test scripts with the required checkpoints, where possible. Not all test cases are automatable; for this reason test engineers create automation test scripts only for repeatable and complex test cases.

Case study: in general, testing teams follow selective automation only. In selective automation, test engineers create test scripts using testing tools for functionality/requirement test cases and load/stress test cases, as summarized in the table below.
Test Execution Type | Testing Techniques | Testing Tools | Comments
Manual | UI testing, manual support testing | — | Easy to conduct
Manual / automation | Functionality testing | WinRunner, QTP, Robot, SilkTest | Basic functionality testing is repeatable
Manual | Recovery, compatibility, configuration, inter-systems, installation, sanitation and parallel testing | No tools in the market | —
Manual / with automation | Load and stress testing | LoadRunner, SQA LoadTest, Silk Performer, JMeter | Expensive and complex to conduct manually
Manual | Storage, data volume and security testing | No tools in the market for this type of testing | —
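As a minimal sketch of an automated functionality test case from the table above (a real project would drive a tool such as WinRunner, QTP or SilkTest against the application; the login() function here is only a hypothetical stand-in for the build):

    def login(user, password):
        # stand-in for the functionality exposed by the build
        return user == "admin" and password == "secret"

    def test_valid_login():
        assert login("admin", "secret") is True

    def test_invalid_login():
        assert login("admin", "wrong") is False

    if __name__ == "__main__":
        test_valid_login()
        test_invalid_login()
        print("repeatable functionality test cases passed")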
Level-1 (comprehensive testing): after receiving the stable build and completing all possible automation, the testing team arranges the test cases as batches. Every test batch consists of a set of dependent test cases. These test batches are also known as test suites or test sets. During test batch execution, test engineers prepare test log documents. A test log document consists of three types of entries (an illustrative log follows the list):
–Passed: all expected values are equal to the actual values
–Failed: any one expected value varies from the actual value
–Blocked: postponed due to incorrect parent functionality
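A small illustrative test log with the three entry types might look like this (test case names and results are hypothetical):

    test_log = [
        {"test_case": "TC_Login_001",   "expected": "home page shown", "actual": "home page shown", "status": "Passed"},
        {"test_case": "TC_Search_004",  "expected": "10 results",      "actual": "0 results",       "status": "Failed"},
        {"test_case": "TC_Payment_002", "expected": "-",               "actual": "-",               "status": "Blocked"},  # parent login functionality failed
    ]
    for entry in test_log:
        print(entry["test_case"], "->", entry["status"])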

Level-2 (regression testing): during level-1 (comprehensive) testing, test engineers report mismatches to the development team. After receiving the modified build from the development team, test engineers concentrate on regression testing. Test engineers follow the approach below, with respect to the seriousness of the resolved mismatches; a small selection sketch follows the cases.

Case 1: If the severity of the bug resolved by the development team is high, test engineers re-execute all P0, all P1 and carefully selected P2 test cases on the modified build, with respect to the modifications.
Case 2: If the severity of the resolved bug is medium, test engineers re-execute all P0, carefully selected P1 and some P2 test cases, with respect to the modifications.
Case 3: If the severity of the resolved bug is low, test engineers re-execute some P0, P1 and P2 test cases, with respect to the modifications.
Case 4: If the development team released the modified build due to sudden changes in customer requirements, test engineers re-execute all P0, all P1 and carefully selected P2 test cases with respect to the changed requirements.
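The selection rules above can be sketched as a small function; the proportions used for "carefully selected" and "some" cases are assumptions for illustration, not fixed rules.

    def select_regression_cases(severity, p0, p1, p2):
        if severity == "high":
            return p0 + p1 + p2[: len(p2) // 2]                      # all P0, all P1, selected P2
        if severity == "medium":
            return p0 + p1[: len(p1) // 2] + p2[: len(p2) // 4]      # all P0, selected P1, some P2
        return p0[: len(p0) // 2] + p1[: len(p1) // 4] + p2[: len(p2) // 4]  # low severity

    p0 = [f"P0-{i}" for i in range(4)]
    p1 = [f"P1-{i}" for i in range(8)]
    p2 = [f"P2-{i}" for i in range(8)]
    print(len(select_regression_cases("high", p0, p1, p2)), "cases selected for a high-severity fix")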
Error, Defect, Bug: a mistake in coding is called an error.
Coding errors found by the testing team during testing are called defects or issues.
Issues reported by the testing team and accepted by the development team to be solved are called bugs.

5.7 Test Reporting Or Defect Tracking

During level-1 and level-2 test execution, test engineers report mismatches to the development team in the IEEE format.
Format:
Defect id: unique number and name.
Description: summary of the defect.
Build version id: the version number of the build in which the test engineer found this defect.
Feature: the corresponding module name in which the test engineer found this defect.
Test case name: the name of the test condition during whose execution the test engineer found this defect.
Reproducible: Yes means the defect appears every time during test execution; No means the defect appears rarely.
If yes, attach the test procedure.
If no, attach a snapshot and strong reasons.
Found by: the name of the test engineer.
Detected on: date of submission.
Assigned to: the responsible person on the development side who receives this defect.
Status: New – reporting the defect for the first time; Reopen – re-reporting the defect.
Severity: the seriousness of the defect in terms of functionality.
High – not able to continue test execution without resolving the defect.
Medium – able to continue the remaining testing, but the defect is compulsory to solve.
Low – able to continue the remaining testing, but the defect is optional to resolve (may or may not be fixed).
Priority: the importance of this defect in terms of the customer.
Suggested fix (optional): expected possibilities for developers to resolve this defect.
Fixed by: project manager / project lead.
Resolved by: programmer name.
Resolved on: date of resolution.
Resolution type:
Approved by: signature of the project manager.
NOTE: in the above format, development people may change the priority of the defect with respect to customer importance.
Defect age: The time gap between defect reported date and defect resolved date is called defect age
Defect submission:

Defect resolution type: during test execution, test engineers report mismatches to the development team as defects. After receiving a defect report from the testing team, development people conduct a bug-fixing review and send a resolution type report to the corresponding testing team. There are 12 types of resolutions reported to the testing team.
Duplicate: rejected because this defect is equal to a previously reported defect.
Enhancement: rejected because this defect relates to a future requirement of the customer.
Hardware limitation: rejected because this defect relates to limitations of the hardware devices.
Software limitation: rejected because this defect relates to limitations of the software technologies.
Not applicable: rejected due to the improper meaning of the defect.
Functions as designed: rejected because the coding is correct with respect to the design logic.
Need more information: not accepted and not rejected, but developers require extra information to understand the defect.
Not reproducible: not accepted and not rejected, but developers require the correct procedure to reproduce the defect.
No plan to fix it: not accepted and not rejected, but developers require extra time to fix it.
Fixed: accepted and ready to be resolved.
Fixed indirectly (deferred): accepted but postponed to a future version.
User misunderstanding: requires extra negotiation between developers and testers.
TYPES OF BUGS: during test execution, either manually or with automation, test engineers find the types of bugs below.
User interface bugs: (low severity)
Ex1: spelling mistake (high priority)
Ex2: improper right alignment (low priority)
Error handling bugs: (medium severity)
Ex1: does not return an error message (high priority)
Ex2: complex meaning in an error message (low priority)
Input domain bugs: (medium severity)
Ex1: allows invalid inputs (high priority)
Ex2: allows invalid types also (low priority)
Calculation bugs: (high severity)
Ex1: dependent outputs are wrong (application show stopper) (high priority)
Ex2: final output is wrong (module show stopper) (low priority)
Race condition bugs: (high severity)
Ex1: deadlock or hang (application show stopper) (high priority)
Ex2: improper order of functionalities (low priority)
Load condition bugs: (high severity)
Ex1: does not allow multiple users (high priority)
Ex2: does not allow the customer-expected load (low priority)
Hardware bugs: (high severity)
Ex1: not able to establish a connection to a hardware device (high priority)
Ex2: wrong output from a device (low priority)
Version control bugs: (medium severity)
Ex1: mismatches between two consecutive build versions
ID-control bugs: (medium severity)
Ex: wrong logo, logo missing, copyright window missing, wrong version number, software title mistake, team member names missing, etc.
Source bugs: (medium severity)
Ex: mistakes in help documents

TEST CLOSURE:
After completion of all possible test execution and bug resolution, the test lead concentrates on test closure to stop the testing process. In this review the test lead depends on the factors below.
Coverage analysis:
· Business requirements based coverage
· Use cases based coverage
· Data model based coverage
· Input domain based coverage
· User interface based coverage
· Test responsibility matrix based coverage
Bug density:
Ex: module A – 20%
module B – 20%
module C – 40% ← needs final regression
module D – 20%
(total 100%)
Analysis of deferred bugs: whether the deferred bugs are postponable or not.
At the end of this review meeting, the test lead selects the modules with high bug density in the application build for final regression (level-3).

The above approach is also known as level-3 testing, final regression testing, pre-acceptance testing or post-mortem testing.
After completion of this final regression, the testing team concentrates on user acceptance testing with the help of real customers or model customers.
USER ACCEPTANCE TESTING: after completion of test closure, test management concentrates on user acceptance testing to collect feedback from customer-site people. There are two approaches to conduct user acceptance testing: alpha testing and beta testing (as described earlier).
SIGN OFF: after completion of user acceptance testing and the resulting modifications, the test lead prepares the final test summary report. It is a part of the "software release note" (SRN). This final test summary report consists of the documents below as members.
· Test methodology
· Test plan
· Requirements traceability matrix
· Automated test scripts
· Final bugs summary report, with the columns: bug description, found by, feature, severity, status (closed/deferred), comments
→ Final test summary report (FTSR)
Case study: (five months of testing process)
Deliverable | Responsibility | Completion time
Test cases preparation | Test engineers | 15-20 days
Test cases review | Test lead & test engineers | 4-5 days
Requirements traceability matrix | Test lead | 1-2 days
Test automation | Test engineers | 10-15 days
Test execution (level-1 & level-2) | Test engineers | 40-60 days
Defect reporting | Test engineer, test lead | Ongoing
Communication and status reporting | Test lead | Twice weekly
Test closure and final regression (level-3) | Test lead & test engineer | 4-5 days
User acceptance test | Customer-site people including the testing team | 4-5 days
Sign off | Test lead | 2-3 days
Auditing: during the testing and maintenance of software, project and test management people use three types of measurements and metrics:
(1) Quality assessment measurements (QAM)
(2) Test management measurements (TMM)
(3) Process capability measurements (PCM)
Quality assessment measurements (QAM): during the software testing process, the quality analyst or project manager uses these measurements to estimate the quality assurance level of the testing process (monthly once).
· Stability: the number of bugs found versus the test duration.
· Sufficiency:
  - Requirements coverage (modules)
  - Type-trigger analysis (what types of tests were completed)
· Defect severity distribution: checked against the organization's trend limits.
Test management measurements (TMM): during a project's testing process, test-lead-category people use these measurements to estimate the testing process coverage (twice weekly).
· Test status:
  - Completed test case execution
  - In execution
  - Yet to execute
· Delays in delivery:
  - Bug arrival rate
  - Bug resolution rate
  - Bug ageing
· Test efficiency:
  - Cost to find a bug (Ex: 5 bugs per person-day)
Process capability measurements (PCM): during software maintenance at the customer site, the QA and PM use these measurements to improve the testing team's capability (yearly once).
· Defect escapes (bugs missed by the testing team):
  - Type-phase analysis
  - Type-trigger analysis
· Defect resolution rate (or defect removal efficiency):
  DRE = A / (A + B)
  where A = bugs found by the testing team and B = bugs found by the customer during maintenance.
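A small worked example of the DRE formula, assuming the testing team found 90 bugs and the customer found 10 more during maintenance (illustrative numbers only):

    A = 90    # bugs found by the testing team
    B = 10    # bugs found by the customer during maintenance
    dre = A / (A + B)
    print(f"DRE = {dre:.0%}")   # prints: DRE = 90%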



Important:
-------------------

1)Difference between Regression Testing vs Retesting?


Re-Testing: After a defect is detected and fixed, the software should be retested to confirm that the original defect has been successfully removed. This is called Confirmation Testing or Re-Testing
Regression testing:  Testing your software application when it undergoes a code change to ensure that the new code has not affected other parts of the software.
Let’s quickly start with the actual difference between Regression Testing and Retesting.

Regression Testing | Retesting
Regression testing is a type of software testing that intends to ensure that changes such as defect fixes or enhancements to the module or application have not affected the unchanged parts. | Retesting is done to make sure that the test cases which failed in the last execution pass after the defects behind those failures are fixed.
Regression testing is not carried out on specific defect fixes; it is planned as specific-area or full regression testing. | Retesting is carried out based on the defect fixes.
In regression testing, you include test cases which passed earlier, i.e., you check functionality that was working earlier. | In retesting, you include test cases which failed earlier, i.e., you check functionality that failed in the earlier build.
Regression test cases are derived from the functional specification, the user manuals, user tutorials and defect reports for corrected problems. | Test cases for retesting cannot be prepared before testing starts; in retesting you only re-execute the test cases that failed in the prior execution.
Automation is the key to regression testing; manual regression testing tends to get more expensive with each new release, so regression testing is the right time to start automating test cases. | You cannot automate the test cases for retesting.
Defect verification does not come under regression testing. | Defect verification comes under retesting.
Based on the availability of resources, regression testing can be carried out in parallel with retesting. | The priority of retesting is higher than that of regression testing, so it is carried out before regression testing.

In regression testing, test cases are extracted from functional test cases to check that the original features and functionality still work as expected and that no new defect has been introduced. Once the regression test suite is created, you can automate the test cases using an automation tool, but the same is not applicable for retesting.

Conclusion

A defect is logged by the tester while testing the application and fixed by the developer. In retesting we check whether that same defect is fixed or not, using the steps to reproduce mentioned in the defect. In regression testing we check that the defect fixes have not impacted other unchanged parts of the application and have not broken functionality that was working previously.

2)Smoke Testing Vs Sanity Testing

SMOKE TESTING:
  • Smoke testing originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch fire and smoke. In the software industry, smoke testing is a shallow and wide approach whereby all areas of the application are tested, without going too deep into any of them.
  • A smoke test is scripted, either using a written set of tests or an automated test
  • A Smoke test is designed to touch every part of the application in a cursory way. It’s shallow and wide.
  • Smoke testing is conducted to ensure whether the most crucial functions of a program are working, but not bothering with finer details. (Such as build verification).
  • Smoke testing is normal health check up to a build of an application before taking it to testing in depth.
SANITY TESTING:
  • A sanity test is a narrow regression test that focuses on one or a few areas of functionality. Sanity testing is usually narrow and deep.
  • A sanity test is usually unscripted.
  • A Sanity test is used to determine a small section of the application is still working after a minor change.
  • Sanity testing is a cursory testing, it is performed whenever a cursory testing is sufficient to prove the application is functioning according to specifications. This level of testing is a subset of regression testing.
  • Sanity testing is to verify whether requirements are met or not, checking all features breadth-first.

Smoke Testing Vs Sanity Testing - Key Differences

Smoke Testing | Sanity Testing
Smoke testing is performed to ascertain that the critical functionalities of the program are working fine. | Sanity testing is done to check that the new functionality works and the bugs have been fixed.
The objective of this testing is to verify the "stability" of the system in order to proceed with more rigorous testing. | The objective of this testing is to verify the "rationality" of the system in order to proceed with more rigorous testing.
This testing is performed by the developers or testers. | Sanity testing is usually performed by testers.
Smoke testing is usually documented or scripted. | Sanity testing is usually not documented and is unscripted.
Smoke testing is a subset of regression testing. | Sanity testing is a subset of acceptance testing.
Smoke testing exercises the entire system from end to end. | Sanity testing exercises only a particular component of the entire system.
Smoke testing is like a general health check-up. | Sanity testing is like a specialized health check-up.

3) What’s the difference between priority and severity?
Answer:
Defect Severity
Defect severity signifies the degree of impact the defect has on the development or operation of the component or application being tested. It is the extent to which the defect can affect the software. The severity type is defined by the software tester based on the written test cases and functionality.
Defect Severity may range from Low to Critical
  • Critical - this defect is causing system failure. Nothing can proceed further. It may also be called as a show stopper
  • Major - highly severe defect, is causing the system to collapse, however few parts of the system are still usable, and/or there are a few workarounds for using the system in the collapsed state too
  • Medium - is causing some undesirable behavior, however system / feature is still usable to a high degree
  • Low - is more of a cosmetic issue. No serious impedance to system functionality is noted

Defect Priority
Defect priority signifies the level of urgency of fixing the bug. In other words Priority means how fast/ how soon it has to be fixed. Though priority may be initially set by the Software Tester, it is usually finalized by the Project/Product Manager.

Defect Priority may range from Low to Urgent
  • Urgent: Must to be fixed before any other high, medium or low defect should be fixed. Must be fixed in the next build.
  • High: Must be fixed in any of the upcoming builds but should be included in the release.
  • Medium: should take precedence over low priority defects and may be fixed after the release / in the next release.
  • Low: Fixing can be deferred until all other priority defects are fixed. It may or may not be fixed at all.
Differences between Defect Severity and Defect Priority

Severity | Priority
Severity is associated with standards/functionality. | Priority is associated with scheduling.
Severity refers to the seriousness of the bug on the functionality of the product; a higher effect on the functionality leads to the assignment of a higher severity to the bug. | Priority refers to how soon the bug should be fixed.
Generally, the Quality Assurance Engineer decides the severity level. | Priority to fix a bug is decided in consultation with the client/manager.




Examples;-

  1. Let us assume a scenario where “Login” button is labeled as “Logen”:
    The priority and severity for different situations may be expressed as:-
·         For GUI testing: it is high priority and low severity
·         For UI testing: it is high priority and high severity
·         For functional testing: it is low priority and low severity
·         For cosmetic testing: it is low priority and high severity


  1. Low Severity, Low Priority
Suppose an application (web) is made up of 20 pages. On one of the pages out of the 20 which is visited very infrequently, there is a sentence with a grammatical error. Now, even though it’s a mistake on this expensive website, users can understand its meaning without any difficulty. This bug may go unnoticed to the eyes of many and won't affect any functionality or the credibility of the company.


  2. Low Severity, High Priority
·         While developing a site for Pepsi, by mistake a logo sign of coke is embedded. This does not affect functionality in any way but has high priority to be fixed.
·         Any typo mistakes or glaring spelling mistakes on home page.

  3. High Severity, Low Priority
·         In case the application works perfectly for 50,000 sessions but begins to crash after a higher number of sessions, the problem needs to be fixed, but not immediately.
·         A generated report is not 100% complete – for example, the title and title columns are missing although the data itself is properly listed. This could be fixed in the next build, but missing report columns is a high severity defect.

  4. High Severity, High Priority
·         Now assume a Windows-based application, say a word processor. As you open any file to view it, the application crashes. You can only create new files, but as soon as you open them, the word processor crashes. This completely eliminates the usability of the word processor, as you cannot come back and edit your work, and it affects one of the major functionalities of the application. Thus, it is a severe bug and should be fixed immediately.
·         Let’s say, as soon as the user clicks login button on Gmail site, some junk data is displayed on a blank page. Users can access the gmail.com website, but are not able to login successfully and no relevant error message is displayed. This is a severe bug and needs topmost priority.


4)What is Security testing in software testing?

  • It is a type of non-functional testing.
  • Security testing is basically a type of software testing that is done to check whether the application or product is secure or not. It checks whether the application is vulnerable to attacks, and whether anyone can hack the system or log in to the application without authorization.
  • It is a process to determine that an information system protects data and maintains functionality as intended.
  • Security testing is performed to check whether there is any information leakage, for example by testing the application with encryption, a wide range of software, hardware and firewalls, etc.
  • Software security is about making software behave in the presence of a malicious attack.
  • The six basic security concepts that need to be covered by security testing are: confidentiality, integrity, authentication, availability, authorization and non-repudiation.

5)What is Compatibility testing?

  • It is a type of non-functional testing.
  • Compatibility testing is a type of software testing used to ensure the compatibility of the system/application/website built with various other objects, such as other web browsers, hardware platforms, users (in case of a very specific requirement, such as a user who speaks and can read only a particular language), operating systems, etc. This type of testing helps find out how well a system performs in a particular environment, including the hardware, network, operating system and other software.
  • It is basically the testing of the application or the product against its computing environment.
  • It tests whether the application or the software product is compatible with the hardware, operating system, database and other system software or not.
6)What is a Defect Life Cycle or a Bug lifecycle?

The defect life cycle is the cycle a defect goes through during its lifetime. It starts when a defect is found and ends when the defect is closed, after ensuring it is not reproduced. The defect life cycle is related to the bugs found during testing.
The bug has different states in this life cycle. The bug or defect life cycle includes the following steps or statuses:
  1. New: When a defect is logged and posted for the first time, its state is given as "New".
  2. Assigned: After the tester has posted the bug, the tester's lead approves that the bug is genuine and assigns it to the corresponding developer and developer team. Its state is given as "Assigned".
  3. Open:  At  this state the developer has started analyzing and working on the defect fix.
  4. Fixed:  When developer makes necessary code changes and verifies the changes then he/she can make bug status as ‘Fixed’ and the bug is passed to testing team.
  5. Pending retest:  After fixing the defect the developer has given that particular code for retesting to the tester. Here the testing is pending on the testers end. Hence its status is pending retest.
  6. Retest:  At this stage the tester do the retesting of the changed code which developer has given to him to check whether the defect got fixed or not.
  7. Verified:  The tester tests the bug again after it got fixed by the developer. If the bug is not present in the software, he approves that the bug is fixed and changes the status to “verified”.
  8. Reopen:  If the bug still exists even after the bug is fixed by the developer, the tester changes the status to “reopened”. The bug goes through the life cycle once again.
  9. Closed:  Once the bug is fixed, it is tested by the tester. If the tester feels that the bug no longer exists in the software, he changes the status of the bug to “closed”. This state means that the bug is fixed, tested and approved.
  10. Duplicate: If the bug is repeated twice or the two bugs mention the same concept of the bug, then one bug status is changed to “duplicate“.
  11. Rejected: If the developer feels that the bug is not genuine, he rejects the bug. Then the state of the bug is changed to “rejected”.
  12. Deferred: The bug, changed to deferred state means the bug is expected to be fixed in next releases. The reasons for changing the bug to this state have many factors. Some of them are priority of the bug may be low, lack of time for the release or the bug may not have major effect on the software. 
  13. Not a bug:  The state given as “Not a bug” if there is no change in the functionality of the application. For an example: If customer asks for some change in the look and field of the application like change of colour of some text then it is not a bug but just some change in the looks of the  application.
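Since this life cycle is essentially a small state machine, the hedged sketch below (in Python; the status names come from the list above, while the allowed transitions are simplified assumptions rather than the rules of any particular bug-tracking tool) shows one way such a workflow could be enforced:

from enum import Enum

class Status(Enum):
    NEW = "New"
    ASSIGNED = "Assigned"
    OPEN = "Open"
    FIXED = "Fixed"
    PENDING_RETEST = "Pending retest"
    RETEST = "Retest"
    VERIFIED = "Verified"
    REOPENED = "Reopened"
    CLOSED = "Closed"
    # Duplicate, Rejected, Deferred and Not a bug are omitted for brevity.

# Assumed transition rules, loosely following the list above.
ALLOWED = {
    Status.NEW: {Status.ASSIGNED},
    Status.ASSIGNED: {Status.OPEN},
    Status.OPEN: {Status.FIXED},
    Status.FIXED: {Status.PENDING_RETEST},
    Status.PENDING_RETEST: {Status.RETEST},
    Status.RETEST: {Status.VERIFIED, Status.REOPENED},
    Status.VERIFIED: {Status.CLOSED},
    Status.REOPENED: {Status.OPEN},      # the cycle starts again
}

def move(current, new):
    # Reject any status change the workflow does not permit.
    if new not in ALLOWED.get(current, set()):
        raise ValueError(f"cannot move a defect from {current.value} to {new.value}")
    return new

# Example: a defect that is fixed, retested and finally closed.
state = Status.NEW
for nxt in (Status.ASSIGNED, Status.OPEN, Status.FIXED,
            Status.PENDING_RETEST, Status.RETEST, Status.VERIFIED, Status.CLOSED):
    state = move(state, nxt)
print(state)   # Status.CLOSED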

7)What is Non-functional testing (Testing of software product characteristics)?

In non-functional testing the quality characteristics of the component or system are tested. Non-functional testing refers to aspects of the software that may not be related to a specific function or user action, such as scalability or security. E.g., how many people can log in at once? Non-functional testing is performed at all levels, just like functional testing.
Non-functional testing includes:
  • Functionality testing
  • Reliability testing
  • Usability testing
  • Efficiency testing
  • Maintainability testing
  • Portability testing
  • Baseline testing
  • Compliance testing
  • Documentation testing
  • Endurance testing
  • Load testing
  • Performance testing
  • Compatibility testing
  • Security testing
  • Scalability testing
  • Volume testing
  • Stress testing
  • Recovery testing
  • Internationalization testing and Localization testing
    • Functionality testing: Functionality testing is performed to verify that a software application performs and functions correctly according to design specifications. During functionality testing we check the core application functions, text input, menu functions and installation and setup on localized machines, etc.
    • Reliability testing: Reliability Testing is about exercising an application so that failures are discovered and removed before the system is deployed. The purpose of reliability testing is to determine product reliability, and to determine whether the software meets the customer’s reliability requirements.
    • Usability testing: In usability testing the testers test the ease with which the user interfaces can be used. It tests whether the application or the product built is user-friendly or not.
 Usability testing includes the following five components:
      1. Learnability: How easy is it for users to accomplish basic tasks the first time they encounter the design?
      2. Efficiency: How fast can experienced users accomplish tasks?
      3. Memorability: When users return to the design after a period of not using it, does the user remember enough to use it effectively the next time, or does the user have to start over again learning everything?
      4. Errors: How many errors do users make, how severe are these errors and how easily can they recover from the errors?
      5. Satisfaction: How much does the user like using the system?
    • Efficiency testing: Efficiency testing tests the amount of code and testing resources required by a program to perform a particular function. Software test efficiency is the number of test cases executed per unit of time (generally per hour).
    • Maintainability testing: It basically defines that how easy it is to maintain the system. This means that how easy it is to analyze, change and test the application or product.
    • Portability testing: It refers to the process of testing the ease with which a computer software component or application can be moved from one environment to another, e.g. moving an application from Windows 2000 to Windows XP. This is usually measured in terms of the maximum amount of effort permitted. Results are measured in terms of the time required to move the software and complete the documentation updates.
    • Baseline testing: It refers to the validation of documents and specifications on which test cases would be designed. The requirement specification validation is baseline testing. 
    • Compliance testing: It is related with the IT standards followed by the company and it is the testing done to find the deviations from the company prescribed standards.
    • Documentation testing: As per the IEEE, this covers documentation describing plans for, or results of, the testing of a system or component. Types include the test case specification, test incident report, test log, test plan, test procedure and test report. Hence the testing of all the above-mentioned documents is known as documentation testing.
    • Endurance testing: Endurance testing involves testing a system with a significant load extended over a significant period of time, to discover how the system behaves under sustained use. For example, in software testing, a system may behave exactly as expected when tested for 1 hour but when the same system is tested for 3 hours, problems such as memory leaks cause the system to fail or behave randomly.
    • Load testing: A load test is usually conducted to understand the behavior of the application under a specific expected load. Load testing is performed to determine a system’s behavior under both normal and peak conditions. It helps to identify the maximum operating capacity of an application as well as any bottlenecks, and to determine which element is causing degradation. E.g., if the number of users is increased, then how much CPU and memory will be consumed, and what are the network and bandwidth response times?
    • Performance testing: Performance testing is performed to determine how fast some aspect of a system performs under a particular workload. It can serve different purposes: it can demonstrate that the system meets performance criteria, it can compare two systems to find which performs better, or it can measure what part of the system or workload causes the system to perform badly. A minimal response-time sketch is shown after this list.
    • Compatibility testing: Compatibility testing is basically the testing of the application or the product built with the computing environment. It tests whether the application or the software product built is compatible with the hardware, operating system, database or other system software or not.
    • Security testing: Security testing basically checks whether the application or the product is secured or not: can anyone hack the system or log in to the application without any authorization? It is a process to determine that an information system protects data and maintains functionality as intended.
    • Scalability testing: It is the testing of a software application for measuring its capability to scale up in terms of any of its non-functional capability like load supported, the number of transactions, the data volume etc.
    • Volume testing: Volume testing refers to testing a software application or the product with a certain amount of data. E.g., if we want to volume test our application with a specific database size, we need to expand our database to that size and then test the application’s performance on it.
    • Stress testing: It involves testing beyond normal operational capacity, often to a breaking point, in order to observe the results. It is a form of testing that is used to determine the stability of a given system. It puts greater emphasis on robustness, availability, and error handling under a heavy load, rather than on what would be considered correct behavior under normal circumstances. The goal of such tests may be to ensure the software does not crash in conditions of insufficient computational resources (such as memory or disk space).
    • Recovery testing: Recovery testing is done in order to check how fast and how well the application can recover after it has gone through any type of crash or hardware failure, etc. Recovery testing is the forced failure of the software in a variety of ways to verify that recovery is properly performed. For example, when an application is receiving data from a network, unplug the connecting cable. After some time, plug the cable back in and analyze the application’s ability to continue receiving data from the point at which the network connection disappeared. Restart the system while a browser has a definite number of sessions open and check whether the browser is able to recover all of them or not.
    • Internationalization testing and Localization testing: Internationalization is a process of designing a software application so that it can be adapted to various languages and regions without any changes. Whereas Localization is a process of adapting internationalized software for a specific region or language by adding local specific components and translating text.
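As mentioned under performance testing above, here is a minimal, hedged timing sketch (in Python, assuming the requests library and a hypothetical login page; a real load test would normally use a dedicated tool such as JMeter or LoadRunner):

import time
import requests

URL = "https://example.com/login"   # hypothetical page under test
REQUESTS_TO_SEND = 50               # a small, purely illustrative load

timings = []
for _ in range(REQUESTS_TO_SEND):
    start = time.perf_counter()
    requests.get(URL)
    timings.append(time.perf_counter() - start)

average = sum(timings) / len(timings)
worst = max(timings)
print(f"average response time: {average:.3f}s, worst: {worst:.3f}s")
# Compare the measured numbers against the agreed performance criteria,
# e.g. an average response time under 2 seconds at the expected load.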

8)What is Functional testing (Testing of functions) in software?


In functional testing the functions of a component or system are tested. It refers to activities that verify a specific action or function of the code. Functional tests tend to answer questions like “can the user do this?” or “does this particular feature work?”. This is typically described in a requirements specification or in a functional specification.
The techniques used for functional testing are often specification-based. Testing functionality can be done from two perspectives:
  • Requirement-based testing: In this type of testing the requirements are prioritized depending on the risk criteria and accordingly the tests are prioritized. This will ensure that the most important and most critical tests are included in the testing effort.
  • Business-process-based testing: In this type of testing the scenarios involved in the day-to-day business use of the system are described. It uses knowledge of the business processes. For example, a personnel and payroll system may have a business process along the lines of: someone joins the company, the employee is paid on a regular basis, and the employee finally leaves the company.

9)What is Unit testing?

  • Unit testing is a method by which individual units of source code are tested to determine if they are fit for use. A unit is the smallest testable part of an application like functions/procedures, classes, interfaces.
  • Unit tests are typically written and run by software developers to ensure that code meets its design and behaves as intended.
  • The goal of unit testing is to isolate each part of the program and show that the individual parts are correct.
  • A unit test provides a strict, written contract that the piece of code must satisfy. As a result, it affords several benefits: unit tests find problems early in the development cycle. A minimal example is sketched below.
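A hedged sketch using Python’s built-in unittest module; the function add_item_to_cart is a hypothetical unit invented for the example and stands in for the smallest testable part of an application:

import unittest

def add_item_to_cart(cart, item, quantity):
    """Hypothetical unit under test: add an item to a shopping-cart dict."""
    if quantity <= 0:
        raise ValueError("quantity must be positive")
    cart[item] = cart.get(item, 0) + quantity
    return cart

class AddItemToCartTests(unittest.TestCase):
    def test_adds_new_item(self):
        self.assertEqual(add_item_to_cart({}, "pen", 2), {"pen": 2})

    def test_increments_existing_item(self):
        self.assertEqual(add_item_to_cart({"pen": 1}, "pen", 2), {"pen": 3})

    def test_rejects_invalid_quantity(self):
        with self.assertRaises(ValueError):
            add_item_to_cart({}, "pen", 0)

if __name__ == "__main__":
    unittest.main()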

10)what is Integration Testing?

DEFINITION
Integration Testing is a level of the software testing process where individual units are combined and tested as a group.
The purpose of this level of testing is to expose faults in the interaction between integrated units.
Test drivers and test stubs are used to assist in Integration Testing.
Note: The definition of a unit is debatable and it could mean any of the following:
  1. the smallest testable part of a software
  2. a ‘module’ which could consist of many of ‘1’
  3. a ‘component’ which could consist of many of ‘2’
ANALOGY
During the process of manufacturing a ballpoint pen, the cap, the body, the tail and clip, the ink cartridge and the ballpoint are produced separately and unit tested separately. When two or more units are ready, they are assembled and Integration Testing is performed. For example, whether the cap fits into the body or not.
METHOD
Any of the Black Box Testing, White Box Testing, and Gray Box Testing methods can be used. Normally, the method depends on your definition of ‘unit’.
TASKS
  • Integration Test Plan
    • Prepare
    • Review
    • Rework
    • Baseline
  • Integration Test Cases/Scripts
    • Prepare
    • Review
    • Rework
    • Baseline
  • Integration Test
    • Perform
When is Integration Testing performed?
Integration Testing is performed after Unit Testing and before System Testing.
Who performs Integration Testing?
Either Developers themselves or independent Testers perform Integration Testing.
APPROACHES
  • Big Bang is an approach to Integration Testing where all or most of the units are combined together and tested at one go. This approach is taken when the testing team receives the entire software in a bundle. So what is the difference between Big Bang Integration Testing and System Testing? Well, the former tests only the interactions between the units while the latter tests the entire system.
  • Top Down is an approach to Integration Testing where top-level units are tested first and lower-level units are tested step by step after that. This approach is taken when a top-down development approach is followed. Test Stubs are needed to simulate lower-level units which may not be available during the initial phases (see the stub sketch after this list).
  • Bottom Up is an approach to Integration Testing where bottom level units are tested first and upper level units step by step after that. This approach is taken when bottom up development approach is followed. Test Drivers are needed to simulate higher level units which may not be available during the initial phases.
  • Sandwich/Hybrid is an approach to Integration Testing which is a combination of Top Down and Bottom Up approaches.
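As a hedged illustration of how a test stub stands in for a lower-level unit during Top Down integration (the class names OrderService and PaymentGatewayStub are hypothetical, not taken from any particular project):

class PaymentGatewayStub:
    """Simulates the real payment gateway, which is not yet available."""
    def charge(self, amount):
        # Always succeed so the higher-level unit can be exercised.
        return {"status": "approved", "amount": amount}

class OrderService:
    """Higher-level unit under test: depends on a payment gateway."""
    def __init__(self, gateway):
        self.gateway = gateway

    def place_order(self, amount):
        result = self.gateway.charge(amount)
        return "confirmed" if result["status"] == "approved" else "failed"

# Integration check: does OrderService interact correctly with its dependency?
service = OrderService(PaymentGatewayStub())
assert service.place_order(250) == "confirmed"

When the real payment gateway becomes available, the stub is replaced by the actual unit and the same interaction is tested again.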
TIPS
  • Ensure that you have a proper Detail Design document where interactions between each unit are clearly defined. In fact, you will not be able to perform Integration Testing without this information.
  • Ensure that you have a robust Software Configuration Management system in place. Or else, you will have a tough time tracking the right version of each unit, especially if the number of units to be integrated is huge.
  • Make sure that each unit is first unit tested before you start Integration Testing.
  • As far as possible, automate your tests, especially when you use the Top Down or Bottom Up approach, since regression testing is important each time you integrate a unit, and manual regression testing can be inefficient.

11)Why is testing necessary?

Testing is necessary because we all make mistakes. Some of those mistakes are unimportant, but some of them are expensive or dangerous. We need to check everything and anything we produce, because things can always go wrong: humans make mistakes all the time.
Since we assume that our work may have mistakes, we all need to check our own work. However, some mistakes come from bad assumptions and blind spots, so we might make the same mistakes when we check our own work as we made when we did it, and so we may not notice the flaws in what we have done.
 Ideally, we should get someone else to check our work because another person is more likely to spot the flaws.
12)How do you know when to stop testing?
This can be difficult to determine. Many modern software applications are so complex and run in such an interdependent environment, that complete testing can never be done. Common factors in deciding when to stop are...
* Deadlines, e.g. release deadlines, testing deadlines;
* Test cases completed with certain percentage passed;
* Test budget has been depleted;
* Coverage of code, functionality, or requirements reaches a specified point;
* Bug rate falls below a certain level; or
* Beta or alpha testing period ends.
                                               (OR)
Testing should be stopped when it meets the completion criteria. Now how to find the completion criteria? Completion criteria can be derived from test plan and test strategy document. Also, re-check your test coverage.
Completion criteria should be based on Risks. Testing should be stopped when - 
  • Test cases completed with certain percentage passed and test coverage is achieved.
  • There are no known critical bugs
  • Coverage of code, functionality, or requirements reaches a specified point;
  • Bug rate falls below a certain level, now testers are not getting any priority 1, 2, or 3 bugs.
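As a hedged illustration, these criteria can be combined into a single pass/fail check; the helper name ready_to_stop and the threshold values below are assumptions, not part of any standard:

def ready_to_stop(passed, executed, planned, open_critical_bugs,
                  required_pass_rate=0.95):
    coverage = executed / planned            # share of planned test cases actually run
    pass_rate = passed / executed if executed else 0.0
    return (coverage == 1.0
            and pass_rate >= required_pass_rate
            and open_critical_bugs == 0)

# Example: 190 of 200 planned cases passed, all executed, no critical bugs open.
print(ready_to_stop(passed=190, executed=200, planned=200, open_critical_bugs=0))  # True
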
As testing is a never-ending process, we can never assume that 100% testing has been done; we can only minimize the risk of shipping the product to the client with X amount of testing done. The risk can be measured by risk analysis, but for a small-duration / low-budget / low-resources project, the risk can be deduced simply by:
  • Measuring Test Coverage.
  • Number of test cycles.
  • Number of high priority bugs.
