Sunday 20 September 2015

Different Types of Test Case Design Techniques


Test case design methods commonly used are:


  • Equivalence Partitioning (EP)
  • Boundary Value Analysis (BVA)
  • Negative Testing

1. Equivalence Partitioning: Equivalence partitioning is a software testing technique with two goals:

  •  To reduce the number of test cases to a necessary minimum.
  •  To select the right test cases to cover all possible scenarios.

The Equivalence Partitions are usually derived from the specification of the component. An input has certain ranges which are valid and other ranges which are invalid.
This may be best explained by the example of a function which takes a parameter "month". The valid range for the month is 1 to 12, representing January to December. This valid range is called a partition. In this example there are two further partitions of invalid ranges. The first invalid partition would be <= 0 and the second invalid partition would be >= 13.
... -2 -1 0            1................................12 13 14 15.....
------------------------|------------------------------|-------------------------
   Invalid partition 1           valid partition             Invalid partition 2

The testing theory related to equivalence partitioning says that only one test case from each partition is needed to evaluate the behavior of the program for the related partition. In other words, it is sufficient to select one test case out of each partition to check the behavior of the program; using more, or even all, test cases of a partition will not find new faults in the program. The values within one partition are considered to be "equivalent". Thus the number of test cases can be reduced considerably.
An additional effect of applying this technique is that you also find the so-called "dirty" test cases. An inexperienced tester may be tempted to use as test cases the input data 1 to 12 for the month and forget to select some from the invalid partitions. This would lead, on the one hand, to a huge number of unnecessary test cases and, on the other hand, to a lack of test cases for the "dirty" ranges.
Equivalence partitioning is not a stand-alone method for determining test cases; it has to be supplemented by Boundary Value Analysis (BVA). Having determined the partitions of possible inputs, boundary value analysis is applied to select the most effective test cases from these partitions.
Equivalence Partitioning: A Black Box test design technique in which test cases are designed to execute representatives from equivalence partitions. In principle, test cases are designed to cover each partition at least once.
Equivalence partitioning is a software testing technique to minimize the number of permutations and combinations of input data. In equivalence partitioning, data is selected in such a way that it gives as many different outputs as possible with a minimal set of data.
As an example of EP, consider a very simple function for awarding grades to students. The program follows these guidelines to award grades:
Marks 00 - 39 ------------ Grade D
Marks 40 - 59 ------------ Grade C
Marks 60 - 70 ------------ Grade B
Marks 71 - 100 ------------ Grade A

Based on the equivalence partitioning technique, the partitions for this program could be as follows:
Marks between 0 and 39 - Valid Input
Marks between 40 and 59 - Valid Input
Marks between 60 and 70 - Valid Input
Marks between 71 and 100 - Valid Input
Marks less than 0 - Invalid Input
Marks more than 100 - Invalid Input
Non-numeric input - Invalid Input
From the example above, it is clear that the infinite set of possible test cases (any value between 0 and 100, and infinitely many values above 100, below 0, or non-numeric) can be divided into seven distinct classes. Now, even if you take only one data value from each of these partitions, your coverage will be good.
The most important part of equivalence partitioning is identifying the equivalence classes, which requires close examination of the possible input values. Also, you cannot rely on any one technique to ensure your testing is complete; you still need to apply other techniques to find defects.
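To make this concrete, here is a minimal sketch in Java of an equivalence-partitioning test for the grading example above. The gradeFor method is a hypothetical implementation of the grading guideline (it is not from any real system); the test then picks one representative value from each partition.

public class EquivalencePartitioningExample {

    // Hypothetical implementation of the grading guideline above.
    static char gradeFor(int marks) {
        if (marks < 0 || marks > 100) {
            throw new IllegalArgumentException("Marks out of range: " + marks);
        }
        if (marks <= 39) return 'D';
        if (marks <= 59) return 'C';
        if (marks <= 70) return 'B';
        return 'A';                                // 71 - 100
    }

    public static void main(String[] args) {
        // One representative value from each valid partition.
        System.out.println(gradeFor(20) == 'D');   // partition 0-39
        System.out.println(gradeFor(50) == 'C');   // partition 40-59
        System.out.println(gradeFor(65) == 'B');   // partition 60-70
        System.out.println(gradeFor(85) == 'A');   // partition 71-100

        // One representative value from each invalid numeric partition.
        try { gradeFor(-5);  } catch (IllegalArgumentException e) { System.out.println("rejected < 0"); }
        try { gradeFor(120); } catch (IllegalArgumentException e) { System.out.println("rejected > 100"); }

        // The seventh partition (non-numeric input) cannot be expressed with
        // an int parameter; it would apply to a String-based input field.
    }
}

Six values (plus one non-numeric probe at the input level) cover all seven partitions; adding more values from the same partitions would not be expected to find new faults.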

2. Boundary Value Analysis (BVA): Boundary Value Analysis is a software test case design technique used to derive test cases around the boundaries of input ranges, where off-by-one errors are most likely to occur.
Testing experience has shown that the boundaries of input ranges to a software component are likely to contain defects.
Example: A function that takes an integer between 1 and 12, representing a month from January to December, might contain a check for this range:

void exampleFunction(int month) {
    if (month > 0 && month < 13) {
        ....
A common programming error is to check an incorrect range, e.g. starting the range at 0 by writing:
void exampleFunction(int month) {
    if (month >= 0 && month < 13) {
        ....
For more complex range checks in a program, this kind of error is not as easily spotted as in the simple example above.
Applying Boundary Value Analysis (BVA):
To set up boundary value analysis test cases, the tester first determines which boundaries are at the interface of a software component. This is done by applying the equivalence partitioning technique. For the above example, the month parameter would have the following partitions:
... -2 -1 0            1................................12 13 14 15.....
------------------------|------------------------------|-------------------------
Invalid partition 1           valid partition             Invalid partition 2
To apply boundary value analysis, a test case on each side of the boundary between two partitions is selected. In the above example, this would be 0 and 1 for the lower boundary, as well as 12 and 13 for the upper boundary. Each of these pairs consists of a "clean" and a "negative" test case. A "clean" test case should lead to a valid result. A "negative" test case should lead to specified error handling, such as limiting the value, using a substitute value, or issuing a warning.
Boundary value analysis can result in three test cases for each boundary; for example if n is a boundary, test cases could include n-1, n, and n+1.
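As a minimal sketch, continuing the hypothetical month example (isValidMonth simply wraps the range check shown earlier), boundary value analysis selects the values on each side of the two boundaries:

public class BoundaryValueExample {

    // Hypothetical helper wrapping the range check from the example above.
    static boolean isValidMonth(int month) {
        return month > 0 && month < 13;
    }

    public static void main(String[] args) {
        // Lower boundary: 0 is the negative case, 1 is the clean case.
        System.out.println(!isValidMonth(0));    // expected: true
        System.out.println(isValidMonth(1));     // expected: true

        // Upper boundary: 12 is the clean case, 13 is the negative case.
        System.out.println(isValidMonth(12));    // expected: true
        System.out.println(!isValidMonth(13));   // expected: true

        // Three-value variant around the lower boundary n = 1: n-1, n, n+1.
        for (int m : new int[] {0, 1, 2}) {
            System.out.println("month " + m + " valid? " + isValidMonth(m));
        }
    }
}

Note that the faulty version using month >= 0 would be caught immediately by the test value 0, which is exactly the kind of off-by-one defect BVA targets.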

3. Negative Testing:
“Non-recommended test data generates the negative test cases.”
Example:

  •  Entering a future date in the ‘Employee birth date’ field.
  •  Entering alphabets in a numeric field such as salary (see the sketch after this list).
  •  Entering numbers in a name field.
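As a minimal sketch of the second bullet, a negative test feeds non-recommended data (alphabetic text in a numeric salary field) and passes only if the application rejects it; parseSalary is a hypothetical helper, not a real API:

public class NegativeTestExample {

    // Hypothetical helper that parses a salary entered as free text.
    static int parseSalary(String input) {
        return Integer.parseInt(input);   // throws NumberFormatException on bad input
    }

    public static void main(String[] args) {
        // Negative test: alphabets entered in a numeric field.
        try {
            parseSalary("abc");
            System.out.println("FAIL: invalid salary was accepted");
        } catch (NumberFormatException e) {
            System.out.println("PASS: invalid salary was rejected");
        }
    }
}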

What is Test Scenario?

A test scenario is a combination of test cases which defines what is to be tested on an application or feature; put simply, test scenarios are a series of test cases. Suppose you are testing a login form. Then the test scenario would simply be a single sentence, i.e. "Ensure the working of the login form". Test scenarios mainly focus on the functionality and not the input data. This document includes test conditions, whereas test cases are the step-by-step procedures to be followed to meet those conditions.
Test cases should be written after considering all the requirements. By doing so, the testing process will become simpler yet effective.

What is Test Case?


A test case is a description of what is to be tested, what data is to be used, and what actions are to be performed to check the actual result against the expected result.
A test case is simply a test with formal steps and instructions.
Types of Test Case:
1. Functional Test Case

  • Smoke Test Case
  • Component Test Case
  • Integration Test Case
  • System Test Case
  • Usability Test Case

2. Negative Test Case (non-recommended test data)
3. Performance Test Case

Friday 18 September 2015

Types of Performance Testing



Load Testing:

Load testing is a part of a more general process known as performance testing.
OR
It tests how the system works under load. This type of testing is very important for client-server systems, including Web applications (e-Communities, e-Auctions, etc.), ERP, CRM, and other business systems with numerous concurrent users. A minimal sketch of simulating concurrent users follows the examples below.
Examples of load testing include:

  •  Downloading a series of large files from the Internet.
  •  Running multiple applications on a computer or server simultaneously.
  •  Assigning many jobs to a printer in a queue.
  •  Subjecting a server to a large amount of e-mail traffic.
  • Writing and reading data to and from a hard disk continuously.
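The following is a minimal, self-contained sketch of the load-testing idea: it simulates a number of concurrent users against a stand-in operation. The handleRequest method and the load numbers are assumptions for illustration; a real load test would drive the actual system under test.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class LoadTestSketch {

    // Hypothetical operation under load, e.g. one request to the server.
    static void handleRequest() throws InterruptedException {
        Thread.sleep(10);   // stand-in for real work
    }

    public static void main(String[] args) throws InterruptedException {
        int concurrentUsers = 50;     // assumed load level
        int requestsPerUser = 20;
        AtomicInteger completed = new AtomicInteger();

        ExecutorService pool = Executors.newFixedThreadPool(concurrentUsers);
        long start = System.nanoTime();
        for (int u = 0; u < concurrentUsers; u++) {
            pool.submit(() -> {
                for (int r = 0; r < requestsPerUser; r++) {
                    try {
                        handleRequest();
                        completed.incrementAndGet();
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        return;
                    }
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println(completed.get() + " requests completed in " + elapsedMs + " ms");
    }
}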


Stress Testing:

Stress testing examines system behavior in unusual ("stress", or beyond the bounds of normal circumstances) situations. E.g., system behavior under heavy load, a system crash, or a lack of memory or hard disk space can be considered stress situations. Fool-proof testing is another case; it is useful for GUI systems, especially those aimed at a wide range of users.


Volume Testing:

A volume test checks whether there are any problems when running the system under test with realistic amounts of data, or even the maximum amount or more.
Volume testing helps to find problems with maximum amounts of data: system performance or usability often degrades when large amounts of data must be searched, sorted, etc.
Test Procedure:

  • The system is run with maximum amounts of data.
  • Tables, databases, files, disks, etc. are loaded with a maximum of data.
  • Important functions where data volume may lead to trouble are exercised.

Difference between Regression & Retesting


  • Regression testing is a type of software testing that intends to ensure that changes such as defect fixes or enhancements to the module or application have not affected the unchanged parts. Retesting is done to make sure that the test cases which failed in the last execution pass after the defects behind those failures are fixed.
  • Regression testing is not carried out on specific defect fixes; it is planned as specific-area or full regression testing. Retesting is carried out based on the defect fixes.
  • In regression testing, you include the test cases which passed earlier, i.e. you check the functionality which was working earlier. In retesting, you include the test cases which failed earlier, i.e. you check the functionality which failed in the earlier build.
  • Regression test cases are derived from the functional specification, the user manuals, user tutorials, and defect reports in relation to corrected problems. Test cases for retesting cannot be prepared before testing starts; in retesting, only the test cases that failed in the prior execution are re-executed.
  • Automation is the key for regression testing: manual regression testing tends to get more expensive with each new release, so regression testing is the right time to start automating test cases. You cannot automate the test cases for retesting.
  • Defect verification does not come under regression testing. Defect verification comes under retesting.
  • Based on the availability of resources, regression testing can be carried out in parallel with retesting. Retesting has a higher priority than regression testing, so it is carried out before regression testing.


Difference between Smoke & Sanity Software Testing


  • Smoke testing is a wide approach where all areas of the software application are tested without going too deep. Sanity testing, however, is a narrow regression test with a focus on one or a small set of areas of functionality of the software application.
  • The test cases for smoke testing of the software can be either manual or automated. A sanity test, however, is generally carried out without test scripts or test cases.
  • Smoke testing is done to ensure that the main functions of the software application are working. During smoke testing of the software, we do not go into finer details. Sanity testing, however, is a cursory software testing type; it is done whenever a quick round of testing can prove that the software application is functioning according to business/functional requirements.
  • Smoke testing of the software application is done to check whether the build can be accepted for thorough software testing. Sanity testing of the software is done to ensure that the requirements are met.

Types of Functional Testing

1. Build Verification Testing (BVT)
2. Smoke Testing
3. Sanity Testing
4. Component Testing
5. Integration Testing
6. System Testing
7. System Integration Testing
8. User Acceptance Testing (UAT)
9. Alpha Testing
10. Beta Testing
11. Re-Testing
12. Regression Testing

Build Verification Testing (BVT) / Smoke Testing


A build verification test is a set of tests run on every new build to verify that the build is testable before it is released to the test team for further testing. These test cases cover core functionality and ensure that the application is stable and can be tested thoroughly. Typically, the BVT process is automated; a minimal sketch follows below. If the BVT fails, the build is assigned back to a developer for a fix.
BVT is also called smoke testing or build acceptance testing (BAT).
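As a minimal sketch of an automated BVT, the suite below runs a few core-functionality checks and accepts or rejects the build accordingly. The three check methods are hypothetical placeholders; a real BVT would exercise the actual application.

public class BuildVerificationTest {

    // Hypothetical core-functionality checks; placeholders for real probes.
    static boolean applicationStarts() { return true; }
    static boolean loginWorks()        { return true; }
    static boolean mainScreenLoads()   { return true; }

    public static void main(String[] args) {
        boolean buildAccepted = applicationStarts() && loginWorks() && mainScreenLoads();
        if (buildAccepted) {
            System.out.println("BVT passed: build released to the test team");
        } else {
            System.out.println("BVT failed: build assigned back to development");
        }
    }
}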
Sanity Testing
A sanity test is a narrow regression test that focuses on one or a few areas of functionality. Sanity testing is usually narrow and deep. A sanity test is used to determine that a small section of the application is still working after a minor change.
Once a new build is obtained with minor revisions, instead of doing a thorough regression, a sanity test is performed to ascertain that the build has indeed rectified the issues and that no further issues have been introduced by the fixes.
Component / Module Testing:
The testing of individual software components/modules. E.g., if an application has three modules, ADD/EDIT/DELETE, then testing each module individually is called component testing.
Integration Testing:
Testing of combined modules of an application to determine whether they are functionally working correctly. The ‘parts’ can be code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.
Types of Integration Testing:
1. Big Bang: In this approach, all or most of the developed modules are coupled together to form a complete software system, or a major part of the system, which is then used for integration testing. The Big Bang method is very effective for saving time in the integration testing process. However, if the test cases and their results are not recorded properly, the entire integration process will be more complicated and may prevent the testing team from achieving the goal of integration testing.
2. Bottom up Testing: This is an approach to integrated testing where the lowest level components are tested first, then used to facilitate the testing of higher level components. The process is repeated until the component at the top of the hierarchy is tested.
All the bottom or low-level modules, procedures, or functions are integrated and then tested. After the integration testing of the lower-level integrated modules, the next level of modules is formed and can be used for integration testing. This approach is helpful only when all or most of the modules at the same development level are ready. This method also helps to determine the levels of software developed and makes it easier to report testing progress as a percentage.
3. Top down Testing: This is an approach to integration testing where the top-level modules are tested first, and the branches of each module are tested step by step until the end of the related module; lower-level modules that are not yet ready are replaced by stubs (a stub-based sketch follows below).
4. Sandwich Testing: This is an approach to combine top down testing with bottom up testing.
The main advantage of the Bottom-Up approach is that bugs are more easily found. With Top-Down, it is easier to find a missing branch link.
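As a minimal sketch of the top-down idea, the higher-level module below is integration-tested before its lower-level dependency is ready by substituting a stub. OrderService, PaymentGateway, and the stub are hypothetical names invented for this example.

// Hypothetical lower-level module interface.
interface PaymentGateway {
    boolean charge(int amountInCents);
}

// Stub standing in for the not-yet-ready lower-level module.
class PaymentGatewayStub implements PaymentGateway {
    public boolean charge(int amountInCents) {
        return true;   // always succeeds, so the higher-level logic can be tested
    }
}

// Higher-level module under test.
class OrderService {
    private final PaymentGateway gateway;
    OrderService(PaymentGateway gateway) { this.gateway = gateway; }

    boolean placeOrder(int amountInCents) {
        return amountInCents > 0 && gateway.charge(amountInCents);
    }
}

public class TopDownIntegrationSketch {
    public static void main(String[] args) {
        OrderService service = new OrderService(new PaymentGatewayStub());
        System.out.println(service.placeOrder(500));   // expected: true
        System.out.println(service.placeOrder(0));     // expected: false
    }
}

In bottom-up testing the roles are reversed: the real PaymentGateway would be tested first, driven by a small test driver instead of the not-yet-ready OrderService.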
System Testing:
System Testing tends to affirm the end-to-end quality of the entire system. It is a process of performing a variety of tests on a system to explore functionality or to identify problems. System Testing is a level of the software testing process where a complete, integrated system/software is tested. The purpose of this test is to evaluate the system’s compliance with the specified requirements. Non-functional quality attributes, such as reliability, security and compatibility are also checked in system testing.
Example - During the process of manufacturing a ballpoint pen, the cap, the body, the tail, the ink cartridge, and the ballpoint are produced separately and unit tested separately. When two or more units are ready, they are assembled and Integration Testing is performed. When the complete pen is integrated, System Testing is performed.
System Integration Testing:
System Integration testing verifies that a system is integrated to any external or third party systems defined in the system requirements.
User Acceptance Testing (UAT):
Final testing based on specifications of the end-user or customer, or based on use by end-users/customers over some limited period of time. UAT is a process to obtain confirmation that a system meets mutually agreed-upon requirements.
Alpha Testing:
Alpha Testing is done by users/customers or an independent Test Team at the developers' site.
OR
Alpha Testing: Testing a new product in pre-release internally before testing it with outside users.
Beta Testing:
Testing conducted by the end user at the client's site.
OR
In this type of testing, the software is distributed as a beta version to users, and the users test the application at their own sites. As the users explore the software, if any exception or defect occurs, it is reported to the developers.
Re-Testing:
In re-testing, we test only the particular functionality (which failed during testing) to check whether it works properly after the change is made. We do not re-test all functionality.
Regression Testing:
The intent of regression testing is to provide a general assurance that no additional errors were introduced in the process of fixing other defects.
OR
Regression testing means testing the entire application to ensure that the fixing of a bug has not affected anything else in the application.