Sunday 20 September 2015

Different Types of Test Case Design Techniques


Test case design methods commonly used are:


  • Equivalence Partitioning (EP)
  • Boundary value analysis (BVA)
  • Negative Testing

1. Equivalence Partitioning: Equivalence partitioning is a software testing technique with two goals:

  •  To reduce the number of test cases to a necessary minimum.
  •  To select the right test cases to cover all possible scenarios.

The Equivalence Partitions are usually derived from the specification of the component. An input has certain ranges which are valid and other ranges which are invalid.
This may be best explained by the example of a function which takes a parameter "month". The valid range for the month is 1 to 12, representing January to December. This valid range is called a partition. In this example there are two further partitions of invalid ranges. The first invalid partition would be <= 0 and the second invalid partition would be >= 13.
... -2 -1 0            1................................12            13 14 15.....
------------------------|------------------------------|-------------------------
  Invalid partition 1           valid partition            Invalid partition 2

The testing theory related to equivalence partitioning says that only one test case of each partition is needed to evaluate the behavior of the program for the related partition. In other words, it is sufficient to select one test case out of each partition to check the behavior of the program. Using more or even all test cases of a partition will not find new faults in the program. The values within one partition are considered to be "equivalent". Thus the number of test cases can be reduced considerably.
An additional effect of applying this technique is that you also find the so-called "dirty" test cases. An inexperienced tester may be tempted to use as test cases only the input data 1 to 12 for the month and forget to select some out of the invalid partitions. This would lead to a huge number of unnecessary test cases on the one hand, and a lack of test cases for the invalid ("dirty") ranges on the other.
Equivalence partitioning is not a stand-alone method for determining test cases. It has to be supplemented by Boundary Value Analysis (BVA). Having determined the partitions of possible inputs, the method of boundary value analysis has to be applied to select the most effective test cases out of these partitions.
Equivalence Partitioning: A Black Box test design technique in which test cases are designed to execute representatives from equivalence partitions. In principle, test cases are designed to cover each partition at least once.
Equivalence partitioning is a software testing technique to minimize the number of permutations and combinations of input data. In equivalence partitioning, data is selected in such a way that it gives as many different outputs as possible with the minimal set of data.
As an example of EP, consider a very simple function for awarding grades to students. The program follows these guidelines to award grades:
Marks 00 - 39 ------------ Grade D
Marks 40 - 59 ------------ Grade C
Marks 60 - 70 ------------ Grade B
Marks 71 - 100 ------------ Grade A

Based on the equivalence partitioning technique, the partitions for this program could be as follows:
Marks between 0 and 39 - Valid Input
Marks between 40 and 59 - Valid Input
Marks between 60 and 70 - Valid Input
Marks between 71 and 100 - Valid Input
Marks less than 0 - Invalid Input
Marks more than 100 - Invalid Input
Non-numeric input - Invalid Input
From the example above, it is clear that the infinite set of possible test cases (any value between 0 and 100, infinite values above 100 or below 0, and non-numeric input) can be divided into seven distinct classes. Now even if you take only one data value from each of these partitions, your coverage will be good.
The most important part of equivalence partitioning is to identify the equivalence classes. Identifying equivalence classes needs close examination of the possible input values. Also, you cannot rely on any one technique to ensure your testing is complete. You still need to apply other techniques to find defects.
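As a rough illustration of the grading example above, the sketch below takes one representative value from each numeric partition (four valid, two invalid). The gradeFor() method, its exception-based rejection of out-of-range marks and the chosen representative values are assumptions made for this sketch, not part of the original program; the non-numeric partition is omitted because it applies at the input layer rather than to an int parameter.

// Equivalence partitioning sketch for the grading example (Java).
// gradeFor() is a hypothetical implementation under test; any real program
// with the same partitions could be substituted.
public class GradeEquivalencePartitioningTest {

    static char gradeFor(int marks) {
        if (marks < 0 || marks > 100) {
            throw new IllegalArgumentException("marks out of range: " + marks);
        }
        if (marks <= 39) return 'D';
        if (marks <= 59) return 'C';
        if (marks <= 70) return 'B';
        return 'A';
    }

    public static void main(String[] args) {
        // One representative value per valid partition is enough.
        check(gradeFor(20) == 'D', "partition 0-39 -> D");
        check(gradeFor(50) == 'C', "partition 40-59 -> C");
        check(gradeFor(65) == 'B', "partition 60-70 -> B");
        check(gradeFor(85) == 'A', "partition 71-100 -> A");

        // One representative value per invalid numeric partition.
        check(rejects(-5), "marks < 0 are rejected");
        check(rejects(150), "marks > 100 are rejected");

        System.out.println("All equivalence partitioning checks passed.");
    }

    static boolean rejects(int marks) {
        try {
            gradeFor(marks);
            return false;
        } catch (IllegalArgumentException expected) {
            return true;
        }
    }

    static void check(boolean condition, String description) {
        if (!condition) throw new AssertionError("FAILED: " + description);
    }
}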

2. Boundary Value Analysis (BVA): Boundary Value Analysis is a software test case design technique used to derive test cases that exercise the boundaries of input ranges, where defects such as off-by-one errors typically occur.
Testing experience has shown that the boundaries of input ranges to a software component are likely to contain defects.
Example: A function that takes an integer between 1 and 12, representing a month between January to December, might contain a check for this range:

void exampleFunction(int month) {
    if (month > 0 && month < 13) {
        ....
    }
}

A common programming error is to check an incorrect range, e.g. starting the range at 0 by writing:

void exampleFunction(int month) {
    if (month >= 0 && month < 13) {
        ....
    }
}
For more complex range checks in a program, such a problem may not be as easily spotted as in the simple example above.
Applying Boundary Value Analysis (BVA):
To set up boundary value analysis test cases, the tester first determines which boundaries are at the interface of a software component. This is done by applying the equivalence partitioning technique. For the above example, the month parameter would have the following partitions:
... -2 -1 0            1................................12 13 14 15.....
------------------------|------------------------------|-------------------------
Invalid partition 1           valid partition             Invalid partition 2
To apply boundary value analysis, a test case at each side of the boundary between two partitions is selected. In the above example this would be 0 and 1 for the lower boundary as well as 12 and 13 for the upper boundary. Each of these pairs consists of a "clean" and a "negative" test case. A "clean" test case should lead to a valid result. A "negative" test case should lead to specified error handling such as the limiting of values, the usage of a substitute value, or a warning.
Boundary value analysis can result in three test cases for each boundary; for example if n is a boundary, test cases could include n-1, n, and n+1.
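A minimal sketch of these boundary test cases, assuming a hypothetical isValidMonth() helper that is equivalent to the correct range check shown earlier; the boundary pairs 0/1 and 12/13 come straight from the example above.

// Boundary value analysis sketch for the month example (Java).
public class MonthBoundaryValueTest {

    // Hypothetical check under test, equivalent to: month > 0 && month < 13.
    static boolean isValidMonth(int month) {
        return month > 0 && month < 13;
    }

    public static void main(String[] args) {
        // Lower boundary: 0 is the "negative" case, 1 is the "clean" case.
        check(!isValidMonth(0), "0 is rejected");
        check(isValidMonth(1), "1 is accepted");

        // Upper boundary: 12 is the "clean" case, 13 is the "negative" case.
        check(isValidMonth(12), "12 is accepted");
        check(!isValidMonth(13), "13 is rejected");

        System.out.println("All boundary value checks passed.");
    }

    static void check(boolean condition, String description) {
        if (!condition) throw new AssertionError("FAILED: " + description);
    }
}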

3. Negative Testing:
“Non-recommended test data generates the negative test cases.”
Example:

  •  Entering a future date in the ‘Employee birth date’ field.
  •  Entering alphabets in a numeric field such as salary.
  •  Entering numbers in a name field.
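A minimal sketch of the second example above, assuming a hypothetical parseSalary() validation helper; the field name, the numeric-only rule and the error message are illustrative only.

// Negative testing sketch: alphabetic data in a numeric "salary" field (Java).
public class SalaryNegativeTest {

    // Hypothetical validation for a numeric salary field.
    static int parseSalary(String input) {
        if (input == null || !input.matches("\\d+")) {
            throw new IllegalArgumentException("salary must be numeric: " + input);
        }
        return Integer.parseInt(input);
    }

    public static void main(String[] args) {
        // Negative test: alphabets in a numeric field must be rejected.
        try {
            parseSalary("abc");
            throw new AssertionError("FAILED: alphabets were accepted in the salary field");
        } catch (IllegalArgumentException expected) {
            System.out.println("Negative test passed: non-numeric salary was rejected.");
        }
    }
}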

What is Test Scenario?

A test scenario is a combination of test cases that defines what is to be tested in an application or feature; put simply, test scenarios are a series of test cases. Suppose you are testing a login form. The test scenario would simply be a single sentence, i.e. "Ensure the working of the login form". Test scenarios mainly focus on the functionality and not the input data. This document includes test conditions, whereas test cases are the step-by-step procedures to be followed to meet those conditions.
Test cases should be written after considering all the requirements. By doing so, the testing process will become simpler yet effective.

What is Test Case?


A test case is a description of what is to be tested, what data is to be used, and what actions are to be performed to check the actual result against the expected result.
A test case is simply a test with formal steps and instructions.
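As a rough illustration, the sketch below records a test case for the login scenario mentioned earlier as structured data with an id, steps, test data and expected result; the field names, step wording and identifier are assumptions for this sketch, not a prescribed template. (Requires Java 16+ for records.)

// Test case as structured data (Java).
import java.util.List;

public class TestCaseExample {

    record TestCase(String id, String description, List<String> steps,
                    String testData, String expectedResult) {}

    public static void main(String[] args) {
        TestCase loginCase = new TestCase(
                "TC_LOGIN_01",
                "Verify login with valid credentials",
                List.of("Open the login page",
                        "Enter a valid username and password",
                        "Click the Login button"),
                "username=validUser, password=validPass",
                "User is redirected to the home page");

        System.out.println(loginCase);
    }
}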
Types of Test Case:
1. Functional Test Case

  • Smoke Test Case
  • Component Test Case
  • Integration Test Case
  • System Test Case
  • Usability Test Case

2. Negative Test Case (non-recommended test data)
3. Performance Test Case

Friday 18 September 2015

Types of Performance Testing



Load Testing:

Load testing is a part of a more general process known as performance testing.
OR
It tests how the system works under load. This type of testing is very important for client-server systems, including Web applications (e-Communities, e-Auctions, etc.), ERP, CRM and other business systems with numerous concurrent users.
Examples of load testing include:

  •  Downloading a series of large files from the Internet.
  •  Running multiple applications on a computer or server simultaneously.
  •  Assigning many jobs to a printer in a queue.
  •  Subjecting a server to a large amount of e-mail traffic.
  • Writing and reading data to and from a hard disk continuously.
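A minimal sketch of simulating many concurrent users against a single endpoint, assuming Java 11+ and a hypothetical URL; a real load test would normally use a dedicated tool such as JMeter or LoadRunner and would measure far more than response time.

// Simple concurrent-user load sketch (Java 11+).
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class SimpleLoadTest {

    public static void main(String[] args) throws InterruptedException {
        int concurrentUsers = 50;                     // simulated users
        String url = "http://example.com/login";      // hypothetical endpoint
        HttpClient client = HttpClient.newHttpClient();
        ExecutorService pool = Executors.newFixedThreadPool(concurrentUsers);

        for (int i = 0; i < concurrentUsers; i++) {
            pool.submit(() -> {
                long start = System.nanoTime();
                try {
                    HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
                    HttpResponse<String> response =
                            client.send(request, HttpResponse.BodyHandlers.ofString());
                    long millis = (System.nanoTime() - start) / 1_000_000;
                    System.out.println("status=" + response.statusCode() + " time=" + millis + "ms");
                } catch (Exception e) {
                    System.out.println("request failed: " + e.getMessage());
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
    }
}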


Stress Testing:

Stress testing examines system behavior in unusual ("stress", or beyond the bounds of normal circumstances) situations. For example, system behavior under heavy load, a system crash, or a lack of memory or hard disk space can be considered a stress situation. Fool-proof testing is another case, which is useful for GUI systems, especially if they are aimed at a wide range of users.


Volume Testing :

A volume test checks whether there are any problems when running the system under test with realistic amounts of data, or even the maximum or more.
Volume testing helps to find problems with maximum amounts of data. System performance or usability often degrades when large amounts of data must be searched, sorted, etc.
Test Procedure:

  • The system is run with maximum amounts of data.
  • Tables, databases, files, disks etc. are loaded with a maximum of data.
  • Important functions where data volume may lead to trouble are exercised.
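A minimal sketch of such a volume check: a large, generated data set is loaded and a search over it is timed. The record shape, the one-million record count and the one-second threshold are illustrative assumptions.

// Simple volume check sketch (Java).
import java.util.ArrayList;
import java.util.List;

public class SimpleVolumeCheck {

    public static void main(String[] args) {
        int recordCount = 1_000_000; // "maximum" amount of data for this sketch
        List<String> records = new ArrayList<>(recordCount);
        for (int i = 0; i < recordCount; i++) {
            records.add("customer-" + i);
        }

        long start = System.nanoTime();
        long matches = records.stream().filter(r -> r.endsWith("999")).count();
        long millis = (System.nanoTime() - start) / 1_000_000;

        System.out.println("matches=" + matches + " time=" + millis + "ms");
        if (millis > 1000) {
            throw new AssertionError("Search over " + recordCount + " records took too long");
        }
    }
}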

Difference between Regression & Retesting


  • Regression testing is a type of software testing that intends to ensure that changes such as defect fixes or enhancements to a module or application have not affected the unchanged parts. Retesting is done to make sure that the test cases which failed in the last execution pass after the defects behind those failures are fixed.
  • Regression testing is not carried out on specific defect fixes; it is planned as specific-area or full regression testing. Retesting is carried out based on the defect fixes.
  • In regression testing, you include the test cases which passed earlier, i.e. you check the functionality which was working earlier. In retesting, you include the test cases which failed earlier, i.e. you check the functionality which failed in the earlier build.
  • Regression test cases are derived from the functional specification, the user manuals, user tutorials, and defect reports relating to corrected problems. Test cases for retesting cannot be prepared before testing starts; in retesting you only re-execute the test cases that failed in the prior execution.
  • Automation is the key for regression testing: manual regression testing tends to get more expensive with each new release, so regression testing is the right place to start automating test cases. You cannot automate the test cases for retesting.
  • Defect verification does not come under regression testing; defect verification comes under retesting.
  • Based on the availability of resources, regression testing can be carried out in parallel with retesting. Retesting has higher priority than regression testing, so it is carried out before regression testing.


Difference between Smoke & Sanity Software Testing


  • Smoke testing is a wide approach where all areas of the software application are tested without going too deep. However, sanity testing is a narrow regression test with a focus on one or a small set of areas of functionality of the software application.
  • The test cases for smoke testing of the software can be either manual or automated. However, a sanity test is generally without test scripts or test cases.
  • Smoke testing is done to ensure whether the main functions of the software application are working or not. During smoke testing of the software, we do not go into finer details. However, sanity testing is a cursory software testing type. It is done whenever a quick round of software testing can prove that the software application is functioning according to business / functional requirements.
  • Smoke testing of the software application is done to check whether the build can be accepted for thorough software testing. Sanity testing of the software is done to ensure whether the requirements are met or not.

Types of Functional Testing

1. Build Verification Testing (BVT)
2. Smoke Testing
3. Sanity Testing
4. Component Testing
5. Integration Testing
6. System Testing
7. System Integration Testing
8. User Acceptance Testing (UAT)
9. Alpha Testing
10. Beta Testing
11. Re-Testing
12. Regression Testing

Build Verification Testing (BVT) / Smoke Testing


A Build Verification Test is a set of tests run on every new build to verify that the build is testable before it is released to the test team for further testing. These test cases are core functionality test cases that ensure the application is stable and can be tested thoroughly. Typically the BVT process is automated. If BVT fails, the build is assigned back to the developer for a fix.
BVT is also called smoke testing or build acceptance testing (BAT).
Sanity Testing
A sanity test is a narrow regression test that focuses on one or a few areas of functionality. Sanity testing is usually narrow and deep. A sanity test is used to determine whether a small section of the application is still working after a minor change.
Once a new build is obtained with minor revisions, instead of doing a thorough regression, a sanity test is performed to ascertain that the build has indeed rectified the issues and that no further issues have been introduced by the fixes.
Component / Module Testing:
The testing of individual software components/modules. E.g., if an application has 3 modules ADD/EDIT/DELETE, then testing each module individually is called component testing.
Integration Testing:
Testing of combined modules of an application to determine whether they are functionally working correctly. The ‘parts’ can be code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.
Types of Integration Testing:
1. Big Bang: In this approach, all or most of the developed modules are coupled together to form a complete software system or a major part of the system, which is then used for integration testing. The Big Bang method is very effective for saving time in the integration testing process. However, if the test cases and their results are not recorded properly, the entire integration process will be more complicated and may prevent the testing team from achieving the goal of integration testing.
2. Bottom up Testing: This is an approach to integration testing where the lowest level components are tested first, then used to facilitate the testing of higher level components. The process is repeated until the component at the top of the hierarchy is tested.
All the bottom or low-level modules, procedures or functions are integrated and then tested. After the integration testing of lower level integrated modules, the next level of modules will be formed and can be used for integration testing. This approach is helpful only when all or most of the modules of the same development level are ready. This method also helps to determine the levels of software developed and makes it easier to report testing progress in the form of a percentage.
3. Top down Testing: This is an approach to integration testing where the top-level integrated modules are tested first and the branches of the module are tested step by step until the end of the related module.
4. Sandwich Testing: This is an approach to combine top down testing with bottom up testing.
The main advantage of the Bottom-Up approach is that bugs are more easily found. With Top-Down, it is easier to find a missing branch link.
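A minimal sketch of the top-down idea, where a stub stands in for a lower-level module that is not ready yet; the module names and the stub's always-succeed behavior are illustrative assumptions.

// Top-down integration sketch with a stub (Java).
public class TopDownIntegrationExample {

    // Lower-level dependency that is not ready yet.
    interface PaymentService {
        boolean charge(String account, double amount);
    }

    // Stub that stands in for the unfinished module during integration testing.
    static class PaymentServiceStub implements PaymentService {
        @Override
        public boolean charge(String account, double amount) {
            return true; // always succeeds, so the higher-level module can be tested
        }
    }

    // Higher-level module under test.
    static class OrderModule {
        private final PaymentService payments;
        OrderModule(PaymentService payments) { this.payments = payments; }
        String placeOrder(String account, double amount) {
            return payments.charge(account, amount) ? "ORDER PLACED" : "PAYMENT FAILED";
        }
    }

    public static void main(String[] args) {
        OrderModule orders = new OrderModule(new PaymentServiceStub());
        String result = orders.placeOrder("ACC-1", 99.0);
        if (!"ORDER PLACED".equals(result)) {
            throw new AssertionError("FAILED: expected order to be placed, got " + result);
        }
        System.out.println("Top-down integration check passed: " + result);
    }
}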
System Testing:
System Testing tends to affirm the end-to-end quality of the entire system. It is a process of performing a variety of tests on a system to explore functionality or to identify problems. System Testing is a level of the software testing process where a complete, integrated system/software is tested. The purpose of this test is to evaluate the system’s compliance with the specified requirements. Non-functional quality attributes, such as reliability, security and compatibility are also checked in system testing.
Example - During the process of manufacturing a ballpoint pen, the cap, the body, the tail, the ink cartridge and the ballpoint are produced separately and unit tested separately. When two or more units are ready, they are assembled and Integration Testing is performed. When the complete pen is integrated, System Testing is performed.
System Integration Testing:
System Integration testing verifies that a system is integrated to any external or third party systems defined in the system requirements.
User Acceptance Testing (UAT):
Final testing based on specifications of the end-user or customer, or based on use by end-users/customers over some limited period of time. UAT is a process to obtain confirmation that a system meets mutually agreed-upon requirements.
Alpha Testing:
Alpha Testing is done by users/customers or an independent Test Team at the developers' site.
OR
Alpha Testing: Testing a new product in pre-release internally before testing it with outside users.
Beta Testing
Testing conducted by end users at the client's site.
OR
In this type of testing, the software is distributed as a beta version to the users, and the users test the application at their own sites. As the users explore the software, any exceptions/defects that occur are reported to the developers.
Re-Testing:
In re-testing we test only the particular functionality which failed during testing, to check whether it works properly after the change is made. We do not re-test all functionality.
Regression Testing:
The intent of regression testing is to provide a general assurance that no additional errors were introduced in the process of fixing other defects.
OR
Regression testing means testing the entire application to ensure that the fixing of a bug has not affected anything else in the application.

Different Types of BlackBox Testing

BlackBox Testing Types:

1. Functional Testing
2. Performance Testing
3. Compatibility Testing
4. Usability Testing
5. Negative Testing
6. Ad-Hoc Testing
7. Exhaustive Testing

Functional Testing:
Checks whether the application/module is functioning according to the stated requirements.

Performance Testing:
Performance testing is the process of determining the speed or effectiveness of a computer, network, software program or device. This process can involve quantitative tests done in a lab, such as measuring the response time or the number of MIPS (millions of instructions per second) at which a system functions, as well as qualitative attributes such as reliability, scalability and interoperability.
Compatibility Testing:
Compatibility testing is a type of testing used to ensure compatibility of the system/application/website with various other objects such as other web browsers, hardware platforms, users (in case of a very specific requirement, such as a user who speaks and can read only a particular language), operating systems, etc. This type of testing helps find out how well a system performs in a particular environment that includes hardware, network, operating system and other software.
Compatibility testing can be automated using automation tools or can be performed manually and is a part of non-functional software testing.

Usability Testing:

  • Checks whether the layout, text and the messages displayed are user friendly and meet the stated requirement.
  • The cursor is properly positioned and cursor navigation works correctly.
  • On-line lists are displayed in the proper sort sequence.
  • Project screen standards are adhered to (i.e., colors, common field lengths, protected fields, error highlighting, cursor position, etc.)
  • “Usability Testing is needed to check if the user interface is easy to use and understand.”


Negative Testing:
Any testing carried out by passing non-recommended values, with the aim of breaking the application, is called negative testing.
For example, if a developer designed an edit box to accept only numeric values up to a length of 10 digits, entering alphabets to check whether the text box wrongly accepts them is negative testing.

Ad Hoc Testing:
Testing without a formal test plan or outside of a test plan. With some projects this type of testing is carried out as an adjunct to formal testing. If carried out by a skilled tester, it can often find problems that are not caught in regular testing. Sometimes, if testing occurs very late in the development cycle, this will be the only kind of testing that can be performed. Sometimes ad hoc testing is referred to as exploratory testing.

Exhaustive Testing:
Exhaustive testing is testing where a single test case is executed with multiple sets of test data. Exhaustive testing means testing the functionality with all possible valid and invalid data.
It verifies the behavior of every aspect of an application, including all permutations: the program is executed with all possible combinations of inputs or values for program variables. Automation testing is generally used when a single test case is executed with multiple sets of test data.
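For a function with a small enough input domain, every combination really can be executed. The sketch below assumes a hypothetical freeShipping() rule with three boolean inputs, so all 2^3 = 8 combinations are enumerated and checked against an explicit truth table.

// Exhaustive input-combination sketch (Java).
public class ExhaustiveCombinationTest {

    // Hypothetical rule under test: shipping is free for members, or for
    // express orders over the threshold.
    static boolean freeShipping(boolean isMember, boolean isExpress, boolean overThreshold) {
        return isMember || (isExpress && overThreshold);
    }

    public static void main(String[] args) {
        boolean[] values = {false, true};
        // Expected results for every (member, express, overThreshold) combination,
        // enumerated in the same order as the nested loops below.
        boolean[] expected = {false, false, false, true, true, true, true, true};
        int index = 0;
        for (boolean member : values) {
            for (boolean express : values) {
                for (boolean over : values) {
                    boolean actual = freeShipping(member, express, over);
                    if (actual != expected[index]) {
                        throw new AssertionError("FAILED for member=" + member
                                + " express=" + express + " overThreshold=" + over);
                    }
                    index++;
                }
            }
        }
        System.out.println("Executed all " + index + " input combinations.");
    }
}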

BlackBox Vs WhiteBox Testing

Black box Testing:

An approach to testing where the application/software is considered as a black box. Black Box Testing, also known as Behavioral Testing, is testing where the tester only knows the inputs and what the expected outcomes should be, not how the program arrives at those outputs. The tester never examines the programming code and does not need any further knowledge of the program other than its specifications.

  • Specific knowledge of the application's code/internal structure and programming knowledge in general is not required.
  • Test cases are built around specifications and requirements, i.e., what the application is supposed to do.
  • Testing, either functional or non-functional, without reference to the internal structure of the component or system.

White Box Testing:
  • White Box Testing (also called Clear Box Testing, Glass Box Testing, Transparent Box Testing or Structural Testing) is a method of testing software that tests the internal structures or workings of an application.
  • An internal perspective of the system, as well as programming skills, are required and used to design test cases.
  • It is usually done at the unit level.
  • White Box Testing is a verification technique software engineers can use to examine whether their code works as expected.

Difference between Priority and Severity of a bug?

Priority: 
Priority defines the order in which a developer should resolve defects. This priority status is set by the tester while logging the defect in the tracking tool.


Priority can be of following types:

Priority 1 – Critical (P1): This has to be fixed immediately, within 24 hours. This generally occurs when an entire functionality is blocked and no testing can proceed as a result. In certain other cases, if there are significant memory leaks, the defect is generally classified as Priority 1, meaning the program/feature is unusable in its current state.
Priority 2 – High (P2): Once the critical defects have been fixed, a defect with this priority is the next candidate which has to be fixed for any test activity to match the "exit" criteria. Normally, when a feature is not usable as it is supposed to be due to a program defect, or when new code has to be written, or sometimes even when an environmental problem has to be handled through the code, a defect may qualify for Priority 2.
Priority 3 – Medium (P3): A defect with this priority must be in contention to be fixed, as it could also deal with functionality issues which are not as per expectation. Sometimes even cosmetic errors, such as expecting the right error message during a failure, could qualify as a Priority 3 defect.
Priority 4 – Low (P4): A defect with low priority indicates that there is definitely an issue, but it does not have to be fixed to match the "exit" criteria. However, it must be fixed before the GA release is done. Typically, some typing errors or even cosmetic errors, as discussed previously, could be categorized here. Sometimes defects with low priority are also opened to suggest some enhancements to the existing design or to request a small feature to enhance the user experience.


Severity:
It is the extent to which the defect can affect the software. In other words it defines the impact that a given defect has on the system.
Severity can be of following types:


  • Critical / Show Stopper (S1): A defect that completely hampers or blocks testing of the product/ feature is a critical defect. An example would be in case of UI testing where after going through a wizard, the UI just hangs at one pane or doesn’t go further to trigger the function. Or in some other cases, when the feature developed itself is missing from the build.
  • Major or Severe (S2): A major defect occurs when the functionality is functioning grossly away from the expectations or not doing what it should be doing. An example could be: Say that a VLAN needs to be deployed on the switch and you are using a UI template that triggers this function. When this template to configure VLAN fails on the switch, it gets classified as a severe functionality drawback.
  • Moderate/ Normal (S3): A moderate defect occurs when the product or application doesn't meet certain criteria or still exhibits some unnatural behavior, but the functionality as a whole is not impacted. For example, in the VLAN template deployment above, a moderate or normal defect would occur when the template is deployed successfully on the switch but no indication is sent to the user.
  • Low or Minor (S4): A minor bug occurs when there is almost no impact to the functionality, but is still a valid defect that should be corrected. Examples of this could include spelling mistakes in error messages printed to user or defects to enhance the look and feel of a feature.
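Before the worked examples below, a minimal sketch of how priority and severity might be recorded as two independent fields on a defect; the enum names, the defect record and the example values are illustrative assumptions, not any particular tracking tool's schema. (Requires Java 16+ for records.)

// Defect record with independent Priority and Severity fields (Java).
public class DefectRecordExample {

    enum Priority { P1_CRITICAL, P2_HIGH, P3_MEDIUM, P4_LOW }
    enum Severity { S1_CRITICAL, S2_MAJOR, S3_MODERATE, S4_MINOR }

    record Defect(String id, String summary, Severity severity, Priority priority) {}

    public static void main(String[] args) {
        // A cosmetic issue on the home page: low severity, but high priority.
        Defect logoTypo = new Defect(
                "BUG-101",
                "Company logo misspelled on the home page",
                Severity.S4_MINOR,
                Priority.P2_HIGH);

        System.out.println(logoTypo);
    }
}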
Defect Priority and Severity
Examples:

  1. Let us assume a scenario where “Login” button is labeled as “Logen”:
    The priority and severity for different situations may be expressed as:-
  • For GUI testing: it is high priority and low severity
  • For UI testing: it is high priority and high severity
  • For functional testing: it is low priority and low severity
  • For cosmetic testing: it is low priority and high severity


  2. Low Severity, Low Priority
Suppose a web application is made up of 20 pages. On one of the 20 pages, which is visited very infrequently, there is a sentence with a grammatical error. Even though it is a mistake on this expensive website, users can understand its meaning without any difficulty. This bug may go unnoticed by many and won't affect any functionality or the credibility of the company.


  3. Low Severity, High Priority
  • While developing a site for Pepsi, a Coke logo is embedded by mistake. This does not affect functionality in any way but has high priority to be fixed.
  • Any typos or glaring spelling mistakes on the home page.

  4. High Severity, Low Priority
  • The application works perfectly for 50,000 sessions but begins to crash with a higher number of sessions. This problem needs to be fixed, but not immediately.
  • Report generation that does not complete 100%: the title and title columns are missing, but the data itself is listed properly. This could be fixed in the next build, but missing report columns are a high-severity defect.

  5. High Severity, High Priority
  • Now assume a Windows-based application, say a word processor. As you open any file to view it, the application crashes. You can still create new files, but as soon as you open them, the word processor crashes. This completely eliminates the usability of the word processor, as you can't come back and edit your work in it, and it also affects one of the major functionalities of the application. Thus it's a severe bug and should be fixed immediately.
  • Say that as soon as the user clicks the login button on the Gmail site, some junk data is displayed on a blank page. Users can access the gmail.com website but are not able to log in successfully, and no relevant error message is displayed. This is a severe bug and needs topmost priority.

Why Testing is Required?

Testing is not limited to the detection of “bugs” in the software, but also increases confidence in its proper functioning and assists with the evaluation of functional and nonfunctional properties. Testing is an important and critical part of the software development process, on which the quality and reliability of the delivered product strictly depend.

What is Software Testing?

Software Testing is an activity to ensure the correctness, completeness & quality of the software system with respect to requirements.
OR
Software Testing is a process of executing a program or application with the intent of finding errors.