Monday, June 23, 2014

Sample Test Cases for Testing Web and Desktop Applications

A few notes to remember:
1) Execute these scenarios with different user roles e.g. admin user, guest user etc.
2) For web applications these scenarios should be tested on multiple browsers like IE, FF, Chrome, and Safari with versions approved by client.
3) Test with different screen resolutions like 1024 x 768, 1280 x 1024, etc.
4) The application should be tested on a variety of displays like LCD, CRT, Notebooks, Tablets, and Mobile phones.
5) Test the application on different platforms like Windows, Mac, and Linux operating systems.

Comprehensive Testing Checklist for Testing Web and Desktop Applications:

Assumptions: Assuming that your application supports the following functionality:
- Forms with various fields
- Child windows
- Application interacts with database
- Various search filter criteria and display results
- Image upload
- Send email functionality
- Data export functionality

General Test Scenarios

1. All mandatory fields should be validated and indicated by asterisk (*) symbol
2. Validation error messages should be displayed properly at correct position
3. All error messages should be displayed in same CSS style (e.g. using red color)
4. General confirmation messages should be displayed using CSS style other than error messages style (e.g. using green color)
5. Tooltip text should be meaningful
6. Dropdown fields should have first entry as blank or text like ‘Select’
7. Delete functionality for any record on page should ask for confirmation
8. Select/deselect all records options should be provided if page supports record add/delete/update functionality
9. Amount values should be displayed with correct currency symbols
10. Default page sorting should be provided
11. Reset button functionality should set default values for all fields
12. All numeric values should be formatted properly
13. Input fields should be checked for max field value. Input values greater than specified max limit should not be accepted or stored in database
14. Check all input fields for special characters
15. Field labels should be standard e.g. field accepting user’s first name should be labeled properly as ‘First Name’
16. Check page sorting functionality after add/edit/delete operations on any record
17. Check for timeout functionality. Timeout values should be configurable. Check application behavior after operation timeout
18. Check cookies used in an application
19. Check if downloadable files are pointing to correct file paths
20. All resource keys should be configurable in config files or database instead of hard coding
21. Standard conventions should be followed throughout for naming resource keys
22. Validate markup for all web pages (validate HTML and CSS for syntax errors) to make sure it is compliant with the standards
23. Application crash or unavailable pages should be redirected to error page
24. Check text on all pages for spelling and grammatical errors
25. Check numeric input fields with character input values. Proper validation message should appear
26. Check for negative numbers if allowed for numeric fields
27. Check amount fields with decimal number values
28. Check functionality of buttons available on all pages
29. User should not be able to submit page twice by pressing submit button in quick succession.
30. Divide by zero errors should be handled for any calculations
31. Input data with leading or trailing blank characters should be handled correctly
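
Several of these checks can be automated. Below is a minimal pytest-style Selenium sketch for check 25 above (a numeric field given character input); the URL, element IDs, and error-message locator are hypothetical placeholders, not taken from any real application.

```python
# Hedged sketch: automating check 25 (numeric field given character input).
# The URL and element IDs below are hypothetical placeholders.
from selenium import webdriver
from selenium.webdriver.common.by import By


def test_numeric_field_rejects_characters():
    driver = webdriver.Chrome()
    try:
        driver.get("https://example.com/order/new")        # hypothetical page under test
        quantity = driver.find_element(By.ID, "quantity")  # hypothetical numeric field
        quantity.send_keys("abc")                           # invalid, non-numeric input
        driver.find_element(By.ID, "submit").click()

        error = driver.find_element(By.CSS_SELECTOR, ".field-error")  # assumed error locator
        assert "numeric" in error.text.lower(), "Expected a numeric-validation message"
    finally:
        driver.quit()
```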

GUI and Usability Test Scenarios

1. All fields on page (e.g. text box, radio options, dropdown lists) should be aligned properly
2. Numeric values should be right justified unless specified otherwise
3. Enough space should be provided between field labels, columns, rows, error messages etc.
4. Scroll bar should be enabled only when necessary
5. Font size, style and color for headline, description text, labels, infield data, and grid info should be standard as specified in SRS
6. Description text box should be multi-line
7. Disabled fields should be grayed out and user should not be able to set focus on these fields
8. Upon clicking any input text field, the mouse pointer should change to a text cursor
9. User should not be able to type in drop down select lists
10. Information filled by users should remain intact when there is error message on page submit. User should be able to submit the form again by correcting the errors
11. Check if proper field labels are used in error messages
12. Dropdown field values should be displayed in defined sort order
13. Tab and Shift+Tab order should work properly
14. Default radio options should be pre-selected on page load
15. Field specific and page level help messages should be available
16. Check if correct fields are highlighted in case of errors
17. Check if dropdown list options are readable and not truncated due to field size limit
18. All buttons on page should be accessible by keyboard shortcuts and user should be able to perform all operations using keyboard
19. Check all pages for broken images
20. Check all pages for broken links
21. All pages should have title
22. Confirmation messages should be displayed before performing any update or delete operation
23. Hour glass should be displayed when application is busy
24. Page text should be left justified
25. User should be able to select only one radio option and any combination for check boxes.
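
Checks 19 and 20 above (broken images and broken links) are also easy to script. A minimal sketch using the requests and BeautifulSoup libraries is shown below; the page URL is a placeholder, and a real run would crawl every page of the application.

```python
# Hedged sketch for checks 19-20: scan one page for broken images and links.
# Assumes the `requests` and `beautifulsoup4` packages are available.
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

PAGE_URL = "https://example.com/products"  # hypothetical page under test


def broken_resources(page_url):
    soup = BeautifulSoup(requests.get(page_url, timeout=10).text, "html.parser")
    targets = [img.get("src") for img in soup.find_all("img")] + \
              [a.get("href") for a in soup.find_all("a")]
    broken = []
    for target in filter(None, targets):
        url = urljoin(page_url, target)
        try:
            status = requests.head(url, allow_redirects=True, timeout=10).status_code
        except requests.RequestException:
            status = None
        if status is None or status >= 400:
            broken.append((url, status))
    return broken


if __name__ == "__main__":
    for url, status in broken_resources(PAGE_URL):
        print(f"BROKEN ({status}): {url}")
```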

Test Scenarios for Filter Criteria

1. User should be able to filter results using all parameters on the page
2. Refine search functionality should load search page with all user selected search parameters
3. When at least one filter criterion is required to perform the search operation, make sure a proper error message is displayed when the user submits the page without selecting any filter criteria.
4. When filter criteria selection is not compulsory, the user should be able to submit the page and the default search criteria should be used to query results
5. Proper validation messages should be displayed for invalid values for filter criteria

Test Scenarios for Result Grid

1. Page loading symbol should be displayed when it’s taking more than default time to load the result page
2. Check if all search parameters are used to fetch data shown on result grid
3. Total number of results should be displayed on result grid
4. Search criteria used for searching should be displayed on result grid
5. Result grid values should be sorted by default column.
6. Sorted columns should be displayed with sorting icon
7. Result grids should include all specified columns with correct values
8. Ascending and descending sorting functionality should work for columns supported with data sorting
9. Result grids should be displayed with proper column and row spacing
10. Pagination should be enabled when there are more results than the default result count per page
11. Check for Next, Previous, First and Last page pagination functionality
12. Duplicate records should not be displayed in result grid
13. Check if all columns are visible and horizontal scroll bar is enabled if necessary
14. Check data for dynamic columns (columns whose values are calculated dynamically based on the other column values)
15. For result grids showing reports check ‘Totals’ row and verify total for every column
16. For result grids showing reports check ‘Totals’ row data when pagination is enabled and user navigates to next page
17. Check if proper symbols are used for displaying column values e.g. % symbol should be displayed for percentage calculation
18. Check result grid data if date range is enabled
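
Checks 8 and 12 above (column sorting and duplicate records) lend themselves to a small data-level assertion once the grid contents have been captured. The sketch below assumes a hypothetical helper has already scraped the grid into a list of dictionaries; the column names are illustrative.

```python
# Hedged sketch for checks 8 and 12: verify sort order and absence of duplicates
# in the captured result grid. The 'id' and 'amount' keys are illustrative.

def assert_sorted_and_unique(rows, column, descending=False):
    values = [row[column] for row in rows]
    assert values == sorted(values, reverse=descending), f"Grid is not sorted by '{column}'"

    keys = [row["id"] for row in rows]  # assumes every record carries a unique 'id'
    assert len(keys) == len(set(keys)), "Duplicate records found in result grid"


# Example usage with stubbed grid data:
rows = [{"id": 1, "amount": 10.0}, {"id": 2, "amount": 25.5}, {"id": 3, "amount": 40.0}]
assert_sorted_and_unique(rows, column="amount")
```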

Test Scenarios for a Window

1. Check if default window size is correct
2. Check if child window size is correct
3. Check if there is any field on page with default focus (in general, the focus should be set on first input field of the screen)
4. Check if child windows are getting closed on closing parent/opener window
5. If child window is opened, user should not be able to use or update any field on background or parent window
6. Check window minimize, maximize and close functionality
7. Check if window is re-sizable
8. Check scroll bar functionality for parent and child windows
9. Check cancel button functionality for child window

Database Testing Test Scenarios

1. Check if correct data is getting saved in database upon successful page submit
2. Check values for columns which are not accepting null values
3. Check for data integrity. Data should be stored in single or multiple tables based on design
4. Index names should be given as per the standards e.g. IND_<Tablename>_<ColumnName>
5. Tables should have primary key column
6. Table columns should have description information available (except for audit columns like created date, created by etc.)
7. For every database add/update operation log should be added
8. Required table indexes should be created
9. Check if data is committed to database only when the operation is successfully completed
10. Data should be rolled back in case of failed transactions
11. Database name should be given as per the application type i.e. test, UAT, sandbox, live (though this is not a standard it is helpful for database maintenance)
12. Database logical names should be given according to database name (again this is not standard but helpful for DB maintenance)
13. Stored procedures should not be named with prefix “sp_”
14. Check if values for table audit columns (like createddate, createdby, updatedate, updatedby, isdeleted, deleteddate, deletedby etc.) are populated properly
15. Check if input data is not truncated while saving. Field length shown to user on page and in database schema should be same
16. Check numeric fields with minimum, maximum, and float values
17. Check numeric fields with negative values (for both acceptance and non-acceptance)
18. Check if radio button and dropdown list options are saved correctly in database
19. Check if database fields are designed with correct data type and data length
20. Check if all table constraints like Primary key, Foreign key etc. are implemented correctly
21. Test stored procedures and triggers with sample input data
22. Input field leading and trailing spaces should be truncated before committing data to database
23. Null values should not be allowed for Primary key column
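
Checks 9 and 10 above (commit only on success, rollback on failure) can be illustrated with a small, self-contained script. The sketch below uses an in-memory SQLite table purely as a stand-in for the application's real database and schema.

```python
# Hedged sketch for checks 9-10: data is committed only when the operation
# succeeds and rolled back when the transaction fails.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL NOT NULL)")

# Failed transaction: the NULL amount violates NOT NULL, so nothing should be kept.
try:
    with conn:  # commits on success, rolls back on exception
        conn.execute("INSERT INTO orders (amount) VALUES (?)", (99.50,))
        conn.execute("INSERT INTO orders (amount) VALUES (?)", (None,))  # fails
except sqlite3.IntegrityError:
    pass

assert conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0] == 0, \
    "Partial data was committed for a failed transaction"

# Successful transaction: the row should be persisted.
with conn:
    conn.execute("INSERT INTO orders (amount) VALUES (?)", (42.00,))
assert conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0] == 1
```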

Test Scenarios for Image Upload Functionality

(Also applicable for other file upload functionality)
1. Check for uploaded image path
2. Check image upload and change functionality
3. Check image upload functionality with image files of different extensions (e.g. JPEG, PNG, BMP etc.)
4. Check image upload functionality with images having space or any other allowed special character in file name
5. Check duplicate name image upload
6. Check image upload with image size greater than the max allowed size. Proper error message should be displayed.
7. Check image upload functionality with file types other than images (e.g. txt, doc, pdf, exe etc.). Proper error message should be displayed
8. Check if images of specified height and width (if defined) are accepted otherwise rejected
9. Image upload progress bar should appear for large size images
10. Check if cancel button functionality is working in between upload process
11. Check if file selection dialog shows only supported files listed
12. Check multiple images upload functionality
13. Check image quality after upload. Image quality should not be changed after upload
14. Check if user is able to use/view the uploaded images
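
Checks 3, 6 and 7 above (file type and size validation) can be pre-checked on the test side before an upload is even attempted. The sketch below is a minimal illustration; the allowed extensions and the 2 MB limit are assumed values, not requirements from this checklist.

```python
# Hedged sketch for checks 3, 6 and 7: validate extension and size before upload.
import os

ALLOWED_EXTENSIONS = {".jpeg", ".jpg", ".png", ".bmp"}  # assumed allowed image types
MAX_SIZE_BYTES = 2 * 1024 * 1024                         # assumed 2 MB limit


def validate_upload(path):
    """Return a list of validation errors; an empty list means the file is acceptable."""
    errors = []
    ext = os.path.splitext(path)[1].lower()
    if ext not in ALLOWED_EXTENSIONS:
        errors.append(f"Unsupported file type: {ext or 'no extension'}")
    elif os.path.getsize(path) > MAX_SIZE_BYTES:
        errors.append("File exceeds the maximum allowed size")
    return errors


# Example: a .exe file or an oversized image should produce a proper error message.
# print(validate_upload("malware.exe"))
```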

Test Scenarios for Sending Emails

(Test cases for composing or validating emails are not included)
(Make sure to use dummy email addresses before executing email related tests)
1. Email template should use standard CSS for all emails
2. Email addresses should be validated before sending emails
3. Special characters in email body template should be handled properly
4. Language specific characters (e.g. Russian, Chinese or German language characters) should be handled properly in email body template
5. Email subject should not be blank
6. Placeholder fields used in email template should be replaced with actual values e.g. {Firstname} {Lastname} should be replaced with the individual's first and last name properly for all recipients
7. If reports with dynamic values are included in email body, report data should be calculated correctly
8. Email sender name should not be blank
9. Emails should be checked in different email clients like Outlook, Gmail, Hotmail, Yahoo! mail etc.
10. Check send email functionality using TO, CC and BCC fields
11. Check plain text emails
12. Check HTML format emails
13. Check email header and footer for company logo, privacy policy and other links
14. Check emails with attachments
15. Check send email functionality to single, multiple or distribution list recipients
16. Check if reply to email address is correct
17. Check sending high volume of emails
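
Check 6 above (placeholder replacement) is a good candidate for a data-driven assertion on the rendered message body. The template text and recipient data below are illustrative only; the point is simply that no {Placeholder} token should survive rendering.

```python
# Hedged sketch for check 6: placeholders such as {Firstname} {Lastname} must be
# replaced with actual values for every recipient.
import re

TEMPLATE = "Dear {Firstname} {Lastname},\n\nYour order has shipped."
recipients = [
    {"Firstname": "Jane", "Lastname": "Doe", "email": "jane.doe@example.com"},
    {"Firstname": "Raj", "Lastname": "Patel", "email": "raj.patel@example.com"},
]


def assert_no_unreplaced_placeholders(rendered_body, recipient):
    leftover = re.findall(r"\{\w+\}", rendered_body)
    assert not leftover, f"Unreplaced placeholders {leftover} for {recipient['email']}"


for person in recipients:
    body = TEMPLATE.format(**person)  # however the application actually renders it
    assert_no_unreplaced_placeholders(body, person)
```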

Test Scenarios for Excel Export Functionality

1. File should get exported in proper file extension
2. File name for the exported Excel file should be as per the standards e.g. if file name is using timestamp, it should get replaced properly with actual timestamp at the time of exporting the file
3. Check for date format if exported Excel file contains date columns
4. Check number formatting for numeric or currency values. Formatting should be same as shown on page
5. Exported file should have columns with proper column names
6. Default page sorting should be carried in exported file as well
7. Excel file data should be formatted properly with header and footer text, date, page numbers etc. values for all pages
8. Check if data displayed on page and exported Excel file is same
9. Check export functionality when pagination is enabled
10. Check if export button is showing proper icon according to exported file type e.g. Excel file icon for xls files
11. Check export functionality for files with very large size
12. Check export functionality for pages containing special characters. Check if these special characters are exported properly in Excel file
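
Checks 5 and 8 above (correct column names, exported data matching the page) can be asserted directly against the exported file. The sketch below assumes the openpyxl package is available and that the page data has already been captured; the file name and column names are placeholders.

```python
# Hedged sketch for checks 5 and 8: exported Excel content should match the page.
from openpyxl import load_workbook

EXPORT_PATH = "export.xlsx"                        # hypothetical exported file
page_columns = ["Order ID", "Customer", "Amount"]  # data captured from the page
page_rows = [(1001, "Jane Doe", 99.5), (1002, "Raj Patel", 42.0)]

sheet = load_workbook(EXPORT_PATH, read_only=True).active
exported = list(sheet.iter_rows(values_only=True))

assert list(exported[0]) == page_columns, "Column headers differ from the page"
assert [tuple(row) for row in exported[1:]] == page_rows, "Exported rows differ from the page"
```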

Performance Testing Test Scenarios

1. Check if page load time is within acceptable range
2. Check page load on slow connections
3. Check response time for any action under light, normal, moderate and heavy load conditions
4. Check performance of database stored procedures and triggers
5. Check database query execution time
6. Check for load testing of application
7. Check for stress testing of application
8. Check CPU and memory usage under peak load condition
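
As a starting point for checks 1 and 3 above, the response time for a single page can be measured directly; full load and stress testing (checks 6-8) would normally use a dedicated tool such as JMeter. The URL and the 3-second budget below are assumptions.

```python
# Hedged sketch for checks 1 and 3: fail if the page exceeds an assumed time budget.
import time
import requests

PAGE_URL = "https://example.com/dashboard"  # hypothetical page under test
MAX_SECONDS = 3.0                           # assumed acceptable load time

start = time.perf_counter()
response = requests.get(PAGE_URL, timeout=30)
elapsed = time.perf_counter() - start

assert response.status_code == 200, f"Unexpected status {response.status_code}"
assert elapsed <= MAX_SECONDS, f"Page took {elapsed:.2f}s, budget is {MAX_SECONDS}s"
print(f"{PAGE_URL} responded in {elapsed:.2f}s")
```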

Security Testing Test Scenarios

1. Check for SQL injection attacks
2. Secure pages should use HTTPS protocol
3. Page crash should not reveal application or server info. Error page should be displayed for this
4. Escape special characters in input
5. Error messages should not reveal any sensitive information
6. All credentials should be transferred over an encrypted channel
7. Test password security and password policy enforcement
8. Check application logout functionality
9. Check for Brute Force Attacks
10. Cookie information should be stored in encrypted format only
11. Check session cookie duration and session termination after timeout or logout
12. Session tokens should be transmitted over a secured channel
13. Password should not be stored in cookies
14. Test for Denial of Service attacks
15. Test for memory leakage
16. Test unauthorized application access by manipulating variable values in browser address bar
17. Test file extension handling so that exe files are not uploaded and executed on the server
18. Sensitive fields like passwords and credit card information should not have auto complete enabled
19. File upload functionality should use file type restrictions and also anti-virus for scanning uploaded files
20. Check if directory listing is prohibited
21. Password and other sensitive fields should be masked while typing
22. Check if forgot password functionality is secured with features like temporary password expiry after a specified number of hours and a security question being asked before changing or requesting a new password
23. Verify CAPTCHA functionality
24. Check if important events are logged in log files
25. Check if access privileges are implemented correctly
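
A few of these checks (2, 10 and 11 above) can be spot-checked with a short script. The sketch below covers only a small slice of the list; the URL is a placeholder, and the requests library is assumed to be available.

```python
# Hedged sketch for checks 2, 10 and 11: HTTPS enforcement and cookie flags.
import requests

LOGIN_URL = "http://example.com/login"  # hypothetical page, requested over plain HTTP

response = requests.get(LOGIN_URL, allow_redirects=True, timeout=10)
assert response.url.startswith("https://"), "Login page is not forced onto HTTPS"

set_cookie = response.headers.get("Set-Cookie", "")
if set_cookie:
    assert "secure" in set_cookie.lower(), "Session cookie is missing the Secure flag"
    assert "httponly" in set_cookie.lower(), "Session cookie is missing the HttpOnly flag"
```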

Penetration testing test cases
What is Penetration Testing?
It’s the process of identifying security vulnerabilities in an application by evaluating the system or network with various malicious techniques. The purpose of this test is to secure important data from outsiders like hackers who could gain unauthorized access to the system. Once a vulnerability is identified, it is used to exploit the system in order to gain access to sensitive information.
Causes of vulnerabilities:
- Design and development errors
- Poor system configuration
- Human errors

Why Penetration testing?

- Financial data must be secured while transferring between different systems
- Many clients are asking for pen testing as part of the software release cycle
- To secure user data
- To find security vulnerabilities in an application

It’s very important for any organization to identify the security issues present in its internal network and computers. Using this information, the organization can plan a defense against any hacking attempt. User privacy and data security are the biggest concerns nowadays. Imagine if a hacker managed to get hold of the user details of a social networking site like Facebook. An organization can face legal issues due to a small loophole left in a software system. Hence, big organizations are looking for PCI compliance certifications before doing any business with third-party clients.

What should be tested?
- Software
- Hardware
- Network
- Process 

Penetration Testing Types: 
1) Social Engineering: Human errors are the main cause of security vulnerabilities. Security standards and policies should be followed by all staff members to avoid social engineering penetration attempts. Examples of these standards include not mentioning any sensitive information in email or phone communication. Security audits can be conducted to identify and correct process flaws.
2) Application Security Testing: Using software methods one can verify if the system is exposed to security vulnerabilities. 
3) Physical Penetration Test: Strong physical security methods are applied to protect sensitive data. This is generally useful in military and government facilities. All physical network devices and access points are tested for possibilities of any security breach.

Pen Testing Techniques: 
1) Manual penetration test
2) Using automated penetration test tools
3) Combination of both manual and automated process
The third approach is the most common, as it helps identify all kinds of vulnerabilities.

Penetration Testing Tools: 
Automated tools can be used to identify some standard vulnerabilities present in an application. Pentest tools scan the code to check whether malicious code is present that could lead to a potential security breach. Pentest tools can also verify security loopholes present in the system, such as weak data encryption techniques and hard-coded values like usernames and passwords.

Criteria to select the best penetration tool: 
- It should be easy to deploy, configure and use.
- It should scan your system easily.
- It should categorize vulnerabilities based on severity so that those needing an immediate fix can be prioritized.
- It should be able to automate verification of vulnerabilities.
- It should re-verify exploits found previously.
- It should generate detailed vulnerability reports and logs.

Traceability Metrics

What is a Traceability Matrix?

The focus of any testing engagement is, and should be, maximum test coverage. By coverage, we simply mean that we need to test everything there is to be tested. The aim of any testing project should be 100% test coverage.
A Requirements Traceability Matrix, to begin with, establishes a way to make sure we place checks on the coverage aspect. It helps in creating a snapshot to identify coverage gaps.

How to Create a Traceability Matrix?

To begin with, we need to know exactly what it is that needs to be tracked or traced.
Testers start writing their test scenarios/objectives and eventually the test cases based on some input documents – Business requirements document, Functional Specifications document and Technical design document (optional).
Let’s suppose the following is our Business requirements document (BRD): (Download this sample BRD in excel format)

Below is our Functional Specifications document (FSD), based on the interpretation of the Business requirements document (BRD) and its adaptation to computer applications. Ideally, all the aspects of the FSD need to be addressed in the BRD. But for simplicity’s sake I have only used points 1 and 2.
Sample FSD from Above BRD: (Download this sample FSD in excel format)

Note: the BRD and FSD are not documented by QA teams. We are merely the consumers of these documents, along with the other project teams.
Based on the above two input documents, as the QA team we came up with the below list of high-level scenarios for us to test.
Sample Test Scenarios from the Above BRD and FSD: (Download this sample test Scenarios file)

Once we arrive here, now would be a good time to start creating the requirements traceability matrix.
I personally prefer a very simple Excel sheet with columns for each document that we wish to track. Since the business requirements and functional requirements are not numbered uniquely, we are going to use the section numbers in the documents to track them. (You can choose to track based on line numbers or bulleted-point numbers etc. depending on what makes most sense for your case in particular.)
Here is how a simple Traceability Matrix would look for our example:

[Figure: simple Traceability Matrix (http://cdn2.softwaretestinghelp.com/wp-content/qa/uploads/2013/10/simple-Traceability-Matrix.jpg)]
The above document establishes a trace from the BRD to the FSD and eventually to the test scenarios. By creating a document like this, we can make sure every aspect of the initial requirements has been taken into consideration by the testing team when creating their test suites.
You can leave it this way. However, in order to make it more readable, I prefer including the section names. This will enhance understanding when this document is shared with the client or any other teams. The outcome is as below:
[Figure: simple Traceability Matrix 1, with section names included]
Again, the choice to use the former format or the latter is yours.
This is the preliminary version of your TM, but it generally does not serve its purpose if you stop here. Maximum benefits can be reaped from it when you extrapolate it all the way to defects.
Let’s see how.
For each test scenario that you came up with, you are going to have at least one test case, possibly more. So, include another column when you get there and write the test case IDs as shown below:

[Figure: simple Traceability Matrix 2, with test case IDs added]

At this stage, the Traceability Matrix can be used to find gaps. For example, in the above Traceability Matrix you see that there are no test cases written for FSD section 1.2.
As a general rule, any empty spaces in the Traceability Matrix are potential areas for investigation. So a gap like this can mean one of two things:
  1. The test team has somehow missed considering the “Existing user” functionality.
  2. The “Existing user” functionality has been deferred to later or removed from the application’s functionality requirements. In this case, the TM shows an inconsistency in the FSD or BRD – which means that an update on FSD and/or BRD documents should be done.
If it is scenario 1, it indicates the places where the test team needs to work some more to ensure 100% coverage.
In scenario 2, the TM not only shows gaps, it also points to incorrect documentation that needs immediate correction.
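
A minimal sketch of that gap check, assuming the matrix is kept as a simple mapping from FSD section to the test case IDs written for it (the section labels and test case IDs below are illustrative, not taken from the sample BRD/FSD):

```python
# Hedged sketch: find FSD sections with no traced test cases (coverage gaps).
traceability = {
    "FSD 1.1 (New user)":      ["TC-001", "TC-002", "TC-003"],
    "FSD 1.2 (Existing user)": [],  # empty -> potential coverage gap
    "FSD 2.1 (Send mail)":     ["TC-010"],
}

gaps = [section for section, test_cases in traceability.items() if not test_cases]
for section in gaps:
    print(f"Coverage gap: no test cases traced to {section}")
```
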
Let us now expand the TM to include test case execution status and defects.
The below version of the Traceability Matrix is generally prepared during or after test execution:
[Figure: Requirements Traceability Matrix, expanded with execution status and defects]
Download requirements traceability matrix template here: Traceability Matrix in excel format

Important Points to Note About Traceability Matrix

The following are the important points to note about this version of the Traceability Matrix:
1) The execution status is also displayed.  During execution, it gives a consolidated snapshot of how work is progressing.
2) Defects: When this column is used to establish backward traceability, we can tell that the “New user” functionality is the most flawed. Instead of reporting that so-and-so test cases failed, the TM provides transparency back to the business requirement that has the most defects, thus showcasing quality in terms of what the client desires.
3) As a further step, you can color code the defect IDs to represent their states. For example, a defect ID in red can mean it is still Open, while one in green can mean it is Closed. When this is done, the TM works as a health check report, displaying whether the defects corresponding to a certain BRD or FSD functionality are open or closed.
4) If there is a technical design document or use cases or any other artifacts that you would like to track you can always expand the above created document to suit your needs by adding additional columns.
To sum up, a requirements traceability Matrix helps in:
  1. Ensuring 100% test coverage
  2. Showing requirement/document inconsistencies
  3. Displaying the overall defect/execution status with focus on business requirements.
  4. If a certain business and/or functional requirement were to change, a TM helps estimate or analyze the impact on the QA team’s work in terms of revisiting/reworking on the test cases.
Additionally,
  1. A TM is not a manual testing specific tool, it can be used for automation projects as well. For an automation project, the test case ID can indicate the automation test script name.
  2. It is also not a tool that can be used just by the QAs. The development team can use the same to map BRD/FSD requirements to blocks/units/conditions of code created to make sure all the requirements are developed.
  3. Test management tools like HP ALM come with the inbuilt traceability feature.
An important point to note is that the way you maintain and update your Traceability Matrix determines the effectiveness of its use. If not updated often or updated incorrectly, the tool becomes a burden instead of a help and creates the impression that the tool by itself is not worth using.

Usage:

1. Consider a scenario where the client changes a requirement (something quite usual in the practical world) and adds a field, Recipient Name, to the functionality. So now you need to enter both the email ID and the name to send a mail.
2. Obviously you will need to change your test cases to meet this new requirement.
3. But by now your test case suite is very large and it is very difficult to trace the test cases affected by this requirement change.
4. Instead, if the requirements were numbered and were referenced in the test case suite, it would have been very easy to track the test cases that are affected. This is nothing but traceability.
5. The traceability matrix links a business requirement to its corresponding functional requirement, right up to the corresponding test cases.
6. If a test case fails, traceability helps determine the corresponding functionality easily.
7. It also helps ensure that all requirements are tested.



Defect Age Metrics

Defect Age can be measured in terms of any of the following:
  • Time
  • Phases
DEFECT AGE (IN TIME)
Definition
Defect Age (in Time) is the difference in time between the date a defect is detected and the current date (if the defect is still open) or the date the defect was fixed (if the defect is already fixed).
Elaboration
  • The ‘defects’ are confirmed and assigned (not just reported).
  • Dropped defects are not counted.
  • The difference in time can be calculated in hours or in days.
  • ‘fixed’ means that the defect is verified and closed; not just ‘completed’ by the developer.
Defect Age Formula
Defect Age in Time = Defect Fix Date (OR Current Date) – Defect Detection Date
Normally, average age of all defects is calculated.
Example
If a defect was detected on 01/01/2009 10:00:00 AM and closed on 01/04/2009 12:00:00 PM, the Defect Age is 74 hours.
Uses
  • For determining the responsiveness of the development/testing team. The lesser the age, the better the responsiveness.
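
The 74-hour example can be verified with simple datetime arithmetic, as in this small sketch:

```python
# Recomputing the example above: detected 01/01/2009 10:00 AM, closed 01/04/2009 12:00 PM.
from datetime import datetime

detected = datetime(2009, 1, 1, 10, 0, 0)
fixed = datetime(2009, 1, 4, 12, 0, 0)

age_hours = (fixed - detected).total_seconds() / 3600
print(age_hours)  # 74.0
```
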
DEFECT AGE (IN PHASES)
Definition
Defect Age (in Phases) is the difference in phases between the defect injection phase and the defect detection phase.
Elaboration
  • ‘defect injection phase’ is the phase in the software life cycle where the defect was introduced.
  • ‘defect detection phase’ is the phase in the software life cycle where the defect was identified.
Defect Age Formula
Defect Age in Phases = Defect Detection Phase – Defect Injection Phase
Normally, average of all defects is calculated.
Example
Let’s say the software life cycle has the following phases:
  1. Requirements Development
  2. High-Level Design
  3. Detail Design
  4. Coding
  5. Unit Testing
  6. Integration Testing
  7. System Testing
  8. Acceptance Testing
If a defect is identified in System Testing and the defect was introduced in Requirements Development, the Defect Age is 6.
Uses
  • For assessing the effectiveness of each phase and any review/testing activities. The lesser the age, the better the effectiveness.
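
The phase-based age from the example above can likewise be computed as the difference of phase positions in the life cycle list:

```python
# Recomputing the example: injected in Requirements Development, detected in System Testing.
PHASES = [
    "Requirements Development", "High-Level Design", "Detail Design", "Coding",
    "Unit Testing", "Integration Testing", "System Testing", "Acceptance Testing",
]


def defect_age_in_phases(injected_phase, detected_phase):
    return PHASES.index(detected_phase) - PHASES.index(injected_phase)


print(defect_age_in_phases("Requirements Development", "System Testing"))  # 6
```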

Defect Density Metrics

DEFINITION
Defect Density is the number of confirmed defects detected in software/component during a defined period of development/operation divided by the size of the software/component.
ELABORATION
The ‘defects’ are:
  • confirmed and agreed upon (not just reported).
  • Dropped defects are not counted.
The period might be for one of the following:
  • for a duration (say, the first month, the quarter, or the year).
  • for each phase of the software life cycle.
  • for the whole of the software life cycle.
The size is measured in one of the following:
  • Function Points (FP)
  • Source Lines of Code
DEFECT DENSITY FORMULA

Defect Density = Number of Confirmed Defects / Size of the Software/Component

USES
  • For comparing the relative number of defects in various software components so that high-risk components can be identified and resources focused towards them.
  • For comparing software/products so that quality of each software/product can be quantified and resources focused towards those with low quality.
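
A small sketch of the comparison described above, using defects per KLOC (Function Points work the same way); the component names and figures are illustrative only:

```python
# Hedged sketch: defect density per component to highlight high-risk areas.
components = {
    "login":    {"defects": 5,  "kloc": 2.0},
    "payments": {"defects": 30, "kloc": 6.0},
    "reports":  {"defects": 8,  "kloc": 8.0},
}

for name, data in components.items():
    density = data["defects"] / data["kloc"]
    print(f"{name}: {density:.2f} defects/KLOC")
# 'payments' (5.00 defects/KLOC) stands out as the high-risk component here.
```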

Defect Detection Efficiency Metrics

DEFINITION
Defect Detection Efficiency (DDE) is the number of defects detected during a phase/stage that were injected during that same phase, divided by the total number of defects injected during that phase.
ELABORATION
  • defects:
    • Are confirmed and agreed upon (not just reported).
    • Dropped defects are not counted.
  • phase:
    • Can be any phase in the software development life cycle where defects can be injected AND detected. For example, Requirement, Design, and Coding.
  • injected:
    • The phase a defect is ‘injected’ in is identified by analyzing the defects [For instance, a defect can be detected in System Testing phase but the cause of the defect can be due to wrong design. Hence, the injected phase for that defect is Design phase.]
FORMULA
  • DDE = (Number of Defects Injected AND Detected in a Phase / Total Number of Defects Injected in that Phase) x 100 %
UNIT
  • Percentage (%)
TARGET VALUE
  • The ultimate target value for Defect Detection Efficiency is 100% which means that all defects injected during a phase are detected during that same phase and none are transmitted to subsequent phases. [Note: the cost of fixing a defect at a later phase is higher.]
USES
  • For measuring the quality of the processes (process efficiency) within software development life cycle; by evaluating the degree to which defects introduced during that phase/stage are eliminated before they are transmitted into subsequent phases/stages.
  • For identifying the phases in the software development life cycle that are the weakest in terms of quality control and for focusing on them.
EXAMPLE
Phase | Injected Defects | Injection-Phase Activity | Detected Defects | Detection-Phase Activity | Detected Defects Injected in the Same Phase | Defect Detection Efficiency
Requirements | 10 | Requirement Development | 4 | Requirement Review | 4 | 40.00% (= 4 / 10)
Design | 24 | Design | 16 | Design Review | 15 | 62.50% (= 15 / 24)
Coding | 155 | Coding | 23 | Code Review | 22 | 14.19% (= 22 / 155)
Unit Testing | 0 | - | 25 | Unit Testing | - | -
Integration Testing | 0 | - | 30 | Integration Testing | - | -
System Testing | 0 | - | 83 | System Testing | - | -
Acceptance Testing | 0 | - | 5 | Acceptance Testing | - | -
Operation | 0 | - | 3 | Operation | - | -
  • The DDE of Requirements Phase is 40.00% which can definitely be bettered. Requirement Review can be strengthened.
  • The DDE of Design Phase is 62.50% which is relatively good but can be bettered.
  • The DDE of Coding Phase is only 14.19% which can be bettered. The DDE for this phase is usually low because most defects get injected during this phase but one should definitely aim higher by strengthening Code Review. [Note: sometimes, Coding and Unit Testing phases are combined.]
  • The other Phases like Integration Testing etc do not have DDE because defects do not get Injected during these phases.
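
The DDE figures in the table can be reproduced directly from the formula, as this small sketch shows:

```python
# Recomputing the table's DDE values: detected-in-same-phase / injected x 100%.
phases = {
    "Requirements": {"injected": 10,  "detected_same_phase": 4},
    "Design":       {"injected": 24,  "detected_same_phase": 15},
    "Coding":       {"injected": 155, "detected_same_phase": 22},
}

for phase, counts in phases.items():
    dde = counts["detected_same_phase"] / counts["injected"] * 100
    print(f"{phase}: DDE = {dde:.2f}%")  # 40.00%, 62.50%, 14.19%
```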

Cost of Quality Metrics

DEFINITION
Cost of Quality (COQ) is a measure that quantifies the cost of control/conformance and the cost of failure of control/non-conformance. In other words, it sums up the costs related to prevention and detection of defects and the costs due to occurrences of defects.
  • Definition by ISTQB: cost of quality: The total costs incurred on quality activities and issues and often split into prevention costs, appraisal costs, internal failure costs and external failure costs.
  • Definition by QAI: Money spent beyond expected production costs (labor, materials, equipment) to ensure that the product the customer receives is a quality (defect free) product. The Cost of Quality includes prevention, appraisal, and correction or repair costs.
EXPLANATION
  • Cost of Control (Also known as Cost of Conformance)
    • Prevention Cost
      • The cost arises from efforts to prevent defects.
      • Example: Training costs, quality planning costs
    • Appraisal Cost
      • The cost arises from efforts to detect defects.
      • Example: Quality Control costs
  • Cost of Failure of Control (Also known as Cost of Non-Conformance)
    • Internal Failure Cost
      • The cost arises from defects identified internally and efforts to correct them.
      • Example: Cost of Rework (Fixing of internal defects and re-testing)
    • External Failure Cost
      • The cost arises from defects identified by the client or end-users and efforts to correct them.
      • Example: Cost of Rework (Fixing of external defects and re-testing) and any other costs due to external defects (Product service/liability/recall, etc)
FORMULA / CALCULATION
 Cost of Quality (COQ) = Cost of Control + Cost of Failure of Control
 where
Cost of Control = Prevention Cost + Appraisal Cost
 and
Cost of Failure of Control = Internal Failure Cost + External Failure Cost
NOTES
  • In its simplest form, COQ can be calculated in terms of effort (hours/days).
  • A better approach will be to calculate COQ in terms of money (converting the effort into money and adding any other tangible costs like test environment setup).
  • The best approach will be to calculate COQ as a percentage of total cost. This allows for comparison of COQ across projects or companies.
  • To ensure impartiality, it is advised that the Cost of Quality of a project/product be calculated and reported by a person external to the core project/product team (Say, someone from the Accounts Department).
  • It is desirable to keep the Cost of Quality as low as possible. However, this requires a fine balancing of costs between Cost of Control and Cost of Failure of Control. In general, a higher Cost of Control results in a lower Cost of Failure of Control. But, the law of diminishing returns holds true here as well.
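
A minimal sketch of the calculation, expressed as a percentage of total project cost as suggested above; all figures are illustrative only:

```python
# Hedged sketch: COQ = Cost of Control + Cost of Failure of Control.
prevention_cost = 20_000        # e.g. training, process definition
appraisal_cost = 35_000         # e.g. reviews, test execution
internal_failure_cost = 15_000  # e.g. rework and re-testing before release
external_failure_cost = 10_000  # e.g. fixing defects reported by the client
total_project_cost = 400_000

cost_of_control = prevention_cost + appraisal_cost
cost_of_failure_of_control = internal_failure_cost + external_failure_cost
coq = cost_of_control + cost_of_failure_of_control

print(f"COQ = {coq} ({coq / total_project_cost:.1%} of total project cost)")
```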

Software Testing Myths and Facts


Just as every field has its myths, so does the field of Software Testing. Software testing myths have arisen primarily due to the following:
  • Lack of authoritative facts.
  • Evolving nature of the industry.
  • General flaws in human logic.
Some of the myths are explained below, along with their related facts:
  1. MYTH: Quality Control = Testing.
    • FACT: Testing is just one component of software quality control. Quality Control includes other activities such as Reviews.
  2. MYTH: The objective of Testing is to ensure a 100% defect-free product.
    • FACT: The objective of testing is to uncover as many defects as possible. Identifying all defects and getting rid of them is impossible.
  3. MYTH: Testing is easy.
    • FACT: Testing can be difficult and challenging (sometimes, even more so than coding).
  4. MYTH: Anyone can test.
    • FACT: Testing is a rigorous discipline and requires many kinds of skills.
  5. MYTH: There is no creativity in testing.
    • FACT: Creativity can be applied when formulating test approaches, when designing tests, and even when executing tests.
  6. MYTH: Automated testing eliminates the need for manual testing.
    • FACT: 100% test automation cannot be achieved. Manual Testing, to some level, is always necessary.
  7. MYTH: When a defect slips, it is the fault of the Testers.
    • FACT: Quality is the responsibility of all members/stakeholders, including developers, of a project.
  8. MYTH: Software Testing does not offer opportunities for career growth.
    • FACT: Gone are the days when users had to accept whatever product was dished out to them, no matter what the quality. With the abundance of competing software and increasingly demanding users, the need for software testers to ensure high quality will continue to grow.