24 Fundamentals of Software Testing II
R. Baskaran
FUNDAMENTALS OF SOFTWARE TESTING
Software testing is the evaluation of a system with the intention of finding an error, fault, or bug. It also checks the functionality of the system to ensure that it meets the specified requirements.
LEARNING OBJECTIVES
• To execute a program with the intent of finding an error.
• To check if the system meets the requirements and can be executed successfully in the intended environment.
• To check if the system is “Fit for purpose”.
• To check if the system does what it is expected to do.
VERIFICATION vs. VALIDATION
Verification | Validation
Are we building the system right? | Are we building the right system?
Verification is the process of evaluating the products of a development phase to find out whether they meet the specified requirements. | Validation is the process of evaluating software at the end of the development process to determine whether it meets the customer's expectations and requirements.
The objective of Verification is to make sure that the product being developed conforms to the requirements and design specifications. | The objective of Validation is to make sure that the product actually meets the user's requirements, and to check whether the specifications were correct in the first place.
Activities involved in Verification: reviews, meetings and inspections. | Activities involved in Validation: testing, such as black box testing, white box testing, gray box testing, etc.
Verification is carried out by the QA team to check whether the software implementation conforms to the specification documents. | Validation is carried out by the testing team.
Execution of code does not come under Verification. | Execution of code comes under Validation.
The Verification process checks whether the outputs are according to the inputs. | The Validation process checks whether the software is accepted by the user.
Verification is carried out before Validation. | Validation is carried out just after Verification.
Items evaluated during Verification: plans, requirement specifications, design specifications, code, test cases, etc. | Item evaluated during Validation: the actual product or software under test.
The cost of errors caught in Verification is less than that of errors found in Validation. | The cost of errors caught in Validation is more than that of errors found in Verification.
It is basically manual checking of documents and files such as requirement specifications. | It is basically checking of the developed program against the requirement specification documents and files.
Good Testing Practices
Some of the good testing practices include:
- As the number of detected defects in a piece of software increases, the probability of the existence of more undetected defects also increases.
- Assign your best people to testing.
- Ensure that testability is a key objective in your software design.
- Never alter the program to make testing easier.
- Testing, like almost every other activity, must start with objectives.
How to Decide What to Test?
The program is divided into an application layer, subsystems, and low-level classes; test cases are written for each layer, and each layer is tested in turn.
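As a minimal sketch of this idea in shell, the script below runs one test suite per layer, in order from the low-level classes outward, and stops at the first layer that fails. The suite names are hypothetical placeholders for your own per-layer test programs.

    #!/bin/sh
    # Run test suites layer by layer, from low-level classes outward.
    # The suite names are hypothetical placeholders for your own test programs.
    for suite in ./test_lowlevel_classes ./test_subsystems ./test_application_layer; do
        echo "Testing layer: $suite"
        if ! "$suite"; then
            echo "Layer failed: $suite" >&2
            exit 1    # no point testing outer layers on top of a broken inner layer
        fi
    done
    echo "All layers passed"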
AUTOMATION TOOLS
Developers often write code and then sit in front of the keyboard, manually running through test cases. Automation tools help run test cases efficiently, saving time and resources. A few key tools make running tests a part of your build. The rule of scripting: if it can be automated, automate it. Used this way, automation tools are powerful.
Scripting
Imagine you have a bunch of executables that test various parts of your program, and you have written a shell script to run them one at a time. How do you know when one fails? You can check the exit code of a program run from a shell script: if [ $? -ne 0 ]; then … else … fi. If the executable exits abnormally (with a non-zero status), the then clause executes.
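As a minimal sketch of such a runner, the shell script below executes a set of test programs one at a time and checks each exit code exactly as described above. The executable names (./test_parser, ./test_network, ./test_ui) are hypothetical placeholders.

    #!/bin/sh
    # Minimal test-runner sketch: run each test executable and check its exit code.
    # The test names below are hypothetical placeholders for your own test programs.
    failures=0
    for prog in ./test_parser ./test_network ./test_ui; do
        "$prog"
        status=$?
        if [ $status -ne 0 ]; then
            echo "FAIL: $prog (exit code $status)"
            failures=$((failures + 1))
        else
            echo "PASS: $prog"
        fi
    done

    # Exit non-zero if anything failed, so a build script calling this runner also fails.
    if [ $failures -ne 0 ]; then
        echo "$failures test(s) failed"
        exit 1
    fi
    echo "All tests passed"

Because the runner itself exits with a non-zero status when any test fails, it can be wired into the build so that a failing test breaks the build, as suggested above.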
Strategic Approach to Testing
Testing begins at the component level and works outward toward the integration of the entire computer-based system. Different testing techniques are appropriate at different points in time. The developer of the software conducts testing and may be assisted by independent test groups on large projects. The role of the independent tester is to remove the conflict of interest inherent when the builder tests his or her own product. Testing and debugging are different activities, but debugging must be accommodated in any testing strategy. The developer needs to consider verification issues by asking "are we building the product right?" and validation issues by asking "are we building the right product?"
Strategic Testing Issues
The issues encountered during testing involve the following:
- Specify product requirements in a quantifiable manner before testing starts.
- Specify testing objectives explicitly.
- Identify the user classes of the software and develop a profile for each.
- Develop a test plan that emphasizes rapid cycle testing.
- Build robust software that is designed to test itself (e.g. use anti-bugging; see the sketch after this list).
- Use effective formal reviews as a filter prior to testing.
- Conduct formal technical reviews to assess the test strategy and test cases.
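As a hedged illustration of anti-bugging (software that checks its own assumptions while it runs), here is a small shell sketch; assert_true is a hypothetical helper, not a standard shell feature.

    #!/bin/sh
    # Anti-bugging sketch: the script verifies its own assumptions as it runs.
    # assert_true is a hypothetical helper, not a standard shell builtin.
    assert_true() {
        if ! "$@"; then
            echo "Assertion failed: $*" >&2
            exit 1
        fi
    }

    input_file=$1

    # Check preconditions before doing any real work.
    assert_true [ -n "$input_file" ]    # an input file name was supplied
    assert_true [ -r "$input_file" ]    # the file exists and is readable

    line_count=$(wc -l < "$input_file")

    # Check an intermediate result: the count must not be empty.
    assert_true [ -n "$line_count" ]

    echo "$input_file has $line_count lines"

When an assertion fires, the script fails loudly and early, which is the property this strategy asks for: software that helps test itself.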
Web Links
- https://www.tutorialspoint.com/software_testing/software_testing_types.htm
- https://softwaretestingfundamentals.com
- www.softwaretestinghelp.com/types-of-software-testing/
- https://resources.sei.cmu.edu/asset_files/presentation/2014_017_101_423696.pdf
Supporting & Reference Materials
- Roger S. Pressman, “Software Engineering: A Practitioner’s Approach”, Fifth Edition, McGraw Hill, 2001.
- Pankaj Jalote, “An Integrated Approach to Software Engineering”, Second Edition, Springer Verlag, 1997.
- Ian Sommerville, “Software Engineering”, Sixth Edition, Addison Wesley, 2000.