25 Fundamentals of Software Testing III

R. Baskaran

 

FUNDAMENTALS OF SOFTWARE TESTING

 

Software testing is the evaluation of a system with the intention of finding errors, faults, or bugs. It also checks the functionality of the system against its specified requirements.

 

LEARNING OBJECTIVES 

 

•  To execute a program with the intent of finding an error.

•  To check if the system meets its requirements and can be executed successfully in the intended environment.

•  To check if the system is “Fit for purpose”.

•  To check if the system does what it is expected to do.

 

STAGES OF TESTING 

 

Different kinds of testing are performed during the various phases of the development cycle:

 

·  Module or unit testing.

·  Integration testing.

·  Function testing.

·  Performance testing.

·  Acceptance testing.

·  Installation testing.

UNIT TESTING 

 

Unit testing is the testing and validation of the individual units or modules of the software. Unit testing validates and tests the following (a brief test sketch follows the list):

 

·  Algorithms and logic

·  Data structures (global and local)

·  Interfaces

·  Independent paths

·  Boundary conditions

·  Error handling

·  Formal verification

·  Testing the program itself by performing black box and white box testing.
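To make this concrete, here is a minimal sketch of a unit test written with Python's unittest framework. The apply_discount function and its pricing rules are hypothetical, chosen only to illustrate testing of logic, boundary conditions, and error handling.

    import unittest

    def apply_discount(price, percent):
        # Hypothetical unit under test: discount a price by a percentage.
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return price * (100 - percent) / 100

    class TestApplyDiscount(unittest.TestCase):
        def test_typical_logic(self):
            # Algorithms and logic: a 25% discount on 200 yields 150.
            self.assertEqual(apply_discount(200, 25), 150)

        def test_boundary_conditions(self):
            # Boundaries: 0% and 100% are the edges of the valid range.
            self.assertEqual(apply_discount(80, 0), 80)
            self.assertEqual(apply_discount(80, 100), 0)

        def test_error_handling(self):
            # Error handling: invalid input must raise, not silently compute.
            with self.assertRaises(ValueError):
                apply_discount(80, 101)

    if __name__ == "__main__":
        unittest.main()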

 

INTEGRATION TESTING 

 

Integration testing is needed because one module can have an adverse effect on another. Sub-functions, when combined, may not produce the desired major function. Individually acceptable imprecision in calculations may be magnified to unacceptable levels. Interfacing errors not detected in unit testing may appear in this phase. Timing problems (in real-time systems) and resource contention problems are likewise not detectable by unit testing.

 

Top-Down Integration 

 

The main control module is used as a test driver, and stubs are substituted for all modules directly subordinate to it. Depending on the integration approach selected (depth-first or breadth-first), subordinate stubs are replaced by real modules one at a time, and tests are run as each module is integrated. On the successful completion of a set of tests, another stub is replaced with a real module. Regression testing is performed to ensure that errors have not developed as a result of integrating new modules.
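As an illustration of the stub mechanism, the sketch below exercises a main control module while a subordinate module is replaced by a stub returning canned data. The module names (report_main, fetch_rates) are hypothetical; only the stubbing pattern matters.

    import unittest
    from unittest import mock

    def fetch_rates(region):
        # Subordinate module, not yet integrated: a stub replaces it in tests.
        raise NotImplementedError("real module not yet integrated")

    def report_main(region):
        # Main control module: drives its subordinate directly.
        rates = fetch_rates(region)
        return sum(rates) / len(rates)

    class TestTopDownStep(unittest.TestCase):
        def test_main_with_stubbed_subordinate(self):
            # The stub stands in for the missing lower-level module.
            with mock.patch(__name__ + ".fetch_rates", return_value=[2.0, 4.0]):
                self.assertEqual(report_main("east"), 3.0)

    if __name__ == "__main__":
        unittest.main()

When the real fetch_rates is ready, the patch is removed and the same test is rerun as a regression check.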

Advantages and Disadvantages of Top-Down Integration 

 

Advantages: No test drivers are needed, and errors in the major control modules are found early.

 

Disadvantages: Test stubs are needed, and errors in low-level modules are discovered late.

 

Bottom-Up Integration 

 

Integration begins with the lowest-level modules, which are combined into clusters, or builds, that perform specific software sub-functions. Drivers (control programs written for testing) are written to coordinate test-case input and output. The cluster is tested; then the drivers are removed and clusters are combined, moving upward in the program structure.
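The sketch below shows such a driver for a hypothetical low-level cluster (parse_record and to_celsius); the driver is throwaway control code that feeds test-case input and checks output.

    def parse_record(line):
        # Low-level module: split "name,fahrenheit" into a (name, float) pair.
        name, temp = line.split(",")
        return name.strip(), float(temp)

    def to_celsius(fahrenheit):
        # Low-level module: convert Fahrenheit to Celsius.
        return (fahrenheit - 32) * 5 / 9

    def driver():
        # Test driver: coordinates test-case input and expected output.
        cases = [("probe-1, 212", ("probe-1", 100.0)),
                 ("probe-2, 32", ("probe-2", 0.0))]
        for line, (name, expected) in cases:
            parsed_name, f = parse_record(line)
            assert parsed_name == name
            assert abs(to_celsius(f) - expected) < 1e-9
        print("cluster OK")

    if __name__ == "__main__":
        driver()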

 

VALIDATION TESTING 

 

Validation testing determines whether the software meets all of the requirements defined in the SRS; writing down the requirements is therefore essential. Regression testing is performed to determine whether the software still meets all of its requirements in light of changes and modifications. Regression testing involves selectively repeating existing validation tests, not developing new ones.
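As a sketch of that idea, the following rebuilds a regression suite from already-written validation tests using Python's unittest; the test names and the changed-area filter are illustrative assumptions.

    import unittest

    class ValidationTests(unittest.TestCase):
        # Existing validation tests, written once against the SRS.
        def test_login(self):
            self.assertTrue(True)        # placeholder body
        def test_checkout(self):
            self.assertTrue(True)        # placeholder body
        def test_reporting(self):
            self.assertTrue(True)        # placeholder body

    def regression_suite(changed_areas):
        # Selectively repeat existing tests: nothing new is written; only
        # tests touching the changed areas are re-run.
        loader = unittest.TestLoader()
        suite = unittest.TestSuite()
        for name in loader.getTestCaseNames(ValidationTests):
            if any(area in name for area in changed_areas):
                suite.addTest(ValidationTests(name))
        return suite

    if __name__ == "__main__":
        # Suppose only the checkout code changed in this release.
        unittest.TextTestRunner().run(regression_suite(["checkout"]))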

 

ALPHA AND BETA TESTING 

 

Alpha testing is conducted at the developer’s site with customers in attendance; beta testing is conducted at customer sites, with results reported back to the developer. In both cases, it is best to provide customers with an outline of the things you would like them to focus on and with specific test scenarios to execute. Work with customers who are actively involved, and make a commitment to fix the defects that they discover.

 

ACCEPTANCE TESTING 

 

Acceptance testing is similar to validation testing except that customers are present or directly involved. Usually these tests are developed by the customer.

 

TEST METHODS 

 

Test methods include the following (a short sketch contrasting the first two follows the list):

 

·  White box or glass box testing

·  Black box testing

·  Top-down and bottom-up for performing incremental integration

·  ALAC (Act-Like-A-Customer)
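The sketch below contrasts black box and white box testing on one hypothetical function: the black-box tests are derived from the specification alone, while the white-box tests are derived from the code's branch structure.

    import unittest

    def classify(n):
        # Hypothetical unit under test: label an integer's sign.
        if n < 0:
            return "negative"
        return "non-negative"

    class BlackBoxTests(unittest.TestCase):
        # Derived from the specification alone, without reading the code.
        def test_spec_examples(self):
            self.assertEqual(classify(-5), "negative")
            self.assertEqual(classify(7), "non-negative")

    class WhiteBoxTests(unittest.TestCase):
        # Derived from the code structure: exercise every branch, including
        # the n == 0 input that sits on the branch boundary.
        def test_every_branch(self):
            self.assertEqual(classify(0), "non-negative")
            self.assertEqual(classify(-1), "negative")

    if __name__ == "__main__":
        unittest.main()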

 

Test Types 

 

The types of tests include the following (a small load-test sketch follows the list):

 

·  Functional tests

·  Algorithmic tests

·  Positive tests

·  Negative tests

·  Usability tests

·  Boundary tests

·  Startup/shutdown tests

·  Platform tests

·  Load/stress tests
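As one concrete case, here is a minimal load-test sketch: it fires many concurrent calls at a hypothetical handle_request function and verifies every response. A real load test would also measure latency and throughput under increasing load.

    import concurrent.futures

    def handle_request(i):
        # Hypothetical request handler placed under load.
        return i * 2

    def load_test(workers=32, requests=1000):
        # Fire many concurrent requests and verify every response.
        with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
            results = list(pool.map(handle_request, range(requests)))
        assert results == [i * 2 for i in range(requests)]
        print(requests, "requests handled by", workers, "workers")

    if __name__ == "__main__":
        load_test()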

 

CONCURRENT DEVELOPMENT/ VALIDATION TESTING MODEL 

 

Concurrent testing involves conducting informal validation while development is still going on. It provides an opportunity for validation tests to be developed and debugged early in the software development process, which makes formal validation less eventful, since most of the problems have already been found and fixed. This model provides early feedback to software engineers.

 

Validation Readiness Review 

 

·  During informal validation developers can make any changes needed in order to comply with the SRS.

·  During informal validation QA runs tests and makes changes as necessary in order for tests to comply with the SRS.

·  During formal validation the only changes that can be made are bug fixes in response to bugs reported during formal validation testing. No new features can be added at this time.

·  During formal validation the same set of tests run during informal validation is run again; no new tests are added.

 


 

Entrance Criteria for Formal Validation Testing

 

The entrance criteria for formal validation testing are as follows:

  • Software development must be completed (a precise definition of “completed” is required).
  • The test plan must be reviewed, approved, and placed under document control.
  • A requirements inspection should have been performed on the SRS.
  • Design inspections should have been performed on the SDDs (Software Design Descriptions).
  • Code inspections must be performed on all “critical modules”.
  • All test scripts should be completed and the software validation test procedure document should be reviewed, approved, and placed under document control.
  • Selected test scripts should be reviewed, approved, and placed under document control.
  • All test scripts should have been executed at least once.
  • CM (configuration management) tools must be in place and all source code should be under configuration control.
  • Software problem reporting procedures should be in place.
  • Validation testing completion criteria should be developed, reviewed, and approved.

 

Formal Validation

 

During formal validation, the same tests that were run during informal validation are executed again and the results are recorded. A Software Problem Report (SPR) is submitted for each test that fails. SPR tracking records the status of every SPR (open, fixed, verified, deferred, or not a bug), and for each bug fixed, the SPR identifies the modules that were changed to fix it. Baseline change assessment is used to ensure that only modules that should have changed have changed and that no new features have slipped in.
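An SPR record can be as simple as the following sketch; the field names and statuses mirror the list above, but the format itself is an assumption, not a prescribed one.

    from dataclasses import dataclass, field
    from enum import Enum

    class Status(Enum):
        OPEN = "open"
        FIXED = "fixed"
        VERIFIED = "verified"
        DEFERRED = "deferred"
        NOT_A_BUG = "not a bug"

    @dataclass
    class SPR:
        # One Software Problem Report, filed for each failing formal test.
        number: int
        failing_test: str
        status: Status = Status.OPEN
        changed_modules: list = field(default_factory=list)  # set when fixed

    def allowed_changes(sprs):
        # Baseline change assessment: modules changed by fixed SPRs are the
        # only ones permitted to differ from the baseline.
        return {m for s in sprs if s.status is Status.FIXED
                  for m in s.changed_modules}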

 

Informal code reviews are selectively conducted on changed modules to ensure that new bugs are not being introduced. The time required to find and fix bugs (the find-fix cycle time) is tracked. Regression testing is performed using the following guidelines (a small selection sketch follows the list):

  • Use complexity measures to help determine which modules may need additional testing
  • Use judgment to decide which tests to rerun
  • Base decisions on knowledge of the software design and past history
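A sketch of the first guideline: given per-module complexity scores from some measurement tool (the module names and numbers below are invented), changed modules above a threshold are flagged for additional regression testing.

    # Hypothetical cyclomatic-complexity scores per module.
    complexity = {"parser.py": 18, "ui.py": 4, "billing.py": 23, "log.py": 2}
    changed = {"parser.py", "log.py"}      # modules touched by bug fixes

    THRESHOLD = 10    # judgment call: complex modules warrant extra testing

    for module in sorted(changed):
        extra = complexity.get(module, 0) > THRESHOLD
        plan = "rerun tests + additional tests" if extra else "rerun tests"
        print(module, "->", plan)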

 

Web Links

  • https://www.tutorialspoint.com/software_testing/software_testing_types.htm
  • https://softwaretestingfundamentals.com
  • www.softwaretestinghelp.com/types-of-software-testing/

 

Supporting & Reference Materials

  • Roger S. Pressman, “Software Engineering: A Practitioner’s Approach”, Fifth Edition, McGraw Hill, 2001.
  • Pankaj Jalote, “An Integrated Approach to Software Engineering”, Second Edition, Springer-Verlag, 1997.
  • Ian Sommerville, “Software Engineering”, Sixth Edition, Addison Wesley, 2000.