11 Verification vs. Validation

 

VERIFICATION VS. VALIDATION (V & V)

 

Verification:

 

Verification is the process of checking that the software conforms to its specification. It answers the question, “Are we building the product right?” (informally, “Did we build what we said we would?”).

 

Validation:

 

Validation is the process of checking whether the specification captures the customer’s needs. It answers the question, “Are we building the right product?”

 

The software should do what the user really requires. Verification is concerned with the phase containment of errors; validation, on the other hand, is concerned with the final product being free of errors.

 

The Verification and Validation (V & V) Process

 

Verification and validation (V&V) is the process of checking that a software system meets its specifications and fulfills its intended purpose. It may also be referred to as software quality control. V&V is a whole life-cycle process: it must be applied at each stage in the software process. It has two principal objectives, namely the discovery of defects in a system and the assessment of whether or not the system is usable in an operational situation. Peer document reviews, for example, serve both objectives.

 

V & V Goals

 

Verification and validation should establish confidence that the software is fit for its purpose. This does NOT mean that the software must be completely free of defects; rather, it must be good enough for its intended use. The type of use determines the degree of confidence that is needed.

V & V Confidence

 

The level of required confidence depends on the system’s purpose, the expectations of the system users and the current marketing environment for the system:

  1. Software function. The level of confidence required depends on how critical the software is to an organisation. For example, the level of confidence required for software that is used to control a safety-critical system is very much higher than that required for a prototype software system that has been developed to demonstrate some new ideas.
  2. User expectations. It is a sad reflection on the software industry that many users have low expectations of their software and are not surprised when it fails during use. They are willing to accept these system failures when the benefits of use outweigh the disadvantages. However, user tolerance of system failures has been decreasing since the 1990s. It is now less acceptable to deliver unreliable systems, so software companies must devote more effort to verification and validation.
  3. Marketing environment. When a system is marketed, the sellers of the system must take into account competing products, the price customers are willing to pay for a system, and the required schedule for delivering that system. Where a company has few competitors, it may decide to release a program before it has been fully tested and debugged because it wants to be first to market. Where customers are not willing to pay high prices for software, they may be willing to tolerate more software faults. All of these factors must be considered when deciding how much effort should be spent on the V & V process.

STATIC VS. DYNAMIC V & V

 

To satisfy the objectives of the verification and validation process, both static and dynamic techniques of system checking and analysis should be used. Static techniques are concerned with the analysis and checking of system representations such as the requirements document, design diagrams and the program source code. Dynamic techniques or tests involve exercising an implementation.

 

Static techniques include program inspections, analysis, and formal verification. Some theorists have suggested that these techniques should completely replace dynamic techniques in the verification and validation process and that testing is unnecessary. This is not a useful point of view and could be ‘considered harmful’: static techniques can only check the correspondence between a program and its specification (verification); they cannot demonstrate that the software is operationally useful.

 

Although static verification techniques are becoming more widely used, program testing is still the predominant verification and validation technique. Testing involves exercising the program using data like the real data processed by the program. The existence of program defects or inadequacies is inferred from unexpected system outputs. Testing may be carried out during the implementation phase to verify that the software behaves as its designer intended, and again once the implementation is complete; this later testing phase checks conformance with the requirements and assesses the reliability of the system.

 

Code and document inspections are concerned with the analysis of the static system representation to discover problems (static V & V). They may be supplemented by tool-based document and code analysis.

Software testing is concerned with exercising and observing product behaviour (dynamic V & V): the system is executed with test data and its operational behaviour is observed.

Static Testing

 

Under static testing, the code is not executed. Rather, the code, requirements documents, and design documents are checked manually to find errors; hence the name “static”. The main objective of this testing is to improve the quality of software products by finding errors in the early stages of the development cycle. Static testing is also called the non-execution technique or verification testing. It involves manual or automated reviews of the documents. This review is done during the initial phase of testing to catch defects early in the Software Testing Life Cycle (STLC). It examines work documents and provides review comments.

 

Code Walkthrough

 

Code walkthrough is a form of peer review in which a programmer leads the review process while the other team members ask questions and spot possible errors against development standards and other issues. The meeting is usually led by the author of the document under review and attended by other members of the team.

 

Desk Checking

 

Desk checking is the very first step in VV&T and is particularly useful in the early stages of development. It is the most traditional means of analyzing a program: checking it by hand while sitting at one’s desk. Desk checking becomes much more effective if it is conducted by another person or group of people.

 

This is because developers commonly become blind to their own mistakes.
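As a tiny, contrived illustration (not drawn from the text), desk checking the function below means tracing its variables by hand, line by line, for a sample input:

    /* Desk check sum_to(3) by hand:                      */
    /*   i:     1 -> 2 -> 3                               */
    /*   total: 0 -> 1 -> 3 -> 6   (returned value: 6)    */
    int sum_to(int n)
    {
        int total = 0;
        for (int i = 1; i <= n; i++)
            total += i;
        return total;
    }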

 

Documentation Checking

 

Documentation checking deals with the assessment of the quality of all aspects of documentation other than the quality of its content. The following document quality characteristics are assessed: accessibility; accuracy (measured in terms of verity and validity); completeness; clarity (measured in terms of unambiguity and understandability); maintainability; portability; and readability.

 

Code Inspections (Static V & V)

 

Code inspection in static verification and validation involves examining the source code with the aim of discovering anomalies and defects. Defects may be logical errors, anomalies in the code that might indicate an erroneous condition (e.g., an uninitialized variable), or non-compliance with coding standards. Inspections are intended for defect detection, not correction. They are a very effective technique for discovering errors, and they save time and money: the earlier in the development process an error is found, the better.

 

Inspection Success

 

Many different defects may be discovered in a single inspection, whereas with testing one defect may mask another, so that several executions/tests are required. Inspections reuse domain and programming knowledge, so reviewers are likely to have seen the types of errors that commonly arise.

 

Inspections and Testing

 

Inspections and testing are complementary and not opposing verification techniques. Inspections can check conformance with a specification, but not conformance with the customer’s real requirements. Inspections cannot check non-functional characteristics such as performance, usability, etc.

 

Inspection Preparation

 

Inspection preparation proceeds as follows: a precise specification must be available; team members must be familiar with the organization’s standards; syntactically correct code must be available; an error checklist should be prepared; and management must accept that inspection will increase costs early in the software process.

Ethics Question

 

A manager decides to use the reports of program inspections as an input to the staff appraisal process. These reports show who made and who discovered program errors. Is this ethical management behavior? Would it be ethical if the staff were informed in advance that this would happen? What difference might it make to the inspection process?

 

Inspection Procedure

 

The inspection is planned, and a system overview is presented to the inspection team. Code and associated documents are distributed to the inspection team in advance. The inspection takes place and discovered errors are noted. Modifications are made to repair discovered errors, and re-inspection may or may not be required.

Inspection Teams

 

Inspection teams are made up of: the author of the code being inspected; an inspector, who finds errors, omissions, and inconsistencies; a reader, who reads the code to the team; a moderator, who chairs the meeting; and a scribe, who makes detailed notes regarding errors. Roles may vary from these (e.g., the reader may be omitted), and multiple roles may be taken on by the same member.

 

Inspection Checklist

 

A checklist of common errors should be used to drive the inspection. The error checklist is programming-language dependent: the “weaker” the language’s type checking, the larger the checklist (e.g., C vs. Java). Typical checklist items cover variable initialization, constant naming, loop termination, array bounds, etc.
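As an illustration, the short C fragment below is a contrived sketch seeded with two of the defects such a checklist targets; the comments mark the checklist items that apply:

    #define SIZE 10

    /* Contrived fragment for inspection practice. */
    int sum_scores(const int scores[])
    {
        int total;                        /* variable initialization: total is never set to 0 */
        for (int i = 0; i <= SIZE; i++) { /* loop termination / array bounds: the final pass  */
            total += scores[i];           /* reads scores[SIZE], past the end of the array    */
        }
        return total;
    }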

 

Inspection Rate

 

Measurements at IBM by M. E. Fagan show roughly 500 statements/hour during the overview, 125 source statements/hour during individual preparation, and 90-125 statements/hour during the inspection meeting itself. Inspection is therefore an expensive process: inspecting 500 lines costs about 40 person-hours of effort.
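As a rough reconstruction of that last figure (assuming a four-person team and a meeting rate of about 100 statements/hour): the overview takes 500/500 = 1 hour, individual preparation takes 500/125 = 4 hours, and the meeting takes about 500/100 = 5 hours. That is roughly 10 hours per person, and 10 hours × 4 people = 40 person-hours.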

 

Automated Static Analysis

 

Static analyzers are software tools for source text processing. They parse the program text and try to discover potentially erroneous conditions. They are very effective as an aid to inspections, serving as a supplement to, but not a replacement for, inspection.

Stages of Static Analysis

 

  • Control flow analysis: checks for loops with multiple exit or entry points, finds unreachable code, etc.
  • Data use analysis: detects uninitialized variables, variables assigned twice without an intervening use, variables that are declared but never used, etc.
  • Interface analysis: checks the consistency of procedure declarations and their use.
  • Information flow analysis: identifies the dependencies of output variables. It does not detect anomalies itself but highlights information for code inspection or review.
  • Path analysis: identifies the paths through the program and sets out the statements executed on each path. Potentially useful in the inspection and testing processes.

The last two stages generate vast amounts of information and must be used with care.
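As a contrived illustration (not from the text), the C fragment below is seeded with the kinds of anomaly the first two stages detect; the comments note which stage would flag each line. A compiler with warnings enabled (e.g., gcc -Wall -Wextra) will report at least the unused variable:

    #include <stdio.h>

    /* Contrived fragment seeded with anomalies for a static analyzer. */
    int classify(int x)
    {
        int unused;          /* data use analysis: declared but never used    */
        int result;
        result = 0;          /* data use analysis: assigned twice with no     */
        result = x * 2;      /* intervening use of the first value            */
        if (x > 0)
            return result;
        else
            return -result;
        printf("done\n");    /* control flow analysis: unreachable code      */
        return 0;            /* also unreachable                             */
    }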

 

Dynamic Testing

 

Under dynamic testing, the code is executed. It checks the functional behavior of the software system, memory/CPU usage, and the overall performance of the system; hence the name “dynamic”. The main objective of this testing is to confirm that the software product works in conformance with the business requirements. Dynamic testing is also called the execution technique or validation testing. It executes the software and validates the output against the expected outcome. Dynamic testing is performed at all levels of testing, and it can be either black-box or white-box testing.
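A minimal sketch of dynamic testing, assuming a small hypothetical function under test: the code is actually executed with test data, and the observed output is validated against the expected outcome.

    #include <assert.h>
    #include <stdio.h>

    /* Hypothetical unit under test. */
    static int absolute(int x)
    {
        return (x < 0) ? -x : x;
    }

    int main(void)
    {
        /* Execute the code with test data; validate actual vs. expected output. */
        assert(absolute(5) == 5);
        assert(absolute(-5) == 5);
        assert(absolute(0) == 0);
        printf("all tests passed\n");
        return 0;
    }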

 

Summary

  • Most software errors can be removed by proofreading.
  • Verification and Validation helps to enhance the software quality.
  • Static analysis deals with verification.
  • Dynamic analysis deals with testing.
  • Static analysis must be performed before dynamic analysis.