26 Software Quality Metrics
Software Quality Metrics: Two alternative but complementary definitions (IEEE, 1990) describe quality metrics as a category of SQA tools:
- (1) A quantitative measure of the degree to which an item possesses a given quality attribute.
- (2) A function whose inputs are software data and whose output is a single numerical value that can be interpreted as the degree to which the software possesses a given quality attribute.
The main objectives of software quality metrics are: (1) to facilitate management control, planning, and managerial intervention, based on deviations of actual performance from planned performance and of actual timetable and budget from their plans; and (2) to identify situations that call for development or maintenance process improvement (preventive or corrective actions), based on accumulated metrics information regarding the performance of teams, units, and so on.
In order for the selected quality metrics to be applicable and successful, both general and operative requirements must be satisfied.
The classification of software quality metrics comprises:
- Process metrics – related to the software development process
- Product metrics – related to the software product during its operational (maintenance) phase
- Classification by subjects of measurements
- Quality
- Timetable
- Effectiveness
- Productivity
- A sizeable number of software quality metrics employ one of the following two measures of system size:
- KLOC
- Function point
Software development process metrics can fall into one of the following categories:
- Software process quality metrics
- Software process timetable metrics
- Error removal effectiveness metrics
- Software process productivity metrics.
Software process quality metrics may be classified into two classes:
- Error density metrics
- Error severity metrics
Calculation of error density metrics involves two measures: (1) software volume and (2) errors counted. For software volume, some density metrics use the number of lines of code while others apply function points. For errors counted, some metrics relate to the plain number of errors and others to the weighted number of errors. Weighted measures, which take the severity of the errors into account, are considered to provide a more accurate evaluation of the error situation. A common method for arriving at such a measure is to classify the detected errors into severity classes and assign each class a weight; the weighted error measure is then computed by summing, over all severity classes, the number of errors found in each class multiplied by that class's relative severity weight.
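The weighted error measure described above can be sketched in a few lines of Python. The severity classes and weights used here are illustrative assumptions, not values prescribed by the text:

```python
# Weighted error measure: sum over severity classes of
# (number of errors in class) x (relative severity weight).
# The classes and weights below are illustrative assumptions.
SEVERITY_WEIGHTS = {"low": 1, "medium": 3, "high": 9}

def weighted_errors(error_counts):
    """error_counts maps a severity class to the number of errors found in it."""
    return sum(SEVERITY_WEIGHTS[cls] * n for cls, n in error_counts.items())

# 42 low-, 17 medium-, and 11 high-severity errors:
print(weighted_errors({"low": 42, "medium": 17, "high": 11}))  # 42*1 + 17*3 + 11*9 = 192
```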
NCE – Number of code errors detected by code inspections and testing.
NDE – Total number of development (design and code) errors detected in the development process.
WCE – Weighted total of code errors detected by code inspections and testing.
WDE – Weighted total of development (design and code) errors detected in the development process.
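Using these measures, the error density metrics reduce to simple ratios of an error count to software volume. The sketch below assumes KLOC as the volume measure; the metric names (CED, DED, WCED, WDED) are assumptions used for illustration:

```python
def error_density_metrics(nce, nde, wce, wde, kloc):
    """Error density metrics for a system of `kloc` thousand lines of code."""
    return {
        "CED": nce / kloc,   # code error density
        "DED": nde / kloc,   # development error density
        "WCED": wce / kloc,  # weighted code error density
        "WDED": wde / kloc,  # weighted development error density
    }

# Hypothetical 40 KLOC project:
m = error_density_metrics(nce=128, nde=223, wce=377, wde=654, kloc=40)
print(m["CED"])  # 3.2 code errors per KLOC
```

A function point variant would substitute the number of function points for `kloc` in each denominator.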
Software process timetable metrics may be based on counts of success (completion of milestones per schedule) in addition to failure events (non-completion per schedule). An alternative approach calculates the average delay in completion of milestones.
MSOT – Number of milestones completed on time.
MS – Total number of milestones.
TCDAM – Total completion delays (days, weeks, etc.) for all milestones.
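The two approaches above reduce to two ratios: the share of milestones completed on time, and the average delay per milestone. A minimal sketch (the metric names TTO and ADMC are assumptions):

```python
def timetable_metrics(msot, ms, tcdam):
    return {
        "TTO": msot / ms,    # timetable observance: share of milestones on time
        "ADMC": tcdam / ms,  # average delay of milestone completion
    }

# Hypothetical project: 14 of 20 milestones on time, 45 days of total delay.
m = timetable_metrics(msot=14, ms=20, tcdam=45)
print(m["TTO"], m["ADMC"])  # 0.7 2.25
```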
Software developers can measure the effectiveness of error removal by the software quality assurance system after a period of regular operation of the system (usually 6 or 12 months). The metrics combine the error records of the development stage with the failure records compiled during the first year (or any other defined period) of regular operation.
NDE – Total number of development (design and code) errors detected in the development process.
WCE – Weighted total of code errors detected by code inspections and testing.
WDE – Weighted total of development (design and code) errors detected in the development process.
NYF – Number of software failures detected during a year of maintenance service.
WYF – Weighted number of software failures detected during a year of maintenance service.
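Combining development-stage error counts with first-year failure records, error removal effectiveness can be read as the share of errors caught before release. A sketch under that reading (the metric names DERE and DWERE are assumptions):

```python
def error_removal_effectiveness(nde, wde, nyf, wyf):
    return {
        "DERE": nde / (nde + nyf),   # development errors removal effectiveness
        "DWERE": wde / (wde + wyf),  # weighted variant
    }

# Hypothetical: 223 errors found in development, 25 failures in the first year.
m = error_removal_effectiveness(nde=223, wde=654, nyf=25, wyf=78)
print(round(m["DERE"], 3))  # 0.899
```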
Software process productivity metrics include "direct" metrics that deal with a project's human resources productivity as well as "indirect" metrics that focus on the extent of software reuse. Software reuse substantially affects productivity and effectiveness.
DevH – Total working hours invested in the development of the software system.
ReKLOC – Number of thousands of reused lines of code.
ReDoc – Number of reused pages of documentation.
NDoc – Total number of pages of documentation.
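These measures support both direct productivity ratios (hours per unit of system size) and indirect reuse ratios. A sketch under the conventional definitions, with KLOC as the system size in thousands of lines; the names DevP, CRe, and DocRe are assumptions:

```python
def productivity_metrics(devh, kloc, rekloc, redoc, ndoc):
    return {
        "DevP": devh / kloc,    # direct: development hours per KLOC
        "CRe": rekloc / kloc,   # indirect: fraction of reused code
        "DocRe": redoc / ndoc,  # indirect: fraction of reused documentation
    }

# Hypothetical: 8000 hours for a 40 KLOC system, of which 12 KLOC were reused.
m = productivity_metrics(devh=8000, kloc=40, rekloc=12, redoc=240, ndoc=800)
print(m["DevP"], m["CRe"])  # 200.0 0.3
```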
Product metrics refer to the software system's operational phase – years of regular use of the software system by customers, whether "internal" or "external" customers, who either purchased the software system or contracted for its development. Two classes of service are of interest:
- Help desk (HD) services – software support that instructs customers in the application of the software and solves customer implementation problems. Demand for these services depends on the quality of the user interface as well as the quality of the user manuals.
- Corrective maintenance services – the number of software failures and their density are directly related to software development quality. For completeness of information and better control of failure correction, all software failures detected by the customer service team should be recorded as corrective maintenance calls.
NHYC – Number of HD calls during a year of service.
KLMC – Thousands of lines of maintained software code.
WHYC – Weighted number of HD calls received during one year of service.
NMFP – Number of function points to be maintained.
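From these measures, HD calls density metrics can be formed per KLMC or per function point, with weighted variants using WHYC. A sketch (the names HDD, WHDD, and WHDF are assumptions):

```python
def hd_density_metrics(nhyc, whyc, klmc, nmfp):
    return {
        "HDD": nhyc / klmc,   # HD calls per thousand lines of maintained code
        "WHDD": whyc / klmc,  # weighted HD calls per KLMC
        "WHDF": whyc / nmfp,  # weighted HD calls per function point
    }

# Hypothetical: 360 calls (weighted 540) for a 120 KLMC / 600 FP system.
m = hd_density_metrics(nhyc=360, whyc=540, klmc=120, nmfp=600)
print(m["HDD"])  # 3.0
```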
As for size/volume measures of the software, some metrics use the number of lines of code while others apply function points. The sources of data for these and the other metrics in this group are HD reports. Severity-of-HD-calls metrics belong to the group of measures that aim at detecting one type of adverse situation: increasingly severe HD calls.
Productivity metrics relate to the total resources invested during a specified period, while effectiveness metrics relate to the resources invested in responding to a single HD customer call. HD productivity metrics make use of the easy-to-apply KLMC measure of the maintained software system's size, or alternatively a function point evaluation of the software system. HD effectiveness metrics refer to the resources invested in responding to customers' HD calls.
HDYH – Total yearly working hours invested in HD servicing of the software system.
KLMC – Thousands of lines of maintained software code.
NMFP – Number of function points to be maintained.
NHYC – Number of HD calls during a year of service.
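The distinction between productivity (resources per unit of system size) and effectiveness (resources per call) then yields three simple ratios. A sketch (the names HDP, FHDP, and HDE are assumptions):

```python
def hd_resource_metrics(hdyh, klmc, nmfp, nhyc):
    return {
        "HDP": hdyh / klmc,   # productivity: yearly HD hours per KLMC
        "FHDP": hdyh / nmfp,  # productivity per function point
        "HDE": hdyh / nhyc,   # effectiveness: hours per HD call
    }

# Hypothetical: 2000 yearly HD hours, 100 KLMC, 500 FP, 400 calls.
m = hd_resource_metrics(hdyh=2000, klmc=100, nmfp=500, nhyc=400)
print(m["HDE"])  # 5.0 hours per call
```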
Software corrective maintenance metrics deal with several aspects of the quality of maintenance services. A distinction must be made between software system failures treated by the maintenance teams and failures of the maintenance service itself, that is, cases where maintenance failed to provide a repair that meets the designated standards or contract requirements.
NYF – Number of software failures detected during a year of maintenance service.
WYF – Weighted number of software failures detected during a year of maintenance service.
NMFP – Number of function points designated for the maintained software.
KLMC – Thousands of lines of maintained software code.
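Failure density during maintenance can again be computed per KLMC or per function point, with a weighted variant. A sketch (the names SSFD, WSSFD, and WSSFF are assumptions):

```python
def maintenance_failure_density(nyf, wyf, klmc, nmfp):
    return {
        "SSFD": nyf / klmc,   # software system failure density
        "WSSFD": wyf / klmc,  # weighted failure density
        "WSSFF": wyf / nmfp,  # weighted failures per function point
    }

# Hypothetical: 96 failures (weighted 216) in a 240 KLMC / 1080 FP system.
m = maintenance_failure_density(nyf=96, wyf=216, klmc=240, nmfp=1080)
print(m["SSFD"])  # 0.4
```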
General limitations of software quality metrics include:
- Budget constraints in allocating the necessary resources.
- Human factors, especially employees' opposition to the evaluation of their activities.
- Uncertainty regarding the validity of the data, caused by partial and biased reporting.
Various factors affecting the parameters used for development process metrics are:
- Programming style (KLOC).
- Volume of documentation comments (KLOC).
- Software complexity (KLOC, NCE).
- Percentage of reused code (NDE, NCE).
- Professionalism and thoroughness of design review and software testing teams.
- Reporting style of the review and testing results: concise vs. comprehensive reports (NDE, NCE).
- Quality of installed software and its documentation (NYF, NHYC).
- Programming style and volume of documentation comments included in the code to be maintained (KLMC).
- Software complexity (NYF).
- Percentage of reused code (NYF).
- Number of installations, size of the user population, and level of applications in use (NHYC, NYF).
Summary:
Software quality metrics are implemented to support control of software development projects and software maintenance. The applicability of quality metrics is determined by the degree to which general and operative requirements are fulfilled. The process of defining a new software quality metric has been discussed, together with the limitations of such metrics and the factors that restrict the usefulness of some of them.