24 Evaluation of Innovative Programs

Dr. Rajesh

epgp books
Content Outline
1. Introduction
2. Objectives
3. Program Evaluation
4. Evaluation of Innovative Programs
5. Making Objectives of the Program Operational
6. Developing an Impact Model
7. Defining Target Populations
a. Overinclusion
b. Underinclusion
8. Designing the Delivery System
9. Developing the Evaluation Plan
10. The Logical Way for Innovative Programme Evaluation
a. Getting Started
b. Clarification of Program Logic
c. Setting Goals and Indicators
d. Process
e. Outcomes
f. Impacts
11. Conclusion

 

Introduction

 

Innovation is an ever-changing phenomenon. It takes place in a dynamic and constantly evolving system that adapts to a range of internal and external factors. It is a complex phenomenon that evokes a strong desire to know what works and how to make it work better. Innovation is a matter of doing new things, or of finding new ways of doing familiar things. As innovation has become a core goal of policy, innovation programmes have become important tools for realising innovation policies. Policy makers increasingly struggle to improve performance, in particular to help firms become more innovative and to draw upon science and technology to enhance their competitiveness. The evaluation of innovative programmes helps in finding solutions to these problems.

 

The development of evaluation research emerged from the general acceptance of the scientific method as a means of dealing with social problems. Commitment to the systematic evaluation of programs first became commonplace in the fields of education and public health. Early efforts date back to the period around the First World War; the famous Western Electric experiments, which contributed the term "Hawthorne Effect", revolutionized evaluation in the social sciences. The period immediately following the Second World War saw the beginning of large-scale programs designed to meet needs for urban development and housing, technological and cultural education, occupational training, and preventive health, which in turn led to the evaluation of innovative programmes; by the end of the 1950s, large-scale evaluation programs were commonplace. Technology and innovation policy evolved further from the 1980s onward, beginning with a preoccupation with large-scale pre-competitive research and gradually broadening the range of instruments employed, with an increasing focus on enhancing the environment for innovation and technology transfer. The evaluation of an innovative program measures various aspects of the ongoing program against its objectives to ensure that it meets its targets.

 

Objectives

 

After going through this unit the learners will be able to:

  1. Explain what innovative program evaluation means and describe its process.
  2. Understand why there is a need for the evaluation of innovative programs.
  3. Identify the objectives of innovative evaluation.
  4. Apply the logic method for the evaluation of an innovative programme.

Program evaluation is a systematic method for collecting, analysing, and using information to answer questions about projects, policies and programs, particularly about their effectiveness and efficiency. In both the public and private sectors, stakeholders often want to know whether the programs they are funding, implementing, voting for, receiving or objecting to are producing the intended effect. While program evaluation centres on this definition, important considerations often include how much the program costs per participant, how the program could be improved, whether the program is worthwhile, whether there are better alternatives, whether there are unintended outcomes, and whether the program goals are appropriate and useful.

 

This is the systematic assessment of a programme to determine:

(1) how far it is meeting its objectives (and, perhaps, achieving other effects);

(2) whether it is meeting these objectives efficiently and effectively; and

(3) how the management structures (and other factors) have shaped these results.

Evaluation may be ex ante, monitoring, ex post, etc.; and it may focus on a single project or programme, or extend more broadly to encompass the policy framework within which these are located.

Evaluation is, at heart, a matter of seeing how well a policy or programme has achieved the objectives set for it. There is, however, a little more to it than this. Three aspects of evaluation can be distinguished: evaluation of effectiveness, of efficiency, and of efficacy.

 

The features of innovative evaluation are:

  • Evaluation can examine how efficiently the programme operated, exploring the management of the activities to see whether there was good communication of objectives and progress, avoidance of redundancy and ability to detect and address problems as they arose.
  • Evaluation can examine the effectiveness of the programme, how far the objectives have been met. This is easier if the objectives have been set in quantitative or verifiable terms, and/or if there are related programmes against which it can be benchmarked. In assessing programme effectiveness, it is common to use performance indicators.
  • Evaluation can go further and consider the efficacy of the programme: how relevant it is to the broad policy goals to which it was designed to contribute.
  • Beyond these very pragmatic goals, evaluation can also involve explanation. It can cast light on how and why the programme achieved the degree of effect that was realised. It can illuminate issues of programme management, and, more profoundly, raise the understanding of the broader innovation system within which the programme has operated.
  • Evaluation may also attempt to examine the precise value for money that has been achieved (in terms of an estimate of the rate of return on the public expenditure). This may seem a simplistic use, but it can prove technically very demanding to operationalise.
  • Evaluation can also be used to examine unintended consequences of the programme intervention: benefits and costs of the activities that were not expected by the programme designers, or not explicitly spelled out as being among the programme objectives.

Evaluation of Innovative Programs

 

Most programs introduced as "new and innovative" are modifications of existing practices. What makes an intervention innovative in our sense is that the particular treatment has never been applied to the specified population. It may have been tried as a small-scale, impressionistically judged demonstration, but never with the realistic intent of implementation on a broad scale.

In simple terms, a programme is innovative if it has not previously been subject to implementation and assessment, in one or more of the following ways:

  1. The intervention itself is still in an emerging or research-and-development phase.
  2. The delivery system, or parts of it, has not been adequately tested; for example, a program that includes the untested idea of having high school students provide nutritional education and information to the elderly.
  3. The targets of the program are markedly new or expanded. An intervention of this type might offer cassette-recorded language training to immigrant school children who are not present in large enough numbers to justify bilingual educational programs in individual schools.
  4. A program originally undertaken in response to one goal is continued or expanded because of its impact on another objective. For instance, a program providing marked automobiles to police officers for their personal use may have been initiated to cut the crime rate, but is continued to curtail job instability and keep police close to their precincts.

In planning, designing, and testing new programs, the evaluator must be capable of undertaking a wide range of activities. These activities vary depending on the type of program, the relationship of the evaluator to the program, the time period available before implementation, the program's political and resource demands, and the particular skills of the program staff and evaluation groups. In most evaluations of innovative programs, the evaluator will participate in many if not all of the following tasks:

  1. Making objectives of the program operational
  2. Developing an impact model, including causal, intervention and action hypotheses
  3. Adjusting the definition of the target population and anticipating client acceptance problems
  4. Designing the delivery system
  5. Specifying procedures for monitoring the program
  6. Assessing impact and estimating efficiency.

 

Making Objectives of the Program Operational

 

Objectives may be defined in either absolute or relative terms. Absolute objectives require either that an undesired condition be totally eliminated or that a desired one be attained by everyone.

 

Relative objectives, in contrast, establish standards of achievement in terms of some specified improvement over existing conditions. Setting goals and specifying objectives requires either assumptions or knowledge about two fundamental aspects of the social situation: values and existing conditions.

 

There are a number of formal ways to establish objectives, each with its own technical details. One such approach is the decision-theoretic approach, which permits the objectives of diverse groups to be explicated and ranked; each group first defines and ranks its own objectives.

 

Another formal approach, evaluability assessment, seeks to produce evaluations with maximal potential utility.

 

Goal attainment scaling makes it possible to tailor goals to individual units within the target population. The results can be summarized to provide a composite estimate of program impact. The approach uses relative rather than absolute measures. Evaluation researchers often refer to such operationalized statements as objectives.
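To make the idea of a composite estimate concrete, the sketch below computes the conventional goal attainment scaling T-score (the Kiresuk-Sherman formula) from per-goal attainment ratings and importance weights. The example goals, ratings, and weights are invented for illustration; the function name is not part of any standard library.

```python
import math

def gas_t_score(ratings, weights=None, rho=0.3):
    """Composite goal attainment scaling T-score (Kiresuk-Sherman formula).

    ratings: attainment level per goal on the -2..+2 scale
             (0 = expected outcome, +2 = much better than expected).
    weights: relative importance of each goal (defaults to equal weights).
    rho:     assumed average intercorrelation among goal scores
             (0.3 is the conventional value).
    """
    if weights is None:
        weights = [1.0] * len(ratings)
    numerator = 10 * sum(w * x for w, x in zip(weights, ratings))
    denominator = math.sqrt((1 - rho) * sum(w * w for w in weights)
                            + rho * sum(weights) ** 2)
    return 50 + numerator / denominator

# Hypothetical participant with three individually tailored goals:
# goal 1 (weight 2) somewhat exceeded, goal 2 met, goal 3 greatly exceeded.
score = gas_t_score(ratings=[1, 0, 2], weights=[2, 1, 1])
print(round(score, 1))  # 50 means attainment exactly as expected; here ~63.3
```

A score of 50 corresponds to attaining exactly the expected levels across all goals, which is what makes the measure relative rather than absolute.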

 

Developing an Impact Model

 

An intervention or impact model is an attempt to translate conceptual ideas regarding the regulation, modification, and control of behaviour or conditions into hypotheses on which action can be based. Fully explicated models are rare. Too often, the intervention "model" consists of nothing more than the assumptions underlying the program's operation. These assumptions may have been drawn from previous studies, often undertaken on small samples or in other locales, or may have little empirical basis, being drawn instead from the untested ways in which practitioners have performed in the past.

 

An impact model takes the form of a statement about the expected relationship between a program and its goal; it must contain a causal hypothesis, an intervention hypothesis, and an action hypothesis.

 

Causal hypotheses can be found in the relevant substantive scientific literature concerning the social problem. Especially critical for the purposes of program design are those causal hypotheses that relate the problem to processes that the program can affect.

 

The intervention hypothesis is a statement that specifies the relationship between what is going to be done in the program and the processes or determinants, specified in the causal hypothesis, of the behaviour or condition to be ameliorated or changed.

 

A third kind of hypothesis, the action hypothesis, is needed for assessing whether the intervention, even if it results in a desired change in the causal variable, is necessarily linked to the outcome, that is, to changes in the behaviour or condition to be modified.

 

Defining Target Populations

 

During the design stage, the issue of target population definition surfaces once again, for adjustments may have been made to the definition in light of the specific intervention being planned. For example, although homelessness is a problem that affects people of all ages, an impact model might suggest that very little could be done for those who are over 40, and imply concentrating effort on the young. The interplay between defining the target population and developing an impact model is so strong that in some ways the distinction between the two tasks is artificial. The impact model must include a set of hypotheses about the plausibility of one event leading to another. Programs are most efficient and potentially effective when the targets they reach are restricted entirely to units that can benefit from the intervention, that is, when the program is neither overinclusive nor underinclusive.

 

Overinclusion

 

Especially in the case of projects for which resources are insufficient to cover all potential targets, selection is generally regarded as most efficient if treatment is given mainly to the targets with the highest probability of a successful outcome. Such an approach maximizes the likelihood of favorable cost-to-benefit ratios and the probability that a positive impact can be demonstrated.

Underinclusion

 

While we want programs to be efficient, and therefore not overinclusive, we also want them to be effective, and therefore not underinclusive. Not only may underinclusion deny opportunities for the participation of targets in need or highly at risk, but there is also the trade-off, already noted, between selection costs and the resources available for treatment delivery.

 

Although some programs are designed to cover every eligible target while others depend on voluntary participation, there is a voluntary aspect to almost all programs, in that an active act of participation is required; for example, an eligible pregnant woman receives free prenatal care only if she actively seeks out and participates in the relevant program.

 

Designing the Delivery System

 

Interventions, no matter how well conceived, cannot be effective and efficient without a carefully developed delivery system. The delivery system of a program consists of those program procedures and activities that are used to identify targets and provide treatment. Some delivery systems are comparatively simple, particularly when targets are "semi-captives": providing health education in classroom settings is a comparatively simple proposition. Other delivery systems are highly complex: providing special health care for prospective mothers experiencing high-risk pregnancies may require obtaining help from family physicians, obstetrical and pediatric specialists, general hospitals, and centres specializing in infant care.

 

Developing the Evaluation Plan

 

A key outcome of the planning stage of an innovative program should be the plan for conducting the evaluation. The plan may call for a comprehensive evaluation that involves all of the evaluation tasks discussed, or it may include only selected activities, such as procedures for monitoring the program and for undertaking an impact evaluation. Planning of the program and the development of its evaluation plan go hand in hand. The evaluator, on the basis of her understanding of statistical inference, may believe that the planned group size is not sufficient to yield firm results regarding the impact of the program and thus urge reconsideration of the size of the target population. Conversely, the evaluator may believe that the planned target population is too large and will not leave enough unexposed targets to permit the comparison of "experimental" and "control" groups.
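The evaluator's concern about group size can be made concrete with a standard power calculation. The sketch below uses the textbook normal-approximation formula for the sample size needed per group in a two-sample comparison of means; the effect size, significance level, and power values are illustrative defaults, and the function name is invented for this example.

```python
import math
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate sample size per group for a two-sample comparison
    of means (normal approximation, two-sided test).

    effect_size: the standardized difference (Cohen's d) the
                 evaluation should be able to detect.
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for the test
    z_beta = NormalDist().inv_cdf(power)           # value for the desired power
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# A modest hypothesized effect (d = 0.3) already demands sizeable
# experimental and control groups:
print(n_per_group(0.3))  # roughly 175 per group
```

If the planned target population cannot supply groups of this size (plus enough unexposed targets for the control condition), that is exactly the point at which the evaluator should urge reconsideration of the program design.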

 

Planning the evaluation together with the development of the program provides both evaluation and program staff with realistic expectations concerning the evaluation's requirements and the resources that need to be allocated to it. In such cases there is less likelihood of conflict between the two groups.

 

The Logical Way for Innovative Programme Evaluation

 

Getting Started

 

This involves thinking about the types of things one will need: money, time, evaluation expertise, support from program managers and staff, and in some cases, official approval for the evaluation. As one prepares to start, some consideration of the following is essential:

  • Is there sufficient experience to carry out the evaluation?
  • How much time will be dedicated to the evaluation?
  • Is there adequate budget for the evaluation?
  • Is an experienced internal/external evaluator available to work with?
  • How can program managers, staff and others be involved?
  • How will approval for the evaluation and consent from participants be obtained?

Clarification of Program Logic

 

Here one needs to have a solid understanding of what the program is trying to accomplish and whether the program is achieving the set goals. Creating a logic model helps in articulating how the program is intended to work and, consequently, helps in identifying which aspects to focus on in the evaluation.
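One simple way to clarify program logic is to write the model down explicitly as an inputs, activities, outputs, outcomes, and impacts chain, and then read off which elements each stage of the evaluation should examine. The sketch below does this for a hypothetical school nutrition program; every program component named here is an invented example, not a prescribed schema.

```python
# A minimal logic-model representation for a hypothetical program.
logic_model = {
    "inputs":     ["funding", "trained educators", "curriculum materials"],
    "activities": ["weekly nutrition classes", "home visits"],
    "outputs":    ["number of classes delivered", "participants reached"],
    "outcomes":   ["improved dietary knowledge", "changed eating habits"],
    "impacts":    ["reduced diet-related illness in the community"],
}

def evaluation_focus(model, stage):
    """Return the logic-model elements an evaluation at a given
    stage (process, outcome, or impact) should concentrate on."""
    focus = {
        "process": ["inputs", "activities", "outputs"],
        "outcome": ["outcomes"],
        "impact":  ["impacts"],
    }
    return {key: model[key] for key in focus[stage]}

# A process evaluation of this hypothetical program would examine:
print(evaluation_focus(logic_model, "process"))
```

Laying the model out this way makes it immediately visible that process questions concern delivery (inputs through outputs), while outcome and impact questions concern the changes the program is meant to produce.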

Setting Goals and Indicators

What are the goals for the evaluation?

To answer this, consider the following questions which will help to refine the evaluation goals:

  • Why is the evaluation being planned? Is it for accountability, to document the program’s results to an organization or funder? Is it to learn if the program is on the right track, to assess the program’s accomplishments, to improve the program, or something else?
  • Who will use the evaluation’s results? Program managers? Staff? Current or potential funders? Government agencies? Teachers? School administrators?

…What kind of data will you need to collect to meet the needs of these different stakeholders? What information will they find most credible and easy to understand?

The answers to these questions influence the methods that can be used to carry out the evaluation. If one decides, for example, that the goal is to generate evidence of the program’s success, one may then want to focus on an outcome one is confident the program is achieving and select methods that will allow one to generalize results to all program participants. Alternatively, if the goal is to improve specific aspects of the program, one may want to focus on these by obtaining in-depth recommendations from participants and staff.

 

Process:

 

Process questions are concerned with how well the program is being delivered. Indicators may measure the number of outputs generated, participant and partner satisfaction with these outputs, or other aspects of program implementation.

Outcomes:

 

Outcome-focused questions look for evidence that the program is benefiting participants, visitors, and other target groups. Questions and indicators at this level look for evidence of change in individuals' awareness and behaviours over time.

Impacts:

 

Impacts are the broader, long-term changes that a program has on the community or the environment. Questions and indicators at this level look for evidence, for example, that environmental quality has improved over time.

Conclusion

 

Evaluation of innovative programmes is the systematic application of social research methods to the assessment of social intervention programs. It draws upon the techniques and concepts of several disciplines and is useful at every stage in the conceptualization, design, planning, and implementation of programs. With the help of evaluation, evaluators do their work in a constantly shifting decision-making environment; a pragmatic view sees evaluation as necessarily rooted in scientific methodology but responsive to resource constraints, the needs and purposes of stakeholders, and the nature of the evaluation setting. So, for the successful implementation and execution of an innovative programme, it is vital to have a strong, directional evaluation system to guide the achievement of its objectives.

 
