25 Evaluation of Established Programs

Dr. Rajesh


Content Outline

 

1. Introduction
2. Objectives
3. The Evaluability Perspective
4. Conducting Evaluability Assessments
5. Accountability and Monitoring Studies
6. Continuous versus Cross-sectional Evaluations
7. Internal versus External Evaluations
8. Conclusion

Introduction

 

A program that has been in existence for decades may also be subjected to evaluation for a variety of reasons. While the evaluation of innovative programs represents an important activity for the field, a far greater proportion of evaluation resources goes into the assessment of already established programs. The evaluation efforts related to established programs are less visible than those connected with new programs. More are conducted "in house", either by staff connected with operating agencies or by governmental groups such as the General Accounting Office, than by university, for-profit, or nonprofit groups. The current conservative outlook regarding public support of social programs has increased the pressure to scrutinize established programs. Furthermore, since many established programs are costly, evaluations that eventuate in their curtailment or modification may yield large program savings. For all these reasons, the evaluation of established programs is very important to the continuation of any program.

 

Objectives

 

After going through this unit, the learners will be able to understand:

  1. The evaluation of established programs and the need for it
  2. The objectives underlying the evaluation of established programs
  3. The different perspectives on the evaluation of established programs

Evaluating established programs requires understanding the social and political situation in which they were initiated and tracking the ways in which they have been modified from their emergence until the time of the evaluation. Social programs are generally historically conditioned responses to social concerns. Most have sprung from traditional, long-standing ameliorative efforts, and often there is considerable opposition from some of the stakeholders to any questioning of their fundamental assumptions or the way they have been put into place. The value of guidance counselors in schools, of vocational programs for the handicapped, of parole supervision for released convicts, and of community health education for the prevention of disease is taken for granted. Not only does the general public expect such programs as a matter of course, but involved advocates and employees, a significant proportion of the national labor force, have an investment in their continuation. Thus, the pressures to maintain them are strong.

 

At the same time, in many human resource sectors there are programs that clearly have outlived their usefulness or that represent poor human service investments as currently implemented. Many are rooted in values and intervention models that are no longer relevant; some have even lost their surface rationales and objectives over time. Such programs need to be modified or discarded in favor of other programs that yield greater benefit-to-cost ratios.

 

A further impetus for evaluation of ongoing social programs is the current uneasiness in many quarters about their proliferation and redundancy. Spiraling costs of established programs and more severe restraints on resources, particularly public funds, require that we choose what to support, and in what magnitude. Consequently, a variety of stakeholder groups are raising serious questions about the extent to which programs operate efficiently and follow fiscal, legal, and operational requirements. In addition, constituencies and advocates of a particular program are concerned with its impact and cost-to-benefit ratios in comparison to those of the programs with which it competes for sponsorship and funds. For all these reasons, policymakers responsible for resource allocations, program managers who must defend their implementation, and concerned stakeholders press for the evaluation of established programs.

 

The Evaluability Perspective

 

The ideas discussed in this section stem from the experiences of an evaluation research group at the Urban Institute in the 1970s, whose evaluation efforts led them to two related conclusions. First, they found it difficult, sometimes impossible, to evaluate public programs because managers and other stakeholders resisted, were uncooperative, or failed to grasp the purpose of the study. Second, they found that evaluation results too frequently were not used to modify the programs. This led to the view that a systematic approach, which Wholey termed "evaluability assessment", should precede most evaluation efforts.

 

Evaluability assessments, or "pre-evaluations", are designed to provide a climate favorable to future evaluation work and to allow evaluators to acquire an intimate acquaintance with an agency or program. In addition, an evaluability assessment reveals whether implementation corresponds to the program as defined by those who created its policy and operational procedures; if not, any evaluation that is undertaken will probably be useless.

 

An evaluability assessment requires the commitment of program staff, and in many cases sponsors and relevant policymakers, to collaborate in explicating objectives, describing the program, and deciding on evaluation tasks, although it can be argued that for many studies described as evaluability assessments this is often not the case.

 

Conducting Evaluability Assessments

 

An evaluability assessment can be looked upon as a process of closing in on what constitutes the objectives and the proper implementation and management of an existing program. It is conducted in the following steps (a schematic checklist sketch follows the list):

  1. Preparing the program description- A description of the program is written based on formal documents, such as legislation, administrative regulations, funding proposals, published brochures, administrative manuals, annual reports, minutes, and completed evaluation studies. The description includes statements identifying program objectives and cross-classifying them with program elements or components.
  2. Interviewing program personnel- Key people are interviewed to gather their descriptions of the goals and rationales of the program, as well as to identify actual program operations. From this information, models of both the intended and the actual operations of the program are developed and subsequently verified with the personnel interviewed.
  3. "Scouting" the program- Although evaluability assessments do not include formal research in the sense of large-scale data collection, they do generally include site visits to obtain a firsthand impression of how programs actually operate. These impressions are collated with information from documents and interviews.
  4. Developing an evaluable program model- From the various types of information obtained, the program elements and objectives to be considered for inclusion in evaluation plans are explicated.
  5. Identifying evaluation users- Next, the purposes of the evaluation activities to be undertaken, together with the key stakeholders to whom they are directed, are identified. In addition, the ways in which decisions on program changes would be made are determined.
  6. Achieving agreement to proceed- Finally, the evaluation plan is reviewed with the various stakeholders. The process of information collection during the course of the evaluability assessment typically includes dialogue with key individuals and groups; thus, at this point, most components of the plan have been accepted. Before the various stakeholders "sign off" on the plan, however, explicit agreement should be reached on the following points:
  •  Program objectives: what the program aspires to accomplish.
  •  The program components to be analyzed, the design of the evaluation, and priorities for undertaking the work.
  •  The commitment of required resources, together with necessary cooperation and collaboration.
  •  A plan for utilizing the evaluation results.
  •  A plan for the efforts required from program staff to strengthen the evaluability potential of program components not currently amenable to evaluation, and an approach for subsequently building them into the evaluation effort.
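
As an organizational aid only, the following is a minimal Python sketch, with all class and field names invented here for illustration (it is not part of any standard evaluability assessment toolkit), of how the six steps and the sign-off points above might be tracked as a simple checklist:

```python
from dataclasses import dataclass, field

# Hypothetical checklist for tracking an evaluability assessment. The step
# names follow the six steps above; the sign-off points follow the bulleted
# agreement items. All names are invented for illustration.

STEPS = [
    "Prepare the program description",
    "Interview program personnel",
    "Scout the program",
    "Develop an evaluable program model",
    "Identify evaluation users",
    "Achieve agreement to proceed",
]

SIGN_OFF_POINTS = [
    "Program objectives agreed",
    "Components, design, and priorities agreed",
    "Resource commitments and cooperation secured",
    "Plan for utilizing results accepted",
    "Plan for strengthening non-evaluable components accepted",
]

@dataclass
class EvaluabilityAssessment:
    program: str
    completed_steps: set[str] = field(default_factory=set)
    agreed_points: set[str] = field(default_factory=set)

    def ready_to_proceed(self) -> bool:
        """True only when all six steps are done and every sign-off point is agreed."""
        return (self.completed_steps >= set(STEPS)
                and self.agreed_points >= set(SIGN_OFF_POINTS))

ea = EvaluabilityAssessment("Community Health Education")
ea.completed_steps.update(STEPS[:3])  # first three steps finished
print(ea.ready_to_proceed())          # False until all steps and sign-offs are done
```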

To a considerable extent, evaluability assessment makes use of what are generically referred to as qualitative research procedures. In many ways, evaluability specialists operate like field researchers in a conventional qualitative study. That is, they seek to describe and understand the program in terms of the "social reality" held by the program personnel and stakeholders interviewed.

 

A certain part of the evaluability assessment is fairly "standard", such as obtaining sufficient information to document the program's formal organizational structure and its informal authority and influence structure, and to account for the differences between them. Certain statistical data almost invariably are obtained, including, for example, an accurate report on the numbers and types of staff and targets served.

 

Accountability and Monitoring Studies

 

A frequent evaluation activity in established programs, conducted either as a consequence of outside mandates or on the basis of an evaluability assessment, is the planning and design of an accountability study. Accountability studies are directed at providing information about various aspects of a program to stakeholders and program managers. Such studies require monitoring one or more aspects of the way programs are being implemented and the conditions under which intervention is taking place.

 

In broad terms, the following are the most common types of accountability information required in such studies:

  1. Impact accountability- Impact evaluation of ongoing established programs is discussed in detail elsewhere. At the planning stage, staff arrangements and program activities can be organized in a way that will facilitate the collection of data required for estimates of impact.
  2. Efficiency accountability- Impact in relation to program costs is obviously important, both internally, in terms of judging relative benefits and effectiveness against the costs of different program elements, and externally, in competing for resources. In this context, adequate and valid data on program expenditures as related to program activity must be collected.
  3. Coverage accountability- The key questions here relate to the number and characteristics of targets, the extent of penetration, dropout rates, and so on.
  4. Service delivery accountability- It is usually necessary to assess how the actual operation of a program conforms to program plans.
  5. Fiscal accountability- All programs have a clear responsibility to account for the use of funds in their fiscal reports. But in addition to what is strictly an accounting responsibility, a range of other cost questions may be pertinent. For example, cost per client and cost per service are data that may not be gleaned from the fiscal reports (a small worked example follows this list).
  6. Legal accountability- All programs, public and private, must meet legal responsibilities. These include informed consent, protection of privacy, community representation on decision-making boards, equity in the provision of services, and cost sharing. In public programs, adequate compliance with legal requirements often is a prerequisite for continued funding.
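To make the cost questions under efficiency and fiscal accountability concrete, here is a minimal worked sketch in Python; the figures and function names are hypothetical, invented purely for illustration:

```python
# Hypothetical sketch of simple efficiency-accountability arithmetic.
# All figures and names are invented; a real study would draw them from
# the program's fiscal reports and service records.

def cost_per_client(total_expenditure: float, clients_served: int) -> float:
    """Average cost per client over the reporting period."""
    return total_expenditure / clients_served

def cost_per_service(total_expenditure: float, service_units: int) -> float:
    """Average cost per unit of service delivered (e.g., per counseling session)."""
    return total_expenditure / service_units

# Example: a program spending 480,000 (in any currency unit) in a year,
# serving 600 clients through 4,800 recorded service contacts.
expenditure = 480_000.0
print(cost_per_client(expenditure, 600))     # 800.0 per client
print(cost_per_service(expenditure, 4_800))  # 100.0 per service unit
```

Note that neither figure appears as such in a standard fiscal report, which is why accountability studies must link expenditure data to service records.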

In developing accountability strategies, there are two important considerations: continuous versus cross-sectional evaluation and internal versus external assessments.

 

Continuous versus Cross-sectional Evaluations

 

A key decision in planning accountability evaluations is whether to implement a continuous or a cross-sectional effort. Many large programs employ monitoring and information systems, often referred to as management information systems (MIS), that allow them to assess the work and results of their programs on an ongoing basis.

 

A continuous evaluation is a procedure that allows for regularly collecting and maintaining information on such matters as the characteristics of clients, their problems or reasons for seeking treatment, and their outcomes. The alternative is conducting "special" studies at a single point in time.

 

The latter type of study, referred to as a cross-sectional study, generally collects data within a small window of time, usually in conjunction with a specific evaluation activity, or because of a specific need of the program manager or a request from an influential stakeholder.
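As a rough illustration of the distinction, the sketch below uses a hypothetical record store (the table and field names are invented here, not drawn from any real MIS) to show how a continuous system accumulates client records routinely, while a cross-sectional study amounts to a one-off query over a narrow window of time:

```python
import sqlite3

# Hypothetical minimal MIS store; table and field names are invented
# for illustration only.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE client_records (
    client_id INTEGER, recorded_on TEXT,
    presenting_problem TEXT, outcome TEXT)""")

# Continuous evaluation: records accumulate routinely as clients are served.
con.execute("INSERT INTO client_records VALUES (?, ?, ?, ?)",
            (101, "2024-03-15", "unemployment", "placed in job"))
con.execute("INSERT INTO client_records VALUES (?, ?, ?, ?)",
            (102, "2024-09-02", "housing", "referred to shelter program"))

# Cross-sectional study: a one-off query over a narrow window of time,
# e.g., in response to a request from an influential stakeholder.
rows = con.execute("""SELECT client_id, presenting_problem, outcome
                      FROM client_records
                      WHERE recorded_on BETWEEN ? AND ?""",
                   ("2024-09-01", "2024-09-30")).fetchall()
print(rows)  # only the record(s) from the September window
```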

 

Continuous systems are sometimes criticized as "overkill", and since they represent a permanent commitment of resources, they need to be justified by their actual use.

 

Cross-sectional studies undertaken from time to time may carry expensive start-up costs and may be resisted by program staff, since they are not perceived as part of routine operations.

 

Internal versus External Evaluations

 

Accountability evaluations raise the issue of whether programs should undertake their own evaluations or contract with outsiders. On one hand, it is clear that in accountability evaluations the evaluator must know a great deal about program operations, both to design an evaluation and to engage in the consultation, education, and dialogue required to maximize its utility. On the other hand, there are the risks that an evaluator who is part of the program staff will be co-opted, and that sponsors and stakeholders outside the program staff will be suspicious of the authenticity of the findings.

 

In the end, evaluations of established programs are qualitatively the same as evaluations of innovative interventions. The three key distinctions in style are:

  • The increased emphasis on inferring a program evaluation model for existing, ongoing program activity;
  • Much more deliberate attention to stakeholders' views, responsibilities, and influences;
  • The recognition that there frequently are important discrepancies between how programs are formally described and thought to operate by staff, targets, and the range of stakeholders, and how they operate in reality.

Conclusion

 

Evaluations of established programs may focus on impact and cost-to-benefit ratios, but often assessments are limited to an examination of service delivery. In such cases the evaluation centers on monitoring questions: whether or not the appropriate target groups are served, and the extent to which program staff and management are meeting their commitments with respect to the quality and quantity of services delivered.

 

The human services area is highly vulnerable to serious, responsible questioning of the way programs are conducted, as well as to political or publicity-related attacks. Evaluation results, both from the monitoring of program implementation and from the assessment of impact and efficiency, can influence decisions on the fate of programs and the organizations responsible for them.
