Reasons to conduct program evaluation

Purpose-driven organizations can use program evaluation in a multitude of ways, including to demonstrate how the organization is delivering on its mission through its unique programs and services. Funders increasingly require the nonprofits they support to hire external evaluators to verify that they are achieving what they set out to do, and to ensure that funding is being put to the best possible use.

Organizations can thus be in direct competition for increased funding while also being part of a community of organizations funded by the same foundation, all working toward a common goal. Having an independent evaluation partner can help you manage your performance over time, so that you can implement continuous improvement efforts that lead to more successful outcomes. Your organization will also become more competitive when pursuing new sources of funding.

As the external evaluator of choice for social sector organizations across the nation, Measurement Resources can help you clarify your goals in advance of implementing your next program evaluation. Contact us today to learn how we can help you quantify how your program has changed lives and circumstances.

Increasingly, public health programs are accountable to funders, legislators, and the general public. Many programs demonstrate this accountability by creating, monitoring, and reporting results for a small set of markers and milestones of program progress.

Linking program performance to program budget is the final step in accountability. The early steps in the program evaluation approach, such as logic modeling, clarify these relationships, making the link between budget and performance easier and more apparent.

While the terms surveillance and evaluation are often used interchangeably, each makes a distinctive contribution to a program, and it is important to clarify their different purposes. Surveillance is the continuous monitoring of, or routine data collection on, various factors over time.

Surveillance systems have existing resources and infrastructure. Data gathered by surveillance systems are invaluable for performance measurement and program evaluation, especially of longer-term and population-based outcomes.

There are limits, however, to how useful surveillance data can be for evaluators. For one, surveillance systems may have limited flexibility to add questions for a particular program evaluation. In the best of all worlds, surveillance and evaluation are companion processes that can be conducted simultaneously.

Evaluation may supplement surveillance data by providing tailored information to answer specific questions about a program. Data from questions designed for a specific evaluation are more flexible than surveillance data and may allow program areas to be assessed in greater depth. Evaluators can also use qualitative methods, such as interviews or focus groups.

Both research and program evaluation make important contributions to the body of knowledge, but fundamental differences in the purpose of research and the purpose of evaluation mean that good program evaluation need not always follow an academic research model.

Research is generally thought of as requiring a controlled environment or control groups. In field settings directed at the prevention and control of a public health problem, this is seldom realistic. Of the ten concepts commonly contrasted between research and program evaluation, the last three are especially worth noting.

Unlike pure academic research models, program evaluation acknowledges and incorporates differences in values and perspectives from the start, may address many questions besides attribution, and tends to produce results for varied audiences.

Program staff may be pushed to do evaluation by external mandates from funders, authorizers, or others, or they may be pulled to do evaluation by an internal need to determine how the program is performing and what can be improved.

While push or pull can motivate a program to conduct good evaluations, program evaluation efforts are more likely to be sustained when staff see the results as useful information that can help them do their jobs better.

Data gathered during evaluation enable managers and staff to create the best possible programs, to learn from mistakes, to make modifications as needed, to monitor progress toward program goals, and to judge the success of the program in achieving its short-term, intermediate, and long-term outcomes.

Most public health programs aim to change behavior in one or more target groups and to create an environment that reinforces sustained adoption of these changes, with the intention that changes in environments and behaviors will prevent and control diseases and injuries.

Through evaluation, you can track these changes and, with careful evaluation designs, assess the effectiveness and impact of a particular program, intervention, or strategy in producing these changes.
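
To make the idea of a "careful evaluation design" concrete, here is a minimal sketch of one common approach: comparing the change in an outcome for program participants against the change in a comparison group (a simple difference-in-differences estimate). All scores and group details below are hypothetical placeholders, not data from any actual program.

```python
# A minimal sketch of a pre/post comparison-group design
# (difference-in-differences). All numbers are hypothetical.

def mean(values):
    """Arithmetic mean of a list of numbers."""
    return sum(values) / len(values)

# Hypothetical outcome scores (e.g., survey results) before and after
# the program, for participants and for a comparison group.
program_pre = [52, 48, 55, 50]
program_post = [68, 63, 70, 66]
comparison_pre = [51, 49, 53, 50]
comparison_post = [55, 54, 57, 52]

program_change = mean(program_post) - mean(program_pre)
comparison_change = mean(comparison_post) - mean(comparison_pre)

# The comparison group's change approximates what would have happened
# without the program; subtracting it isolates the estimated effect.
estimated_effect = program_change - comparison_change

print(f"Program group change:     {program_change:+.2f}")
print(f"Comparison group change:  {comparison_change:+.2f}")
print(f"Estimated program effect: {estimated_effect:+.2f}")
```

In a real evaluation you would also ask whether the estimated effect could be due to chance and whether the comparison group is truly comparable; the point here is only the logic of the design.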

The CDC Evaluation Working Group prepared a set of conclusions and related recommendations to guide policymakers and practitioners. Program evaluation is one of ten essential public health services [8] and a critical organizational practice in public health. The underlying logic of the Evaluation Framework is that good evaluation does not merely gather accurate evidence and draw valid conclusions, but produces results that are used to make a difference.

You determine the market for evaluation results by focusing evaluations on the questions that are most salient, relevant, and important. You ensure the best evaluation focus by understanding where those questions fit into the full landscape of your program description, and especially by ensuring that you have identified and engaged the stakeholders who care about these questions and want to take action on the results.

The steps in the CDC Framework are informed by a set of standards for evaluation. The 30 standards cluster into four groups:

Utility: Who needs the evaluation results? Will the evaluation provide relevant information in a timely manner for them?

Feasibility: Are the planned evaluation activities realistic given the time, resources, and expertise at hand?

Propriety: Does the evaluation protect the rights of individuals and the welfare of those involved? Does it engage those most directly affected by the program and by changes in the program, such as participants or the surrounding community?

Accuracy: Will the evaluation produce findings that are valid and reliable, given the needs of those who will use the results?

Sometimes the standards broaden your exploration of choices; often, they help reduce the options at each step to a manageable number. When engaging stakeholders (Step 1), for example, the standards prompt questions such as:

Feasibility: How much time and effort can be devoted to stakeholder engagement?

Propriety: To be ethical, which stakeholders need to be consulted: those served by the program, or the community in which it operates?

Accuracy: How broadly do you need to engage stakeholders to paint an accurate picture of this program?

Similarly, there are unlimited ways to gather credible evidence (Step 4). Asking these same kinds of questions as you approach evidence gathering will help identify the ones that will be most useful, feasible, proper, and accurate for this evaluation at this time. Thus, the CDC Framework approach supports the fundamental insight that there is no such thing as the single right program evaluation.

As you set goals, objectives, and a desired vision of the future for your program, identify ways to measure these goals and objectives and how you might collect, analyze, and use this information.

This process will help ensure that your objectives are measurable and that you are collecting information that you will use. Strategic planning is also a good time to create a list of questions you would like your evaluation to answer. See Step 2 to make sure you are on track.
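
As one way to make this concrete, here is a minimal sketch of tracking measurable objectives: each objective is paired with an indicator, a target, and the latest observed value, so you can see at a glance which objectives are on track. The objectives, indicators, and numbers are all invented for illustration.

```python
# A minimal sketch of objective tracking. All objectives, indicators,
# targets, and observed values are hypothetical.

objectives = [
    {"objective": "Increase participant knowledge",
     "indicator": "mean post-test score", "target": 80, "observed": 74},
    {"objective": "Sustain program attendance",
     "indicator": "percent of sessions attended", "target": 90, "observed": 93},
]

for obj in objectives:
    # An objective is "on track" when the observed value meets its target.
    status = "on track" if obj["observed"] >= obj["target"] else "needs attention"
    print(f"{obj['objective']} ({obj['indicator']}): "
          f"{obj['observed']} vs. target {obj['target']} -> {status}")
```

Writing objectives in this indicator-plus-target form is what makes them measurable in the first place; if an objective cannot be expressed this way, that is a signal to revisit it during strategic planning.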

Update these documents on a regular basis, adding new strategies, changing unsuccessful strategies, revising relationships in the model, and adding unforeseen impacts of an activity (EMI).

One resource on institutionalizing evaluation describes features of an organizational culture that supports evaluation, and explains how to build teamwork, administrative support, and leadership for evaluation.

It also discusses the importance of developing organizational capacity for evaluation, linking evaluation to organizational planning and performance reviews, and the unexpected benefits of evaluation to organizational culture. If you want to learn more about how to institutionalize evaluation, check out the following resources on adaptive management. Adaptive management is an approach to conservation management based on learning from systematic, ongoing monitoring and evaluation; it involves adapting and improving programs based on the findings of that monitoring and evaluation.

Patton, M. Qualitative Research & Evaluation Methods.
Thomson, G. Measuring the Success of EE Programs. Canadian Parks and Wilderness Society.

What is evaluation?

Experts stress that evaluation can:
- Improve program design and implementation.
- Demonstrate program impact.

Within the categories of formative and summative, there are different types of evaluation. Which type is most appropriate depends on the stage of your program:

Formative:

1. Needs Assessment: Determines who needs the program, how great the need is, and what can be done to best meet the need. (For more information, Needs Assessment Training uses a practical training module to lead you through a series of interactive pages about needs assessment.)

2. Process or Implementation Evaluation: Examines the process of implementing the program and determines whether the program is operating as planned. Can be done continuously or as a one-time assessment.
