January 15, 2016

Program Evaluation

Conducting studies to determine a program's impact, outcomes, or consistency of implementation (e.g., randomized controlled trials).


Program evaluations are periodic studies that nonprofits undertake to determine the effectiveness of a specific program or intervention, or to answer critical questions about a program. Program evaluations are typically conducted by independent third parties or evaluation experts and fall into two broad categories:

  • Implementation studies, which assess whether a program is being delivered as designed
  • Impact studies, which are designed to establish whether a program is generating the desired effects. These vary in rigor; the most rigorous use randomized controlled trials to establish causality.

Note: Program evaluation is distinct from performance measurement, which is an ongoing organizational process aimed at learning and improving, typically conducted by an organization's own staff.

How it's used

Typically, a rigorous program evaluation is conducted only after a program has undergone years of testing and refinement through internal approaches geared toward continuous improvement. Once a nonprofit has strong internal evidence that its program is consistently effective, hiring an objective third party to conduct a rigorous evaluation can help prove that the program is having the intended effect. This higher level of proof can help a nonprofit attract resources to scale up its program to reach more beneficiaries, and can be useful in advocating for changes in public policy or public funding streams.

In addition, a nonprofit can use program evaluations to test variations on its program model—e.g., longer vs. shorter dosage, delivery by more vs. less skilled staff, in-person vs. virtual models.

Depending on their design, program evaluations can help nonprofits:

  • Prove that a program is producing a positive result, i.e., that it works
  • Quantify the benefits that a program provides to individuals or society, and calculate the cost per outcome or social return on investment (a simple arithmetic sketch follows this list)
  • Demonstrate which types of participants are most or least likely to benefit from a particular program
  • Isolate which elements of a program are most or least important to its success
  • Establish whether programs are being consistently implemented, with fidelity to a predetermined model or standard
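
To make the second bullet concrete, here is a minimal Python sketch of the cost-per-outcome and social-return-on-investment arithmetic. Every figure in it is a hypothetical placeholder; a real calculation would use the program's actual costs and carefully monetized outcome estimates.

    # Minimal sketch of cost-per-outcome and SROI arithmetic.
    # All figures are hypothetical placeholders, not data from
    # any real evaluation.

    program_cost = 500_000.0         # total annual program cost (USD)
    participants_served = 400        # participants enrolled this year
    successful_outcomes = 250        # participants who achieved the target outcome
    monetized_benefit = 1_200_000.0  # estimated dollar value of those outcomes

    cost_per_participant = program_cost / participants_served
    cost_per_outcome = program_cost / successful_outcomes
    sroi = monetized_benefit / program_cost  # social value created per dollar spent

    print(f"Cost per participant: ${cost_per_participant:,.2f}")
    print(f"Cost per outcome:     ${cost_per_outcome:,.2f}")
    print(f"SROI ratio:           {sroi:.2f}")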

Methodology

Before launching a program evaluation, many nonprofits spend years investing in continuously improving their program model. They consistently track data on key program inputs (e.g., beneficiary demographics), outputs (e.g., program participation and completion), and, to the extent possible, outcomes and impacts (e.g., knowledge gained, changes in family income), and use these data to refine and strengthen their program. Once an organization has made sufficient use of internal methods of showing that its program works, it can consider commissioning a program evaluation study. There are five key steps to a strong program evaluation study:

  1. Define the key questions: Clearly define what questions the evaluation will be designed to answer. Will program participants be compared to a control group of nonparticipants, or will two different program model variations be compared to each other? Interviewing and selecting a third-party evaluator (e.g., university researchers, individual experts, or firms such as MDRC) can help raise and clarify key questions for the evaluation to answer.
  2. Design the evaluation: Together with the evaluator, design a rigorous study that will answer your key questions as efficiently and affordably as possible. Different questions and program models lend themselves to different evaluation methods (e.g., randomly assigning participants to different groups, or doing pre/post comparisons). Longer study durations and larger sample sizes allow higher confidence in the results but also increase the expense of the study; a simplified sketch of a randomized design follows these steps.
  3. Conduct the study: Conduct the evaluation according to the design. The evaluator may collect and track all necessary data during the study period, or the nonprofit's internal data systems and staff may be part of the process.
  4. Analyze the results: Analyze the data to answer the key questions and reveal any additional key insights about the program that may emerge from the evaluation process. If the program evaluation showed high levels of effectiveness and impact, seek ways to build upon this success (e.g., strengthening or expanding the program, publicizing results to seek additional funding). If the results were unclear or negative, discuss potential causes and remedies (e.g., evaluation design changes, program model changes).
  5. Improve: Begin implementing changes to strengthen the program and the nonprofit as a whole.
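
As a companion to steps 2 through 4, the Python sketch below shows a randomized design in miniature: participants are randomly split into treatment and control groups, and the estimated impact is the difference in mean outcomes between the groups. The outcome scores are simulated stand-ins for the data a real study would collect, and the rough standard error only hints at how larger samples buy precision at greater cost.

    import random
    import statistics

    # Hypothetical randomized evaluation in miniature: random assignment
    # (step 2), simulated data collection (step 3), and a difference-in-
    # means impact estimate (step 4).

    random.seed(42)  # fixed seed so the sketch is reproducible

    participants = [f"P{i:03d}" for i in range(200)]
    random.shuffle(participants)  # random assignment to groups
    treatment, control = participants[:100], participants[100:]

    # Simulated outcome scores standing in for collected study data.
    treat_scores = [random.gauss(72, 10) for _ in treatment]
    ctrl_scores = [random.gauss(68, 10) for _ in control]

    # Estimated impact: difference in group means.
    effect = statistics.mean(treat_scores) - statistics.mean(ctrl_scores)

    # Rough standard error of that difference; larger samples shrink it,
    # which is the rigor-versus-cost tradeoff noted in step 2.
    se = (statistics.variance(treat_scores) / len(treat_scores)
          + statistics.variance(ctrl_scores) / len(ctrl_scores)) ** 0.5

    print(f"Estimated effect: {effect:.2f} points (SE ~ {se:.2f})")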

Additional resources

Abdul Latif Jameel Poverty Action Lab Executive Training: Evaluating Social Programs 2009 (MIT OpenCourseWare)
These online course materials include lecture notes, case studies, exercises, and lecture videos explaining how to evaluate social programs.

Coalition for Evidence-Based Policy: Which Study Designs Can Produce Rigorous Evidence of Program Effectiveness? A Brief Overview
This paper offers advice on the types of study designs that are capable of generating evidence of program effectiveness through randomized controlled trials or prospective, closely matched comparison-group studies.

Seven Deadly Sins of Impact Evaluation
This article describes seven pitfalls organizations may encounter when attempting to measure the impact of their program models.

State of Evaluation
Innonet's periodic report summarizes how nonprofit organizations evaluate their programs, what they do with the results, and whether they believe they have the resources and skills needed to implement evaluation well.

Examples and case studies

Five Hurdles to Nonprofit Performance Assessment
Education nonprofit Building Educated Leaders for Life discusses the hurdles it encountered in implementing two different random-assignment studies.

Hunter Consulting Case Studies
Assessment consultant David Hunter offers several evaluation case studies, covering organizations such as ROCA, Inc.; Our Piece of the Pie; and WINGS for Kids.


This work is licensed under a Creative Commons Attribution 4.0 International License. Permissions beyond the scope of this license are available in our Terms and Conditions.