September 13, 2011

Developing Your Measurement Pathway

By: Matthew Forti

A few months back, the ED of a human services nonprofit was approached by a board member with the following proposition: "We really need to prove our long-term results; if we could find someone to track down our alumni and see how many graduated from college, I would gladly foot the bill." The ED wondered how to respond.

The question of "what should we do next in our measurement?" is a common one. The answer lies in what you've done so far and where you ultimately want to go. Our clients have found great clarity in creating, vetting, and continually refreshing their own "measurement pathways" to identify how performance monitoring and evaluation should unfold as their programs evolve.

Starting

In the start-up phase, the focus should be on developing, vetting, and confirming the plausibility of your programmatic theory of change. In layman's terms, this means articulating what you will hold yourself accountable to achieving and what you believe needs to happen to achieve it, and then doing all you can to verify that your logic is sound (e.g., researching the demand for your services, reviewing peer programs). Unless you're adopting or adapting a proven program, you'll need this theory of change to decide what to measure. Once you've derived indicators from it, you can develop a performance monitoring strategy, including data collection tools, roles and responsibilities, a data system, and a plan to create a culture of, and processes for, learning and accountability. Indeed, one important pitfall at this stage is failing to establish ongoing processes for tracking, reflecting, and course correcting based on what you are learning from the data. Strong indicators alone are simply not sufficient!

Building

In the building phase, your focus should be on implementing your performance monitoring strategy to reliably track and analyze inputs, outputs, and outcomes for each beneficiary. Before proceeding to the next phase, you should be able to show that you are, by and large, reaching intended beneficiaries and that they are accessing the intended services and achieving the intended outcomes. In doing this work, you're likely to uncover barriers and opportunities that force you to reconceptualize your theory of change and recodify your program model as you prepare for early replication.

A second, yet traditionally overlooked, focus of this phase is what we call "evaluation planning." Through a few short conversations with expert evaluators in your field, you should get an indication as to whether evaluation is right for you [1]; if it is, you should sketch out the questions you want to answer, the corresponding studies and the ideal timing for each, a rough estimate of the cost, and, most importantly, what you should be doing now to prepare.

Refining

Refining is all about optimizing your theory of change: for example, figuring out which beneficiaries your program benefits most, which activities contribute most to outcomes, and which processes are most effective. To accomplish this, you should pursue both improved performance monitoring (e.g., advanced analytics on internally collected data) and formative evaluation [2], in which a third party assesses the strength of processes, the quality and consistency of implementation, and the achievement of outcomes. While some organizations publish the results of formative evaluation, its primary purpose is to help leadership learn more about how and why their programs are working, and how they can be further improved.

Scaling

Organizations that want to dramatically scale their programs typically need to attract funding, replication partners, and other support. They'll need summative evaluation(s) to assess effectiveness, efficiency, and replicability. They'll also need ongoing, rigorous performance monitoring and external evaluation to test success in new sites and contexts, and to test potential adaptations to their program. For example, even after Nurse-Family Partnership had proven its model through summative evaluation, it undertook another rigorous evaluation to test whether it was necessary to use nurses or whether the same effects could be achieved with paraprofessional home visitors. (The evaluation confirmed the value of using nurses.)

Of course, steps on a measurement pathway can progress slowly and circle back; in fact, many longstanding organizations find they are still in the building or refining stage when it comes to measurement. Such was the case with the human services nonprofit introduced earlier, which had just implemented a data system and was still in the process of demonstrating that its beneficiaries were accessing intended services and achieving intermediate outcomes (e.g., remaining arrest-free while in the program). By sketching out its measurement pathway, the organization recognized that the wisest use of its measurement resources was to improve performance monitoring (e.g., ensuring compliance with the new data system) and develop an evaluation plan (which refocused the organization on proving intermediate outcomes before tackling the question of whether its beneficiaries graduated from college).

A good measurement pathway can also prevent a measurement disaster. We spoke recently with an organization that undertook, at great expense and with high visibility to its funders, a summative evaluation to prove the effectiveness of its key program. Unfortunately, the evaluators stopped the evaluation prematurely when they discovered the organization's sites were implementing the program model in wildly different ways. Better evaluation planning would likely have identified the need for a formative evaluation first.

Have you worked through your measurement pathway? What did you learn in the process?

