February 5, 2014

Making Measurement Work in Large, Complex Organizations

Four tips on measuring impact for NGOs that have multiple programs across many sites or continents.

By: Matthew Forti

This blog post originally appeared January 16, 2014, on the Stanford Social Innovation Review website.

By Nabihah Kara and Matt Forti

It's one thing for large social-sector organizations to embrace the idea of measurement as a way to enhance program impact. It’s entirely another for them to figure out how to design and implement measurement systems for multisite, multiservice, and even global organizations. One NGO leader we spoke with described the experience as "wading through the measurement mess."

While there's no easy fix for this "measurement mess," multiservice global NGOs we've studied and advised have forged ways to cut through the complexity—here's how:

1. Set clear learning agendas.
Measurement without purpose is like a car without wheels—a frame that will never reach a destination. For measurement to guide effective decision-making, it must be clear which decisions the data collection will help inform and which measurement approach will provide the best data. In the case of multiservice NGOs, each program or sector needs such a learning agenda. The top priorities of each sector will shape the organization-wide learning agenda that leadership should focus on.

The Aga Khan Development Network (AKDN), a nondenominational organization working across 12 development sectors in 30 countries, recently developed a learning agenda around understanding holistic quality of life for its beneficiaries. Since AKDN is often the sole provider of economic, social, and cultural services in the regions where it works, its quality of life assessments provide critical insight into beneficiary needs and aspirations. AKDN assembles these insights into a learning agenda for implementing intervention strategies across its network of programs. The result is alignment around how and why the organization will implement specific interventions, and how it will use measurement to assess the impact its programs have on people's lives.

2. Follow the 80/20 rule.
By studying their organization's program learning agendas, NGO leaders can determine the highest-priority questions that need answers. This exercise can serve as a powerful lens through which to make decisions about allocating measurement resources.

In our work with multiservice NGOs, we've found that generally 20 percent of an organization's programs create 80 percent of the desired impact. When it comes to building evidence-based programs, the most effective organizations devote a large share of their measurement resources to this 20 percent, which allows them to identify scalable programs that address high-priority questions.

The International Rescue Committee (IRC), a $387 million global NGO that focuses on emergency relief and post-conflict development in 40 countries, follows this strategy. IRC develops learning priorities in each of its sectors by carefully considering where existing evidence is insufficient to answer an important question. It then looks for opportunities to implement projects and evaluations to address those top learning priorities, sequencing research investments and differentially dedicating resources to the 20 percent that will lead to scalable impact.

3. Aim for common output and outcome indicators.
Global NGO leaders constantly confront the question of how much to standardize measurement across their programs and sites. They know that "letting a thousand flowers bloom" decreases their ability to understand what works and why. But they also understand that too much standardization risks overlooking important contextual differences and quashing the entrepreneurialism of managers running their divisions. Moreover, local managers have to respond to donor requirements for specialized reporting. One way to address this issue is for headquarters and local leaders to agree on a small number of common output and outcome indicators that all sites (or countries) will collect on a given program. They can feed common indicators into a learning system that facilitates rapid program improvement. Meanwhile, this approach allows local leaders to choose site-specific indicators of importance to them.

Goldman Sachs's 10,000 Women initiative struck this balance effectively in its five-year, $100 million global program to support women-owned small- and mid-size enterprises. The initiative over time arrived at a common set of output and outcome indicators in the areas of improved business knowledge, practices, and performance. The common indicators flowed into a learning system that included real-time data analysis and best-practice sharing, enabling better collective decision-making around issues such as selection criteria for the businesses and how best to help participants access capital. At the same time, management teams at local sites were able to measure local priorities, which enabled better local decision-making.

4. Segment, and start small.
Multiservice NGOs attempting to raise their measurement game are often tempted to try building rigorous measurement systems across all of their programs and sites at once. In our experience, this approach inevitably collapses under its own weight because resources are limited and organizational change is hard. A staged process that sequences improved measurement across programs and sites increases the odds of success.

Right To Play, a $35 million global NGO that uses sports and play to educate and empower young people, recently confronted this dilemma as it began a multiyear effort to further enhance its measurement capabilities. Instead of taking a one-size-fits-all approach, Right To Play leadership sought to determine the programs and countries where measurement would reap the greatest return for the organization. For instance, it assessed countries on factors such as existing buy-in to measurement, strength of the measurement staff, and feasibility of conducting rigorous measurement given contextual factors. It assessed programs on factors such as strength of the pre-existing evidence base, and perceived scalability and fundability should deeper evidence emerge. The result was a clear roadmap for sequencing measurement investment. Right To Play hopes that investing disproportionately in a small number of its sites and programs will generate quick wins that entice other sites to embrace deeper measurement investment.

 

Nabihah Kara is an associate consultant with The Bridgespan Group’s Boston office. Prior to joining Bridgespan, Nabihah worked for a variety of international development organizations and networks, implementing health programs in Central and South Asia.

Matthew Forti is director of the One Acre Fund USA and a former Bridgespan manager.


This work is licensed under a Creative Commons Attribution 4.0 International License. Permissions beyond the scope of this license are available in our Terms and Conditions.