Making Measurement Work in Large, Complex Organizations

This blog post originally appeared January 16, 2014, on the Stanford Social Innovation Review website.

By Nabihah Kara and Matt Forti

It's one thing for large social-sector organizations to embrace the idea of measurement as a way to enhance program impact. It’s entirely another for them to figure out how to design and implement measurement systems for multisite, multiservice, and even global organizations. One NGO leader we spoke with described the experience as "wading through the measurement mess."

While there's no easy fix for this "measurement mess," multiservice global NGOs we've studied and advised have forged ways to cut through the complexity—here's how:

1. Set clear learning agendas.
Measurement without purpose is like a car without wheels—a frame that will never reach a destination. To guide effective decision-making, an organization must be clear about which decisions its data collection will inform and which measurement approach will provide the best data. In the case of multiservice NGOs, each program or sector needs such a learning agenda. The top priorities of each sector will shape the organization-wide learning agenda that leadership should focus on.

The Aga Khan Development Network (AKDN), a nondenominational organization working across 12 development sectors in 30 countries, recently developed a learning agenda around understanding holistic quality of life for its beneficiaries. Since AKDN is often the sole provider of economic, social, and cultural services in the regions where it works, its quality of life assessments provide critical insight into beneficiary needs and aspirations. AKDN assembles these insights into a learning agenda for implementing intervention strategies across its network of programs. The result is alignment around how and why the organization will implement specific interventions, and how it will use measurement to assess the impact its programs have on people's lives.

2. Follow the 80/20 rule.
By studying their organization's program learning agendas, NGO leaders can determine the highest priority questions that need answers. This exercise can serve as a powerful lens through which to make measurement resource allocation decisions.

In our work with multiservice NGOs, we've found that generally 20 percent of an organization’s programs create 80 percent of the desired impact. When it comes to building evidence-based programs, the most effective organizations devote a disproportionate share of their measurement resources to this 20 percent, which allows them to identify scalable programs that address high-priority questions.
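
To make the 80/20 idea concrete, here is a minimal sketch in Python, with entirely hypothetical program names and impact estimates: rank programs by estimated impact, keep the smallest set that accounts for roughly 80 percent of the total, and weight the measurement budget toward that set.

    # Minimal sketch: find the smallest set of programs that accounts for
    # ~80 percent of total estimated impact. All names and numbers are hypothetical.
    def priority_programs(impact_by_program, threshold=0.80):
        """Return the fewest programs whose combined impact reaches `threshold` of the total."""
        total = sum(impact_by_program.values())
        selected, running = [], 0.0
        for name, impact in sorted(impact_by_program.items(), key=lambda kv: kv[1], reverse=True):
            selected.append(name)
            running += impact
            if running / total >= threshold:
                break
        return selected

    portfolio = {
        "clean_water": 1200, "maternal_health": 950, "literacy": 300,
        "microfinance": 250, "vocational_training": 180, "nutrition": 120,
    }
    print(priority_programs(portfolio))  # candidates for the deeper measurement investment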

The International Rescue Committee (IRC), a $387 million global NGO that focuses on emergency relief and post-conflict development in 40 countries, follows this strategy. IRC develops learning priorities in each of its sectors by carefully considering where existing evidence is insufficient to answer an important question. It then looks for opportunities to implement projects and evaluations to address those top learning priorities, sequencing research investments and differentially dedicating resources to the 20 percent that will lead to scalable impact.

3. Aim for common output and outcome indicators.
Global NGO leaders constantly confront the question of how much to standardize measurement across their programs and sites. They know that "letting a thousand flowers bloom" decreases their ability to understand what works and why. But they also understand that too much standardization risks overlooking important contextual differences and quashing the entrepreneurialism of managers running their divisions. Moreover, local managers have to respond to donor requirements for specialized reporting. One way to address this issue is for headquarters and local leaders to agree on a small number of common output and outcome indicators that all sites (or countries) will collect on a given program. They can feed common indicators into a learning system that facilitates rapid program improvement. Meanwhile, this approach allows local leaders to choose site-specific indicators of importance to them.

Goldman Sachs's 10,000 Women initiative struck this balance effectively in its five-year, $100 million global program to support women-owned small- and mid-size enterprises. The initiative over time arrived at a common set of output and outcome indicators in the areas of improved business knowledge, practices, and performance. The common indicators flowed into a learning system that included real-time data analysis and best-practice sharing, enabling better collective decision-making around issues such as selection criteria for the businesses and how best to help participants access capital. At the same time, management teams at local sites were able to measure local priorities, which enabled better local decision-making.

4. Segment, and start small.
Multiservice NGOs attempting to raise their measurement game often are tempted to try building rigorous measurement systems across all of their programs and sites at once. In our experience, this approach inevitably collapses under its own weight because resources are limited and organizational change is hard. A staged process that sequences improved measurement across programs and sites increases the odds of success.

Right To Play, a $35 million global NGO that uses sports and play to educate and empower young people, recently confronted this dilemma as it began a multiyear effort to further enhance its measurement capabilities. Instead of taking a one-size-fits-all approach, Right To Play leadership sought to determine the programs and countries where measurement would reap the greatest return for the organization. For instance, it assessed countries on factors such as existing buy-in to measurement, strength of the measurement staff, and feasibility of conducting rigorous measurement given contextual factors. It assessed programs on factors such as strength of the pre-existing evidence base, and perceived scalability and fundability should deeper evidence build. The result was a clear roadmap to determine how to sequence measurement investment. Right To Play hopes that investing disproportionately in a small number of its sites and programs will generate quick wins that entice other sites to embrace deeper measurement investment.


Nabihah Kara is an associate consultant with The Bridgespan Group’s Boston office. Prior to joining Bridgespan, Nabihah worked for a variety of international development organizations and networks, implementing health programs in Central and South Asia.

Matthew Forti is director of One Acre Fund USA and a former Bridgespan manager.

Five Ways Funders Can Replicate What Works

This blog post originally appeared on the Stanford Social Innovation Review website on December 16, 2013.

By Laura Burkhauser

There’s no question that innovation is a sexier funding opportunity than implementation. Which would you choose to support: funding the invention of the lightbulb or funding Inspector 48 in the lightbulb factory?

And yet effective implementation is critical to the social sector’s growing evidence-based movement. As researcher Dean Fixsen has noted, if evidence-based programs (EBPs) are like a life-saving serum, then implementation is like a syringe: You need both to work to see results. Hence, a program proven to work in one place won’t produce the same results in another if it is implemented poorly.

One solution to this problem is for funders to play a stronger role in the implementation process. If funders create both a higher demand for effective implementation and a stronger set of supports, they stand a far better chance of scaling EBPs while maintaining quality. A federally funded teen pregnancy prevention program shows promise as a model for how to do this.

In September 2010, the federal government awarded $75 million in competitive, five-year grants to 75 nonprofit and public agencies, in 37 states and the District of Columbia, to implement the Teen Pregnancy Prevention program. We surveyed these grantees (and heard back from a third), and then interviewed a dozen of them as well as half a dozen technical assistance providers. We also spoke with the federal officials sponsoring the program at the Office of Adolescent Health (OAH). By the end of our research, we came to believe that the Teen Pregnancy Prevention program is a model with real potential for success. And we identified five key elements needed to support effective local implementation of EBPs, whether in teen pregnancy or other areas.

1. Choose grantees willing to focus on implementation.
Evidence-based programs, unlike many other programs, require diligent fidelity to a prescribed model. This can be a difficult cultural shift for practitioners who are used to having a lot of freedom in how they interact with their clients. "Some clinicians are like artists," says Doug Kopp, CEO of Functional Family Therapy LLC. "They don’t want you to mess with their creative process."

OAH made it clear when it issued the pregnancy prevention grants that it would require strict fidelity monitoring. Some grantees adjusted fairly easily, but others had to almost entirely switch out their practitioner workforce. "There was a joke in the agency that I was the hatchet around here because staff were coming and going," says Lily Rivera, the Teen Pregnancy Prevention program lead at La Alianza Hispana. "But we had to get the right staff."

2. Choose EBPs with implementation support services built in.
It's important to evaluate not only the quality of the evidence behind an EBP, but also the quality of support services. For some EBPs, supports aren’t readily available. Others have a crack team of technical assistance experts who have designed training, coaching, and shared data systems for all organizations implementing the program. Funders interested in underwriting implementation should prioritize scaling programs with strong supports already built in. They may also benefit from forging relationships with the developers of the EBPs they wish to scale.

3. Anchor success in a close partnership.
OAH identified a project lead at each grantee site and matched that person with a program officer. While one function of assigning clear roles is increased accountability, it is critical that the project lead isn’t managed as a risk, but supported as a partner. "The OAH project officers are not just monitoring grantees for compliance," said Amy Margolis, director of the Division of Program Development and Operations at OAH. "They are helping the grantees continuously enhance their programs."

Linda Rogers, project director with the Iredell-Statesville School District in North Carolina, one of several school district grantees, told us that the "level of support we get from OAH has been incredible." Many of the grantees we interviewed concurred.

4. Plan for learning.
The Teen Pregnancy Prevention program provided grantees with a year to assess needs, select programs, plan, hire staff, participate in trainings, pilot the intervention, and troubleshoot problems that showed up in the pilot. OAH was therefore making a trade-off between quantity—foregoing many tens of thousands of young people who could have been reached in that year—and the quality of the interventions over the full five-year grant period.

Claire Wyneken of the Wyman National Network emphasized the importance of the piloting phase. "If you're new to an EBP, you have to do a pilot—especially piloting actual implementation," she said. "Get staff acclimated to the program and all the logistics related to it. Work through local considerations, partner buy-in, and any bugs in deploying the program. Just because something is an EBP, you can’t just open a box and go."

5. Fund amply—you get what you pay for.
Leading field research indicates that organizations implementing EBPs typically need three types of funding: start-up or planning money as discussed above; infrastructure funding to provide training and coaching to frontline staff, and to measure implementation; and direct-service funding to actually administer the program as described by the developer.

Many funders inflexibly favor this last type of funding, and even then, cover only an assortment of program components rather than the full set of core components that the data shows must be present for successful implementation. As a result, nonprofits must haphazardly cobble together enough funding to deliver an evidence-based program and cannot do so in a sustainable—much less scalable—fashion. OAH alleviated the funding scramble by structuring its grants to include all three types of financial support.

For those concerned with eventually bringing down the cost of an evidence-based program, we hear you. It is very likely that EBPs will eventually need to travel down the cost curve if they are ever to scale. But funders who believe in evidence must structure grants to experiment with lowering cost per impact while measuring at each step along the way to ensure that impact is still happening.

For the "what works" movement to succeed, funders need to give implementation a second look. If "sexy" is in the eye of the beholder, perhaps there is nothing sexier than an evidence-based program achieving its promised impact.

Laura Burkhauser is a consultant in The Bridgespan Group’s San Francisco office.

Impact, Not Overhead, Is What Counts

This blog post originally appeared on the Stanford Social Innovation Review website on November 13, 2013.

By Christina Triantaphyllis and Matt Forti

Americans spent $39 billion in private philanthropy on the developing world in 2010, and the United States remains the highest net donor of aid. Many of these dollars flow through billion-dollar global nongovernmental organizations (NGOs). But how effective is this spending? Too often, donors rely on readily available metrics, such as the percentage of dollars spent on overhead vs. programs, instead of considering true measures of impact and cost effectiveness. The widely held belief that low overhead indicates greater effectiveness runs deep in the social sector and has been well documented in articles such as Stanford Social Innovation Review’s “The Nonprofit Starvation Cycle” and Bridgespan’s “Stop Starving Scale,” and in a TED talk by Charity Defense Council founder and president Dan Pallotta (a talk that generated both praise and censure). But fixing the problem is far easier said than done.

Increased awareness has not necessarily translated into changed behavior for global NGOs. “The general climate in the United States is that the sole criterion for evaluating a charitable gift is overhead,” said Richard Stearns, CEO of World Vision US. “The right question to ask is, ‘What impact is the organization having per donated dollar?’ When we ask the wrong question [about overhead], we punish the organization that’s investing enough [in administration] to have real impact.” (A recent New York Times article on “How to Choose a Charity Wisely” demonstrates persistent challenges in shifting the conversation to metrics that matter.)

What stands in the way of philanthropists, nonprofits, and other stakeholders asking and answering the right question? In 2012, The Bridgespan Group surveyed two dozen leaders of US-based global NGOs with budgets exceeding $100 million, all operating in 10 countries or more. The results revealed numerous barriers to obtaining true estimates of both cost and impact:

Fragmented measurement and evaluation (M&E) systems do not permit cross-country comparisons of programs and sites. Only 32 percent of global NGOs surveyed reported having uniform metrics or logic models to guide programs. Many cited a desire to build them but lacked a clear funding source. One NGO leader was frustrated with funds that “do not cover longitudinal, cross-country M&E systems that would allow us to compare results in Liberia with those in Nicaragua or Costa Rica.”

Organizations under-invest in M&E activities that would enable impact-per-dollar program measurement. NGOs reported spending one-third as much on M&E as the 5 to 10 percent recommended by experts. In addition, nearly half of NGOs were unable to report total measurement costs, either because staff time spent on measurement was not documented or because costs were rolled into other cost categories.

Sufficient investment in M&E activities goes beyond randomized controlled trials that prove isolated outcomes; it also takes systematic use of cost and impact data to inform organizational strategy for scaling effective and efficient programs. “It is so much easier to measure our financial rather than our social bottom line,” explained an NGO leader. “Until we can figure that out, we will be stuck with financial measures of effectiveness.”

A lack of unified financial systems hinders real-time, true-cost data. NGOs rely on the generosity of funders who, for the most part, restrict their investments to specific programs, resulting in a patchwork of fragmented, short-term engagements across countries and continents. Thus, NGOs do not understand the full costs associated with delivering impact, and instead focus on splitting program or country expenses from headquarters expenses.

Only 36 percent of global NGOs surveyed reported having a unified financial system across all countries, programs, and offices that would allow efficient reporting of true costs.

“We struggle with using several different financial software packages for different projects, and significant (and as yet unmet) financial resources will be needed to create a unified system,” said another NGO leader.

Cost-per-impact is certainly not the right measure for all organizations, especially those focused on advocacy, disaster relief, or holistic community development, where it is extremely difficult to summarize impact in a single outcome aligned against relevant costs. For others, there are a number of ways to shift the conversation to focus on solutions:

  1. Funders should be true partners and reward performance based on a cost-per-impact metric. As one NGO leader put it, “No one has made the case that you need strong measurement and financial systems to get to impact and cost per result. Everyone wants to get through with Band-Aids and chewing gum.” Global NGOs need both patient funders and value-added collaborators in the journey toward cost-per-impact measures.
  2. Board members and other stakeholders should serve as champions for these efforts, pushing for longer-term investments. Global NGO leaders pointed to the pervasive cost-cutting mentality that many business-minded board members bring. “The knee-jerk reaction is to point to a well in Bangladesh that will not be built or to the number of children not vaccinated instead of considering what will help us determine how to serve 100 times that many children in the future," said one leader. Another said, “It took opening up the hood and showing the dirt … showing the board that we have five ledger systems and too many people working on this.”
  3. Rankings and ratings systems need to emphasize measurement of effectiveness. While ratings systems such as Charity Navigator and GuideStar continue to focus on overhead rates, accountability, and transparency, newer sites such as GiveWell and 3ie are putting cost-per-impact on the map in global development—a positive trend that needs support from NGOs, funders, and the public to change the system.

We have seen a rise in media attention for organizations like the Against Malaria Foundation that can capture cost-per-impact and cost-effectiveness metrics. But we have a long way to go—major challenges include coming up with uniform ways of measuring cost and outcomes to arrive at comparable measures. NGOs with multiple programs on several continents face a tremendous amount of complexity in developing these uniform measures and implementing systems that efficiently collect and synthesize data.
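
As a rough illustration of what a comparable cost-per-impact figure involves, here is a minimal sketch with entirely hypothetical numbers. It assumes each program reports direct costs, an allocated share of indirect costs, and M&E costs against the same verified outcome unit, which is precisely the uniformity that is hard to achieve in practice.

    # Minimal sketch of a comparable cost-per-impact figure. All numbers are
    # hypothetical; the point is that every program must count costs and
    # outcomes the same way for the comparison to mean anything.
    def cost_per_impact(direct_cost, indirect_cost_allocated, me_cost, outcome_units):
        """True cost per verified outcome unit (e.g., per bed net delivered)."""
        if outcome_units <= 0:
            raise ValueError("need at least one verified outcome unit")
        return (direct_cost + indirect_cost_allocated + me_cost) / outcome_units

    programs = {
        "bed_nets_country_a": dict(direct_cost=400_000, indirect_cost_allocated=60_000,
                                   me_cost=25_000, outcome_units=120_000),
        "bed_nets_country_b": dict(direct_cost=350_000, indirect_cost_allocated=55_000,
                                   me_cost=20_000, outcome_units=90_000),
    }
    for name, figures in programs.items():
        print(name, round(cost_per_impact(**figures), 2))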

But with the support of funders and board members, NGOs could start by focusing on a few core programs in a subset of countries where measuring cost-per-impact may be more manageable, and then build on the knowledge gained from early experiences. We also need a coordinated, supportive movement toward ratings that are based on results for beneficiaries. All this will take honest dialogue between NGOs and their funders to move the conversation to cost-per-impact.

Christina Triantaphyllis (@cltrianta) is a consultant in Bridgespan’s Boston office.

Matthew Forti is director, One Acre Fund USA, where he coordinates all US functions and oversees performance measurement for One Acre Fund, a nonprofit that assists over 135,000 smallholder farming families in East Africa to double their farm profits and eliminate persistent hunger. Matt is also advisor to the Performance Measurement Capability Area at the Bridgespan Group (@BridgespanGroup), an advisory firm to mission-driven leaders and organizations that supports the design of performance measurement systems for continuous learning and improvement.

Beyond Input—What Happens When Nonprofits Really Engage the People They Serve?

This blog originally appeared on Huff Post Impact on September 3, 2013.

By Willa Seldon
 
At a college I visited recently, I read a poster on which a student described her "superpower"--her strongest capability--as listening. What if that were broadly true of the social sector?

Nearly all nonprofits collect input and other kinds of data from those they serve--at a minimum, demographic information and satisfaction surveys. But some leading nonprofits are engaging clients or beneficiaries or residents--here we use the term constituents--in order to have more impact on the social concerns they are trying to address. Of course, some organizations are constituent-led--parent groups, neighborhood associations, civil rights organizations, membership groups.
 
Yet for many social sector organizations, constituent engagement is a challenge, as listening is a challenge for many people. While most of us understand that it makes sense to find out what people think, it is often unclear what the best strategies are for eliciting useful and timely input, much less how to take action based on it. And few can imagine involving constituents in deeper ways, like developing programs or giving constituents control of resources.
 
Consider Friendship Public Charter School, a $72 million charter management organization that runs six charter and five turnaround schools in Washington, D.C. and Baltimore. Like many charter organizations, Friendship initially drew upon research-based strategies to increase student achievement: longer school days, double doses of math and reading, team teaching. But by 2006, student achievement gains had flatlined, prompting Chief Operating Officer Patricia Brantley to search for new ways to get better results. Friendship ultimately decided to engage its primary constituents--students and parents--as well as teachers, to co-develop a new approach to performance management that would help them keep improving.
 
Friendship gathered input from its constituents to create a list of leading indicators that drive student achievement and then built a system for teachers to see this data in real-time. But, said Brantley, to "truly enable breakaway performance required making the data useful for students and parents." So teachers posted simple scorecards of classroom performance, including measures such as attendance and discipline, and offered incentives that motivated students to work together to improve.
 
And they taught students how to track their own data. "We expect students as young as kindergartners to be able to explain and provide evidence of their progress to their teachers, their peers, and their parents," Brantley said. Younger students might affix stars onto a paper as they learn each of five vocabulary words that are their goal for the week. Older students use indicators, such as mastery of core subjects, to figure out if they're ultimately on track for college completion. They are taught to use the data to set more ambitious goals for themselves and share progress at parent-teacher "Data Nights."
 
By involving students in tracking their own progress and setting their own goals, Friendship engages students and parents as partners in the design and implementation of the performance management system. Achievement at Friendship schools is moving upward again--90 percent of the schools have seen sustained gains in reading scores, math scores, and attendance--and Brantley credits these efforts as the key driver.
 
This kind of "co-creation" can go further, reaching into the design and delivery of programs, advocacy, and governance. Consider the example of YouthBuild, a network of 273 independent affiliates coordinated by YouthBuild USA that have worked with over 100,000 unemployed, low-income 16- to 24-year-olds to build affordable housing, advance their education, and help them become community leaders. YouthBuild was founded upon the philosophy that young people must play a role in solving the challenges they and their communities face.
 
But how does an organization as big as YouthBuild USA turn this lofty ideal into true co-creation? For one thing, its youth participants help run the local organizations. Youth Policy Councils, with members elected by their peers, meet weekly with each site's leader to discuss issues such as program policies and staff hiring. Lots of direct service organizations have constituent advisory groups, but YouthBuild's Youth Policy Councils have actual power. For instance, they might choose among three candidates put forward for a senior staff vacancy or decide how additional funds should be spent. YouthBuild USA President Dorothy Stoneman recalled, "I once made the mistake of not listening to the young people in a hiring decision, and it turned out they were right. The person was asked to leave the organization within six months. That was the last time I disregarded the voice of our constituents."
 
Reflecting on the role of constituents in an organization committed to social change, Stoneman said, "As a white participant in the civil rights movement in the sixties, I learned the importance of listening and holding myself accountable to the local community. I think constituent engagement is the big overlooked essential that today's generation of social innovators haven't been taught, haven't learned through actions, and haven't been pushed to consider by constituents, funders, and consultants."
 
The kind of engagement that Friendship Public Charter School and YouthBuild have with their constituents takes practice, persistence, a willingness to learn, and a recognition that constituent perspectives are not a panacea. (My colleague Matthew Forti and I provide additional examples in “From Input to Ownership: How Nonprofits Can Engage with the People They Serve to Carry Out Their Missions.”) As more organizations demonstrate effective strategies for constituent engagement, my hope is that the social sector will learn more about what approaches work best, and how engagement can better be integrated with evidence-based practices and programs.

Measuring to Improve vs. Improving Measurement

This blog post originally appeared on the Stanford Social Innovation Review website.

Measurement was once again a hot topic at this year’s Skoll Forum; with seven measurement-related sessions over three days, it eclipsed other perennially popular topics like funding and innovation. And yet there was a marked difference in the discourse this year, with many speakers and attendees questioning whether social sector organizations are thinking too narrowly about the whole paradigm of measurement. Put another way, there seemed to be a real tension over whether the greatest bang for the buck in measurement will come from organizations measuring for their own improvement, or from the social sector improving the measurement tools and techniques available to organizations in the first place.

A session I co-led, “Measuring to Improve (and Not Just to Prove),” fell decidedly in the first camp. With most social sector organizations under-resourcing and under-prioritizing measurement, the session argued that organizations get the best return when they: a) collect a small number of easily verifiable measures linked to their theories of change, b) do this regularly at every level, and c) couple data collection with analysis, learning, and action. The session used One Acre Fund, an NGO that boosts incomes of smallholder farmers in East Africa (and where I’m the founding board chair), as an example. At the lowest level, field officers, who work directly with farmers, collect data each week on farmer repayments, farmer attendance at trainings, and farmer adoption of One Acre Fund techniques, and work in groups to analyze it. Middle managers are trained to look at aggregate data around these measures and quickly take action to fix anomalies. And at the highest level, leadership focuses on simple organizational measures, such as average increase in farmer income and farm income generated per donor dollar invested, rather than every possible outcome.
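
By way of illustration only (this is not One Acre Fund's actual system), a weekly roll-up of the kind described above might look like the following sketch, with hypothetical site names and rates: aggregate the field-officer data and flag any site that falls well below the program-wide average so a middle manager can act on it.

    # Hypothetical sketch (not One Acre Fund's actual system): aggregate weekly
    # field data by site and flag sites well below the program average.
    from statistics import mean

    weekly_reports = [
        {"site": "Site A", "repayment_rate": 0.97, "training_attendance": 0.88},
        {"site": "Site B", "repayment_rate": 0.72, "training_attendance": 0.90},
        {"site": "Site C", "repayment_rate": 0.95, "training_attendance": 0.61},
    ]

    def flag_anomalies(reports, metric, tolerance=0.15):
        """Return sites more than `tolerance` below the average for `metric`."""
        average = mean(r[metric] for r in reports)
        return [r["site"] for r in reports if r[metric] < average - tolerance]

    for metric in ("repayment_rate", "training_attendance"):
        print(metric, "->", flag_anomalies(weekly_reports, metric))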

Other Skoll sessions and content drove home a similar view. Caroline Fiennes, director of Giving Evidence, talked about the “operational uselessness” of collecting impact data solely on your organization’s current model, without comparison to other approaches you or others are utilizing or testing that might deliver better results, lower costs, or both. Ehren Reed, Skoll Foundation’s research and evaluation officer, argued that the most successful social entrepreneurs are constantly tweaking their business models by scanning their environments and internalizing the implications for their strategies. One social enterprise leader perhaps put it best when she noted, “We decided that if we couldn’t name a meaningful action we would take as a result, we would stop collecting the data.”

On the other hand, several Skoll sessions were devoted to new measurement tools and techniques that could theoretically propel a giant leap forward in the social sector’s use of data. Big data, for one, arose time and again, with proponents arguing for its ability to turbo-charge social sector impact much in the same way that it has turbo-charged profits for the Facebooks and Amazon.coms of the corporate world. While presenters shared several promising examples, including Global Giving’s Storytelling Tools and Benetech’s new Bookshare Web Reader, there seemed to be a dangerous extrapolation from these examples to a prevailing belief that “big data” would plug the “big gap” in social impact potential across the sector.

Similarly, funding vehicles such as impact investing and social impact bonds were highlighted extensively as new tools meant to accelerate impact in the social sector. And yet the data suggests that both are struggling to gain traction, given the small number of interventions that can absorb these funding types.

At the end of the day, the usefulness of any measurement tool depends on whether it is the best at addressing a high-priority question that a decision-maker at any level of an organization is seeking to answer. Most social sector organizations are still struggling to answer basic questions about their program models: Do a high proportion of the clients they reach meet the organization’s own selection criteria? Do clients that participate more realize higher levels of outcomes? Does the organization’s model produce greater impact per dollar than the other models available for their target clients? These basic questions require basic measurement tools, coupled with a much greater leadership commitment to—and a culture that embraces—data-driven decision-making. For this reason, newer tools like big data may be more of a big distraction than a big opportunity for the typical social sector organization.
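
To underline how basic these tools can be, here is a small sketch with hypothetical data for the second question above, whether clients who participate more realize better outcomes. A plain correlation, reviewed by someone willing to act on the answer, is often a sufficient starting point.

    # Illustrative sketch with hypothetical data: do clients who participate
    # more see better outcomes? No "big data" machinery required.
    from statistics import correlation  # available in Python 3.10+

    sessions_attended = [2, 5, 8, 3, 10, 7, 1, 9]
    outcome_scores = [48, 55, 70, 50, 78, 66, 45, 74]

    r = correlation(sessions_attended, outcome_scores)
    print(f"participation vs. outcome correlation: {r:.2f}")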

What is your experience applying these new kinds of measurement tools and approaches to your organization? Has it worked, and if so, why?
 
Matthew Forti is director, One Acre Fund USA, where he coordinates all US functions and oversees performance measurement for One Acre Fund, a nonprofit that assists over 135,000 smallholder farming families in East Africa to double their farm profits and eliminate persistent hunger. Matt is also advisor to the Performance Measurement Capability Area at The Bridgespan Group (@BridgespanGroup), an advisory firm to mission-driven leaders and organizations that supports the design of performance measurement systems for continuous learning and improvement.