This blog post originally appeared on the Stanford Social Innovation Review website on November 13, 2013.
By Christina Triantaphyllis and Matt Forti
Americans spent $39 billion in private philanthropy on the developing world in 2010, and the United States remains the highest net donor of aid. Many of these dollars flow through billion-dollar global nongovernmental organizations (NGOs). But how effective is this spending? Too often, donors rely on readily available metrics, such as the percentage of dollars spent on overhead vs. programs, instead of considering true measures of impact and cost effectiveness. The widely held belief that low overhead indicates greater effectiveness runs deep in the social sector and has been well documented in articles such as Stanford Social Innovation Review’s “The Nonprofit Starvation Cycle” and Bridgespan’s “Stop Starving Scale,” and in a TED talk by Dan Pallotta, founder and president of the Charity Defense Council, that generated both praise and censure. But fixing the problem is far easier said than done.
Increased awareness has not necessarily translated into changed behavior for global NGOs. “The general climate in the United States is that the sole criterion for evaluating a charitable gift is overhead,” said Richard Stearns, CEO of World Vision US. “The right question to ask is, ‘What impact is the organization having per donated dollar?’ When we ask the wrong question [about overhead], we punish the organization that’s investing enough [in administration] to have real impact.” (A recent New York Times article on “How to Choose a Charity Wisely” demonstrates persistent challenges in shifting the conversation to metrics that matter.)
What stands in the way of philanthropists, nonprofits, and other stakeholders asking and answering the right question? In 2012, The Bridgespan Group surveyed two dozen leaders of US-based global NGOs with budgets exceeding $100 million, all operating in 10 countries or more. The results revealed numerous barriers to obtaining true estimates of both cost and impact:
Fragmented measurement and evaluation (M&E) systems do not permit cross-country comparisons of programs and sites. Only 32 percent of global NGOs surveyed reported having uniform metrics or logic models to guide programs. Many cited a desire to build such systems but lacked a clear funding source. One NGO leader was frustrated with funds that “do not cover longitudinal, cross-country M&E systems that would allow us to compare results in Liberia with those in Nicaragua or Costa Rica.”
Organizations under-invest in M&E activities that would enable impact-per-dollar program measurement. NGOs reported spending one-third as much on M&E as the 5 to 10 percent recommended by experts. In addition, nearly half of NGOs were unable to report total measurement costs, either because staff time spent on measurement was not documented or because costs were rolled into other cost categories.
Sufficient investment in M&E activities goes beyond randomized control trials to prove isolated outcomes. It also takes systematic use of cost and impact data to inform organizational strategy for scaling effective and efficient programs. “It is so much easier to measure our financial rather than our social bottom line,” explained an NGO leader. “Until we can figure that out, we will be stuck with financial measures of effectiveness.”
A lack of unified financial systems hinders real-time, true-cost data. NGOs rely on the generosity of funders who, for the most part, restrict their investments to specific programs, resulting in a patchwork of fragmented, short-term engagements across countries and continents. Thus, NGOs do not understand all costs associated with delivering impact, and instead focus on program or country vs. headquarters expenses.
Only 36 percent of global NGOs surveyed reported having a unified financial system across all countries, programs, and offices that would allow efficient reporting of true costs.
“We struggle with using several different financial software packages for different projects, and significant (and as yet unmet) financial resources will be needed to create a unified system,” said another NGO leader.
Cost-per-impact is certainly not for all organizations, especially those focused on advocacy, disaster relief, or holistic community development, where it is extremely difficult to summarize impact by a single outcome aligned against relevant costs. For others, there are a number of ways to shift the conversation to focus on solutions:
Funders should be true partners and reward performance based on a cost-per-impact metric. As one NGO leader put it, “No one has made the case that you need strong measurement and financial systems to get to impact and cost per result. Everyone wants to get through with Band-Aids and chewing gum.” Global NGOs need both patient funders and value-added collaborators in the journey toward cost-per-impact measures.
Board members and other stakeholders should serve as champions for these efforts, pushing for longer-term investments. Global NGO leaders pointed to the pervasive cost-cutting mentality that many business-minded board members bring. “The knee-jerk reaction is to point to a well in Bangladesh that will not be built or to the number of children not vaccinated instead of considering what will help us determine how to serve 100 times that many children in the future,” said one leader. Another said, “It took opening up the hood and showing the dirt … showing the board that we have five ledger systems and too many people working on this.”
Rankings and ratings systems need to emphasize measurement of effectiveness. While ratings systems such as Charity Navigator and GuideStar continue to focus on overhead rates, accountability, and transparency, newer sites such as GiveWell and 3ie are putting cost-per-impact on the map in global development—a positive trend that needs support from NGOs, funders, and the public to change the system.
We have seen a rise in media attention for organizations like the Against Malaria Foundation that can capture cost-per-impact and cost-effectiveness metrics. But we have a long way to go—major challenges include coming up with uniform ways of measuring cost and outcomes to arrive at comparable measures. NGOs with multiple programs on several continents face a tremendous amount of complexity in developing these uniform measures and implementing systems that efficiently collect and synthesize data.
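To make the arithmetic behind a cost-per-impact comparison concrete, here is a minimal sketch in Python. The program names, costs, and outcome figures are entirely hypothetical; the point is only to show that a fair comparison requires attributing full costs, including a share of headquarters overhead, to each program:

```python
# Illustrative only: program names, costs, and outcome counts are invented.
# Cost-per-impact = total cost attributable to a program / units of outcome achieved.

def cost_per_impact(program_cost, allocated_overhead, outcomes_achieved):
    """Full cost (direct program spending plus allocated HQ overhead)
    per unit of outcome delivered."""
    if outcomes_achieved <= 0:
        raise ValueError("outcomes_achieved must be positive")
    return (program_cost + allocated_overhead) / outcomes_achieved

programs = {
    # name: (direct program cost, allocated overhead, outcomes, outcome unit)
    "clean-water-liberia":   (1_200_000, 180_000, 46_000, "people with safe water"),
    "clean-water-nicaragua": (  950_000, 140_000, 31_000, "people with safe water"),
}

for name, (cost, overhead, outcomes, unit) in programs.items():
    cpi = cost_per_impact(cost, overhead, outcomes)
    print(f"{name}: ${cpi:.2f} per {unit}")
```

Note that a program with lower overhead is not automatically the better buy: what matters is the full cost divided by the outcomes achieved.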
But with the support of funders and board members, NGOs could start by focusing on a few core programs in a subset of countries where measuring cost-per-impact may be more manageable, and then building on the knowledge gained from early experiences. We also need a coordinated, supportive movement toward ratings that are based on results for beneficiaries. All this will take honest dialogue between NGOs and their funders to move the conversation to cost-per-impact.
Christina Triantaphyllis (@cltrianta) is a consultant in Bridgespan’s Boston office.
Matthew Forti is director, One Acre Fund USA, where he coordinates all US functions and oversees performance measurement for One Acre Fund, a nonprofit that assists over 135,000 smallholder farming families in East Africa to double their farm profits and eliminate persistent hunger. Matt is also advisor to the Performance Measurement Capability Area at the Bridgespan Group (@BridgespanGroup), an advisory firm to mission-driven leaders and organizations that supports the design of performance measurement systems for continuous learning and improvement.
Posted: 11/20/2013 2:55:05 PM by Carole Matthews
This blog originally appeared on Huff Post Impact on September 3, 2013.
By Willa Seldon
At a college I visited recently, I read a poster where a student had described her "superpower"--her strongest capability--as listening. What if that were broadly true of the social sector?
Nearly all nonprofits collect input and other kinds of data from those they serve--at a minimum, demographic information and satisfaction surveys. But some leading nonprofits are engaging clients or beneficiaries or residents--here we use the term constituents--in order to have more impact on the social concerns they are trying to address. Of course, some organizations are constituent-led--parent groups, neighborhood associations, civil rights organizations, membership groups.
Yet for many social sector organizations, constituent engagement is a challenge, as listening is a challenge for many people. While most of us understand that it makes sense to find out what people think, it is often unclear what the best strategies are for eliciting useful and timely input, much less how to take action based on it. And few can imagine involving constituents in deeper ways, like developing programs or giving constituents control of resources.
Consider Friendship Public Charter School, a $72 million charter management organization that runs six charter and five turnaround schools in Washington, D.C. and Baltimore. Like many charter organizations, Friendship initially drew upon research-based strategies to increase student achievement: longer school days, double doses of math and reading, team teaching. But by 2006, student achievement gains had flatlined, prompting Chief Operating Officer Patricia Brantley to search for new ways to get better results. Friendship ultimately decided to engage its primary constituents--students and parents--as well as teachers, to co-develop a new approach to performance management that would help them keep improving.
Friendship gathered input from its constituents to create a list of leading indicators that drive student achievement and then built a system for teachers to see this data in real-time. But, said Brantley, to "truly enable breakaway performance required making the data useful for students and parents." So teachers posted simple scorecards of classroom performance, including measures such as attendance and discipline, and offered incentives that motivated students to work together to improve.
And they taught students how to track their own data. "We expect students as young as kindergartners to be able to explain and provide evidence of their progress to their teachers, their peers, and their parents," Brantley said. Younger students might affix stars onto a paper as they learn each of five vocabulary words that are their goal for the week. Older students use indicators, such as mastery of core subjects, to figure out if they're ultimately on track for college completion. They are taught to use the data to set more ambitious goals for themselves and share progress at parent-teacher "Data Nights."
By involving students in tracking their own progress and setting their own goals, Friendship engages students and parents as partners in the design and implementation of the performance management system. Achievement at Friendship schools is moving upward again--90 percent of the schools have seen sustained gains in reading scores, math scores, and attendance--and Brantley credits these efforts as the key driver.
This kind of "co-creation" can go further, reaching into the design and delivery of programs, advocacy, and governance. Consider the example of YouthBuild, a network of 273 independent affiliates coordinated by YouthBuild USA that have worked with over 100,000 unemployed, low-income 16- to 24-year-olds to build affordable housing, advance their education, and help them become community leaders. YouthBuild was founded upon the philosophy that young people must play a role in solving the challenges they and their communities face.
But how does an organization as big as YouthBuild USA turn this lofty ideal into true co-creation? For one thing, its youth participants help run the local organizations. Youth Policy Councils, with members elected by their peers, meet weekly with each site's leader to discuss issues such as program policies and staff hiring. Lots of direct service organizations have constituent advisory groups, but YouthBuild's Youth Policy Councils have actual power. For instance, they might choose among three candidates put forward for a senior staff vacancy or decide how additional funds should be spent. YouthBuild USA President Dorothy Stoneman recalled that, "I once made the mistake of not listening to the young people in a hiring decision, and it turned out they were right. The person was asked to leave the organization within six months. That was the last time I disregarded the voice of our constituents."
Reflecting on the role of constituents in an organization committed to social change, Stoneman said, "As a white participant in the civil rights movement in the sixties, I learned the importance of listening and holding myself accountable to the local community. I think constituent engagement is the big overlooked essential that today's generation of social innovators haven't been taught, haven't learned through actions, and haven't been pushed to consider by constituents, funders, and consultants."
The kind of engagement that Friendship Public Charter School and YouthBuild have with their constituents takes practice, persistence, a willingness to learn, and a recognition that constituent perspectives are not a panacea. (My colleague Matthew Forti and I provide additional examples in “From Input to Ownership: How Nonprofits Can Engage with the People They Serve to Carry Out Their Missions.”) As more organizations demonstrate effective strategies for constituent engagement, my hope is that the social sector will learn more about what approaches work best, and how engagement can better be integrated with evidence-based practices and programs.
Posted: 9/9/2013 11:01:16 AM by Carole Matthews
This blog post originally appeared on the Stanford Social Innovation Review website.
Measurement was once again a hot topic at this year’s Skoll Forum; with seven measurement-related sessions over three days, it eclipsed other perennially popular topics like funding and innovation. And yet there was a marked difference in the discourse this year, with many speakers and attendees questioning whether social sector organizations are thinking too narrowly about the whole paradigm of measurement. Put another way, there was a real tension over whether the greatest bang for the buck in measurement will come from organizations measuring for their own improvement, or from the sector improving the measurement tools and techniques available to organizations in the first place.
A session I co-led, “Measuring to Improve (and Not Just to Prove),” fell decidedly in the first camp. With most social sector organizations under-resourcing and under-prioritizing measurement, the session argued that organizations get the best return when they: a) collect a small number of easily verifiable measures linked to their theories of change, b) do this regularly at every level, and c) couple data collection with analysis, learning, and action. The session used One Acre Fund, an NGO that boosts incomes of smallholder farmers in East Africa (and where I’m the founding board chair), as an example. At the lowest level, field officers, who work directly with farmers, collect data each week on farmer repayments, attendance at trainings, and adoption of One Acre Fund techniques, and analyze it together in groups. Middle managers are trained to look at aggregate data around these measures and quickly take action to fix anomalies. And at the highest level, leadership focuses on simple organizational measures, such as average increase in farmer income and farm income generated per donor dollar invested, rather than every possible outcome.
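As a rough sketch of the middle-manager step described above (scanning aggregate weekly data and flagging anomalies for quick action), the following Python fragment uses invented site names, rates, and thresholds; it is illustrative, not One Acre Fund's actual system:

```python
# Hypothetical sketch of the "aggregate and flag anomalies" step.
# Site names, repayment rates, and the threshold are invented for illustration.

from statistics import mean

weekly_repayment_rates = {
    # site: weekly on-time repayment rates reported by field officers
    "site_a": [0.97, 0.96, 0.98, 0.97],
    "site_b": [0.95, 0.94, 0.78, 0.96],  # week 3 looks anomalous
}

def flag_anomalies(site_data, drop_threshold=0.10):
    """Flag any week whose rate falls more than drop_threshold below
    that site's average, so a manager can investigate quickly."""
    flags = []
    for site, rates in site_data.items():
        avg = mean(rates)
        for week, rate in enumerate(rates, start=1):
            if avg - rate > drop_threshold:
                flags.append((site, week, rate))
    return flags

print(flag_anomalies(weekly_repayment_rates))  # site_b, week 3 gets flagged
```

The design point is the simplicity: a small number of easily verifiable measures plus a mechanical rule for where to look next, rather than an exhaustive dashboard.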
Other Skoll sessions and content drove home a similar view. Caroline Fiennes, director of Giving Evidence, talked about the “operational uselessness” of collecting impact data solely on your organization’s current model, without comparison to other approaches you or others are utilizing or testing that might deliver better results, lower costs, or both. Ehren Reed, Skoll Foundation’s research and evaluation officer, argued that the most successful social entrepreneurs are constantly tweaking their business models by scanning their environments and internalizing the implications for their strategies. One social enterprise leader perhaps put it best when she noted, “We decided that if we couldn’t name a meaningful action we would take as a result, we would stop collecting the data.”
On the other hand, several Skoll sessions were devoted to new measurement tools and techniques that could theoretically propel a giant leap forward in the social sector’s use of data. Big data, for one, arose time and again, with proponents arguing for its ability to turbo-charge social sector impact much in the same way that it has turbo-charged profits for the Facebooks and Amazon.coms of the corporate world. While presenters shared several promising examples, including Global Giving’s Storytelling Tools and Benetech’s new Bookshare Web Reader, there was a dangerous extrapolation from these examples to a prevailing belief that “big data” would plug the “big gap” in social impact potential across the sector.
Similarly, funding vehicles such as impact investing and social impact bonds were highlighted extensively as new tools meant to accelerate impact in the social sector. And yet the data suggests that both are struggling to gain traction, given the small number of interventions that can absorb these funding types.
At the end of the day, the usefulness of any measurement tool depends on whether it is the best at addressing a high-priority question that a decision-maker at any level of an organization is seeking to answer. Most social sector organizations are still struggling to answer basic questions about their program models: Do a high proportion of the clients they reach meet the organization’s own selection criteria? Do clients that participate more realize higher levels of outcomes? Does the organization’s model produce greater impact per dollar than the other models available for their target clients? These basic questions require basic measurement tools, coupled with a much greater leadership commitment to—and a culture that embraces—data-driven decision-making. For this reason, newer tools like big data may be more of a big distraction than a big opportunity for the typical social sector organization.
What is your experience applying these new kinds of measurement tools and approaches to your organization? Has it worked, and if so, why?
Matthew Forti is director, One Acre Fund USA, where he coordinates all US functions and oversees performance measurement for One Acre Fund, a nonprofit that assists over 135,000 smallholder farming families in East Africa to double their farm profits and eliminate persistent hunger. Matt is also advisor to the Performance Measurement Capability Area at The Bridgespan Group (@BridgespanGroup), an advisory firm to mission-driven leaders and organizations that supports the design of performance measurement systems for continuous learning and improvement.
Posted: 5/22/2013 9:37:40 AM by Matthew Forti
This is part of a series of reflective posts by regular Stanford Social Innovation Review (SSIR) bloggers in honor of SSIR’s 10th anniversary.
In my December 2012 entry recapping the year in performance measurement, I shared two studies that suggest nonprofits and funders continue to struggle with the kind of measurement that drives continuous improvement. But viewed over a longer time period, I’d argue that the social sector has made some pretty impressive gains in this area.
So in honor of SSIR’s anniversary, let’s raise a glass to celebrate five improvements in performance measurement during the last 10 years:
From overhead to outcomes. Prevailing wisdom used to be that the best way to judge the efficiency of nonprofits was by looking at the proportion of their budgets that they spent on overhead. Today, websites such as GiveWell and the Coalition for Evidence-Based Policy are rating nonprofits on the quality of their outcomes. Even Charity Navigator has dramatically changed course—“results reporting” is now a pillar of its rating system.
From ideology to evidence, particularly in global development. In the early 2000s, fierce ideological debates about how to bring developing countries out of poverty were the norm. Then a new breed of economists at the Poverty Action Lab and its sister networks proposed letting the evidence decide. Though the meteoric growth in the use of randomized control trials is controversial, it has helped the largest global development funders make more decisions based on evidence about what works—a shift we are also starting to see in some domains, such as education, in the United States.
From isolated to shared measurement. The past decade has seen renewed interest in partnership and collaboration across agencies and sectors to achieve common goals. Especially exciting is that many of these local collaborations, as well as national networks, are now setting common indicators, reporting through a common data system, and analyzing and learning from data together.
From straightforward to complex interventions. Innovation in measurement used to focus mainly on direct-service interventions with fairly linear logic models (for example, how to rigorously measure outcomes of a summer literacy program for children). But lately there has been an explosion of innovative approaches (such as developmental evaluation, outcome mapping, and policymaker ratings) for measuring what’s happening in more dynamic environments, such as advocacy work, systems change, or neighborhood revitalization. These and other approaches are allowing organizations and initiatives with complex interventions to learn more from their measurement, undergirded by more sophisticated theories of change.
From external evaluation to performance management. In the past, as described by the 2006 SSIR article “Drowning in Data,” social sector measurement was dominated by funder requests for endless lists of metrics or proof of program impact from external evaluations. While many nonprofits will tell you these requests haven’t necessarily slowed, there has been a more concerted effort by evaluation firms and other measurement providers to develop techniques and tools that support performance management. For example: the adaptation of the balanced scorecard to the nonprofit sector, the proliferation of performance management data systems tailored to nonprofits, and the launch of the PerformWell website.
And now for you “glass half empty” types, five areas where we need to make much more progress. We need:
Greater focus on long-term outcomes. Nonprofits still get by with measuring only what’s easiest to measure: short-term outcomes such as school attendance or job placement rates. But if nonprofits want to change lives, they (and their funders) should also care about whether their programs are helping people achieve more meaningful goals, such as completing college or making sustained economic gains. If more nonprofits asked the question of whether the people they serve actually end up in a better situation over the long haul, there would perhaps be a greater focus on holistic interventions or collaborations that create solid pathways to those more meaningful outcomes.
Fewer organizations getting big on the basis of limited evidence. A recent study by Veris Consulting found that only 39 percent of nonprofits that are scaling their programs have evaluated the impact of their work in any way; less than a fifth had a third-party outcome or impact evaluation. While having multiple sites can help an organization test and improve its programs, more extensive growth should require rigorous evaluation.
Greater recognition of organizational and contextual factors that drive strong performance. In the rare instances when models are proven highly effective, funders understandably get excited about scaling those models. But most evaluation studies devote little, if any, attention to underlying organizational factors (such as culture and leader characteristics) and contextual factors (such as regulatory climate and the presence of high-capacity partners) that play a role in success. In their absence, organizations or funders often require that replicators follow the original model with full fidelity; yet this precludes important adaptations and improvements that could increase the odds of success.
A bigger role for an organization’s constituents in performance measurement. Nonprofits have a hard enough time implementing a measurement system that works for senior leaders and program staff; they rarely tackle the question of how measurement can work for the individuals, families, and communities they seek to benefit. But nonprofits often do a real disservice to these constituents—and their own success—when they fail to involve their constituents in reflecting on results, setting goals, and deciding how best to achieve those goals.
As much attention given to “performance audits” as financial audits. Though exact requirements vary by state, most nonprofits with revenues greater than $500,000 are required to obtain an annual independent financial audit. Yet only a handful ever get an outside assessment of their self-reported program performance data. Over time, the social sector would benefit from independent social impact analysts to help clarify for funders and nonprofits the extent to which program data is accurate and representative.
As you reflect on the past decade, what other gains or gaps in social sector measurement would you put on the list?
Posted: 4/18/2013 10:33:47 AM by Matthew Forti
(This blog post originally appeared on the Stanford Social Innovation Review website.)
By Matt Forti and Matt Plummer
The philanthropic sector seems to be changing its tune about failure. While some, like former Hewlett Foundation President Paul Brest, have been encouraging philanthropists to talk about their failures (of grants, initiatives, or entire strategies) for years, only more recently has the sector more widely adopted the view that failure can be something positive, an indicator of a willingness to take risks, experiment, and adapt. A number of recent initiatives demonstrate this new outlook: the Case Foundation’s Be Fearless campaign, the Institute of Brilliant Failures Award for Best Learning Moment in international development, the Admitting Failure online community, and the FailFaire conferences. All of these have launched in just the last three years.
While failure can be an incredibly valuable learning tool, research from the private sector suggests that most organizations don’t take a systematic approach to experimentation, and therefore don’t reap the benefits of failure. In 2011, Bridgespan began a series of blog posts, “Does Your Philanthropy Have an Adaptive Strategy?”, based on a decade of close client work with philanthropists. These blogs chronicled an emerging redefinition of strategy: a shift from a static view of what constitutes success toward a more flexible one, and a greater willingness to prototype ideas, learn from mistakes, and adapt in light of new information and opportunities. A video series of candid conversations with more than 60 philanthropists, recently released by Bridgespan, echoes this approach and provides five insights into how to diagnose, learn from, and improve after failures.
Start with a clear definition of success. In the videos, Paul Brest notes: “You can’t know whether you’re succeeding or failing unless you’re pretty clear about what outcomes you’re achieving.” Philanthropists can be especially challenged in being clear about outcomes, since they must typically consider outcomes at multiple levels: what their grantees are achieving for the people they serve, how the capacity of the grantee organizations themselves may be increasing, and whether the philanthropist and grantees are collectively achieving a broader set of outcomes for populations or systems. In another short video, Michael Steinhardt shares a great example of how the initiative he co-founded, Taglit-Birthright Israel, defined success and failure upfront so that he could know if his investment was actually making a difference.
Proven results: Michael Steinhardt gives Jewish kids the experience of a lifetime.
Measure along the way to learn and adapt. Since even the best strategies are based on an imperfect understanding of future conditions, plans and initiatives need to be regularly evaluated against new information. When the Robert Wood Johnson Foundation (RWJF) first got involved in end-of-life care, it funded a study to test whether an intervention it planned to support would result in the outcomes it desired. According to RWJF President Risa Lavizzo-Mourey, the study revealed that, “What we thought was going to happen absolutely didn’t happen,” and this allowed RWJF to change course and help advance the movement that changed the way physicians deal with death and dying. Performance measurement systems that behave more like instant feedback mechanisms than long-term evaluation studies alert philanthropies when a strategy is not working as planned, and provide the input to reflect and adapt when necessary.
Risa Lavizzo-Mourey on how early measurement redirected RWJF's strategy.
Resist seeing results as black or white. With increasing efforts to publicize failures, there is a growing pressure to label initiatives and grants as either a success or a failure. However, as President of the Silicon Valley Community Foundation Emmett Carson explains, “The reality is, very few evaluations, under the best of circumstances, are unambiguous ... There’s always some failure and there’s always some success.” Rather than simply plowing ahead with an initiative, or abandoning it, identifying which parts were successes and which were failures can help philanthropists move forward more effectively.
Emmett Carson says evaluation results are not black or white.
Create space for good failures. Just as initiatives can combine success and failure, failures can be good, bad, or somewhere in between. Many “bad failures” happen because of avoidable errors. Good failures are often the result of taking risks that could lead to transformative change. Inevitably such risks increase the chances of failure, but potentially also the chances of breakthrough success. Pierre Omidyar creates space for these types of failure by empowering each of his teams at the Omidyar Network to spend 5-10 percent of their budgets on “things that aren’t very clear that they’ll have impact.”
Take smart risks: The Omidyars' belief in innovation means every dollar may not have impact.
Talk about failure. When Paul Brest introduced The Worst Grant Contest at the Hewlett Foundation, in which staff nominate and discuss their worst grant of the year, he initially met with great resistance from some of the program staff. But over time, the contest has taken root. The program responsible for the “winning failure” gets a dinner. But the real motivator for staff, says Brest, is “the intrinsic motivation of being able to learn something and help the rest of the foundation learn something.” After a while, Brest and his colleagues realized that there was too much focus on grantee organizations that had failed, rather than on potentially broader strategy failures by the foundation. The emphasis has now shifted to how the foundation itself has failed and what it can learn from this failure. Encouraging open and purposeful conversation about failure is one of the best ways that a philanthropic—or any other—organization can get better at what it does and achieve more impact in the world.
Paul Brest incentivizes discussions about failures.
Matthew Plummer is a senior associate consultant in Bridgespan’s Boston office, where he played an integral role in assembling Bridgespan’s video collection “Conversations with Remarkable Givers.” Prior to joining Bridgespan, Matthew worked as an operations manager at McMaster-Carr Supply Company.
Posted: 3/13/2013 11:49:28 AM by Matthew Forti