Four Keys to Nonprofit Effectiveness That I've Learned Through Blogging

05/23/2013 | 3.5 mins |

I started this blog in April 2011 to share perspectives on two intersecting trends. The first was the mounting pressure on human service nonprofits due to the fiscal shakeout at all levels of government. The second trend was the coincidental opportunity to shift more of society's scarce resources to the programs and providers delivering superior results for the people they serve.

This will be my valedictory post on these topics. This week I am taking on a role as a senior fellow at the Hewlett Foundation, where I will be leading the development of a potential foundation initiative to support the health of democracy in the US. I have appreciated the chance to engage with a wide range of people on this blog via comments, tweets, guest posts and counter posts, not to mention old-fashioned conversations and debates that have sprung up around various topics.

While I started Cliff Notes to communicate my point of view, I quickly realized that perhaps the biggest benefit of blogging is the opportunity to better understand and learn from the perspectives of others. With that in mind, here are a few of the ways in which my own thinking has evolved over the past two years in response to the engagement with you all.

Leadership is the key to nonprofit performance measurement and improvement. At one point, I was convinced that getting the right measures and processes in place was the key to improving performance; now I realize leadership and the culture that stems from it is the key. When an organization's leaders are committed to, and its culture reinforces, the importance of measurement and improvement, a range of metrics and processes can work. But when the commitment and reinforcement aren't there, the best measurement systems will not move the needle.

We need to appreciate and make better use of the variation in performance within programs. When we speak of a government program failing to show impact, e.g., Head Start, we are often speaking of a common funding stream for programs operated by many different government and nonprofit providers. In Head Start's case, there are more than 1,750 programs across the country. Some are truly failing and should be defunded. Many are in the middle with room for improvement. Some are superb. Instead of arguing about the entire funding stream based on the aggregate results across this distribution, which masks excellence and mediocrity alike, let's understand what the best implementers are doing and work to spread their practices to support more beneficiaries more effectively.

How something is implemented is as important as what is being implemented. I used to think that the void was in our understanding of "what works": that as we accumulated more experimental evidence on the efficacy of different interventions, we could use the validated solutions to solve chronic social problems. I have come to grasp that ensuring fidelity of implementation of "evidence-based" solutions is just as critical to replicating the results, and fidelity is in fact very hard to achieve at a substantial scale. Not impossible, just really difficult. I have also come to appreciate that a range of nonprofits and local government agencies, not just the developer of a given solution, can replicate programs with fidelity if they are challenged, supported, and funded to do so.

We should not shy away from practical experimentation with pay-for-success. I have been a vocal skeptic of pay-for-success schemes, especially social impact bonds, as new applications have come into vogue, and I continue to think that some of the more elaborate designs floated by advocates are inherently impractical. But I have also come to recognize that many first-rate practitioners across state and local governments, nonprofit providers, and financial intermediaries are developing more practical permutations of this approach. And I readily acknowledge that the current system of funding and delivering social services is completely screwed up. So now I say, let's do some experiments with an eye toward learning from them. What have we got to lose?

Thanks for reading over the past two years. Those are my big lessons learned. What are yours?

And please follow me at @Daniel_Stid on Twitter for future updates about my new work.

This work is licensed under a Creative Commons Attribution 4.0 International License.
Permissions beyond the scope of this license are available in our Terms and Conditions.
