In the nonprofit sector, as in the for-profit world, innovation is a hot topic recognized as a powerful force for good. Unfortunately, we in the nonprofit world can't seem to agree on just what innovation is all about, and because of that, we're mystifyingly irresolute about whether evidence is innovation's friend or foe.
This puzzling fact was reinforced for me during a recent discussion about what makes a nonprofit organization "high-performance." One participant nominated innovation as a critical factor. To my astonishment, this stirred an impassioned dissent from another participant, a recognized and vocal proponent of evidence and accountability, who argued that in the nonprofit world the word "innovation" typically implies the generation of exciting new ideas, apparently free of any bothersome, killjoy demands for validation of merit.
I shouldn't have been so surprised. It was hardly the first time I had run up against the belief that evidence and innovation are sworn enemies. When I headed the Obama administration's Social Innovation Fund (SIF), people frequently challenged the accuracy of our name. Given our strong emphasis on evidence and evaluation, they would ask, shouldn't we be called the Social Evidence Fund?
There's an unspoken corollary to this belief: the idea that the demand for "proof" is just an excuse to choke off the flow of new ideas and practices. Considering that the nonprofit sector has so much riding on both innovation and evidence, it's vital to resolve whether they really are locked in eternal conflict.
Having dug deep into the question, I believe the answer is unequivocal: evidence is fundamental to innovation. Far from being an enemy, it should be the best friend any true innovation can have.
My belief is grounded in two fundamental premises. The first is that innovation has less to do with being "new" than being "better." The value of innovation is not the creation of attention-getting novelty concepts and prototypes but the development of superior practices, programs, and approaches that generate increased social impact for the cost, compared with the status quo.
The second premise is that hard evidence of relative performance is the most legitimate, productive way to determine what actually is better. There's room for qualitative considerations—expert opinions, anecdotes, leadership credentials, and the like—to help determine whether a new approach may have promise or to fill out a compelling story. But any reliable assessment of a program's relative merit must ultimately be grounded in fact.
That said, there are different types of evidence, with dramatically different levels of rigor and cost—and it's logical and appropriate that the standard of rigor and cost of evidence should vary depending on a program's stage of development and the amount of funding at stake. The standard for a brand-new, untested concept might be no more than a strong hypothesis of value and a solid implementation logic, buttressed by data on comparable programs. For an early-stage pilot, data on utilization rates, cost efficiency, and user satisfaction might be appropriate to gauge its potential. But before scaling up a multi-site, mid-stage program, it may be time to consider undertaking a randomized controlled trial.
At its best, evidence serves as innovation's good friend by stimulating continued improvement and providing potential beneficiaries, funders, and other stakeholders with an objective basis for determining whom to turn to and whom to support. In this way, evidence can not only "cull the herd" but actually propel the growth and scaling of the best innovations, enabling them over time to become the prevailing practice. In fact, that's the hopeful theory underlying the SIF.
To be sure, there are plenty of opportunities for conflict between evidence and innovation, which must be diligently managed. Potential funders may demand unrealistically rigorous standards of evidence to assess relatively immature, still-evolving programs—potentially stifling the development of promising solutions. Ill-timed, poorly executed, or inaccurately interpreted evaluation studies can also prematurely choke off development. Or backers of a program with a robust empirical basis may hesitate to invest in further improvements (that is, continued innovation) for fear of undermining the program's evidentiary support and perceived competitive advantage.
What can be done to nurture and optimize the critical relationship between innovation and evidence? Here are three recommendations:
If our society is to make progress against our most significant challenges, we need to continuously develop improved solutions that yield greater impact for the money invested. We must therefore commit to the effective, rational use of evidence to identify promising programs and approaches and, over time, greatly expand their availability. In short, innovation and evidence had better be fast friends, or we can never get where we need to be.
This work is licensed under a Creative Commons Attribution 4.0 International License.
Permissions beyond the scope of this license are available in our Terms and Conditions.