The movement to shift public and philanthropic funding to support "what works" has made tremendous gains in the last few years. A growing number of high-quality programs—e.g., nurse home visitation for low-income pregnant women, multidimensional treatment foster care for chronically delinquent youth, and pregnancy prevention for at-risk teenagers—have been tested in rigorous trials and found to deliver superior results for society. Government innovators are creating or redirecting funding streams to support these types of interventions. Far-sighted philanthropists are helping to pave the way for this transformation.
However, this progress will be increasingly precarious if we focus only on the "what"—i.e., the technical parameters of interventions in terms of target recipients, dosage, duration, staffing ratios, and so on. This emphasis must be counterbalanced and enriched with greater awareness of the "how"—i.e., how real people in real organizations need to work to replicate not just the evidence-based programs and practices but also the results that are ultimately the point.
Education reformers have come to recognize that the impact of a given set of academic standards or curricula on student learning ultimately depends on the effectiveness of the teacher using them in the classroom. The same holds true for social programs: the best interventions must be carried out by living, breathing social workers, counselors, and case managers, all of whom, like teachers, can be more or less effective at their jobs.
Indeed, as implementation guru Dean Fixsen and his collaborators have pointed out, "In human services, practitioners are the intervention. Evidence-based practices and programs inform when and how they interact with consumers and stakeholders, but it is the person (the practitioner) who delivers the intervention through his or her words and actions."
Who are these practitioners? What are their credentials and capabilities? How are they being recruited, trained, coached, and assessed? How are their individual-level results feeding back into and refining the organization’s overall approach to its work?
While many nonprofit organizations say they are providing evidence-based interventions, a much smaller subset has actually developed, applied, and continued to refine answers to these questions in ways that let them replicate and improve upon the original results.
The superior performance achieved by these organizations should not be taken for granted. Too many government agencies are asking nonprofits to deliver evidence-based programs and practices without appreciating what this entails or paying for what it costs to do so. This makes it difficult for these organizations to maintain, let alone enhance, the capacity and infrastructure needed to reproduce the results.
This failure on the part of government funders to come to terms with what it really takes in turn allows other service providers to claim—either through naiveté or cynicism—that they too are offering the intervention in question, when in fact their efforts are falling far short of what is needed.
We have to strike a better balance between the "what" and the "how" of evidence-based programs and practices. How can nonprofits that have made the investments and done the hard work to replicate these interventions best navigate in the current environment? What about those organizations at earlier stages of development? What needs to happen for government agencies to discern which nonprofits really are in a position to deliver evidence-based programs and practices—and to fund them accordingly? I’d welcome your take on these issues.