This blog originally appeared on the Stanford Social Innovation Review website. It is co-authored by Bridgespan Associate Consultant Colin Murphy.
With President Obama recently inaugurated for his second term, it is worth asking what made his campaign succeed in the face of such strong economic and political headwinds. Nearly every analysis we’ve read suggests that the use of data and analytics was a key factor.
Nonprofits can learn a lot from the way the Obama campaign approached performance measurement. Although the campaign’s resources dwarfed those of the typical nonprofit, the measurement practices it followed mirror those of high-performing organizations.
Focus on cost per outcome. Dan Wagner, the campaign’s chief analytics officer and the man credited with much of the success of Obama’s data team, considered his scope “the study and practice of resource optimization for the purpose of...earning votes more efficiently.” With this mandate, the campaign’s advertising team bought ads on programs that offered the greatest number of persuadable voters per dollar, instead of simply trying to reach the biggest audience. This practice led to unorthodox ad buys in smaller markets that diverged from the strategy of the Romney campaign.
High-performing nonprofits have a similarly relentless focus on improving their productivity, defined as cost to achieve their primary outcome. For instance, Jumpstart, an early education nonprofit, defines its success as cost per child to achieve proven gains in school readiness. By standardizing best practices, investing in good overhead, and using measurement to learn and adjust, Jumpstart and others achieve sustained improvement in the one measure that best captures what they are aiming to achieve.
Tap into the best available evidence and expertise when designing programs. When Obama volunteers in swing states knocked on doors, they read from a script that asked potential voters either to describe their plan to get to the polls or to sign a small voter commitment card with a picture of Obama. Both techniques were drawn from social science research about what actually gets people to take action. In fact, the campaign solicited advice from a team of behavioral scientists, including Professor Richard Thaler at the University of Chicago, co-author of the much-discussed 2008 book Nudge.
High-performing nonprofits are also constantly scouring the research, keeping in contact with evaluators and other experts, and ensuring that their practices and programs integrate the best knowledge from the field—all of which can help improve the quality of their work.
Segment and target. According to one account, the campaign learned some important lessons from looking closely at its data. Its data system could assemble individual profiles of voters and donors, allowing for an unprecedented level of “micro-targeting.” For instance, the campaign found that George Clooney had a strong influence among 40- to 49-year-old women, the demographic group most likely to hand over cash. It therefore offered a chance to dine in Hollywood with Clooney and Obama—raising huge sums of money. The campaign then replicated the event on the East Coast with Sarah Jessica Parker, a celebrity with similar appeal to the same demographic.
Nonprofits shouldn’t just measure outcomes. They also need to measure inputs and outputs, such as demographic information on their constituents. High-performing nonprofits go further by analyzing the relationships among these inputs, outputs, and outcomes—a practice often overlooked in the end-of-year reporting rush. Thoughtful analysis and segmentation can allow leaders to see which types of interventions work best for which groups of beneficiaries, and ultimately to make data-driven decisions that can improve their impact.
Invest in a cross-functional data system. Before the Obama campaign even got underway, the Democratic National Committee invested in a data system that connected its voter database to the Obama campaign’s. By doing so, it learned who had volunteered, made a donation, and visited the campaign website—data that informed the kinds of segmenting and targeting activities described above.
High-performing nonprofits make use of all the data at their fingertips to manage and improve their programs. When its performance management data system can integrate program data with data from government surveys, volunteers, peers, and the like, a nonprofit can achieve a much more nuanced understanding of how to reach constituents and create impact.
Make measurement a priority. Obama’s internal data science team was reported to be more than 10 times the size of Romney’s, whose campaign outsourced some of its analysis to less-responsive consulting firms. After painful losses for Democrats in the 2010 midterm elections, the Obama campaign concluded that a stronger investment in data science would be critical; it made the difficult decision to invest more resources there and less elsewhere.
Most nonprofits see measurement as a discretionary investment that can be delayed or eliminated in tough times. But many of today’s most effective nonprofits became high-performing in part by making the tough decision to invest in data systems, measurement staff, and evaluation, even when it might mean having less available for current services.
By following these measurement practices, the Obama campaign focused its resources on the most effective interventions, made smart allocation decisions, and adjusted rapidly as the context changed. One telling example of the latter: Late in the campaign, Obama made a highly successful appearance on the social networking website Reddit—which many of the President’s senior aides had never heard of—because the data team had determined that its users represented key turnout targets.
The Obama campaign took what author Sasha Issenberg, who closely observed the campaign’s data strategy, called “a decisive break with 20th-century tools for tracking public opinion.” What do you believe it will take for nonprofits to follow a similar course in their measurement approaches?