New Hewlett Foundation Working Paper on Evaluation

The William and Flora Hewlett Foundation has just released a working paper entitled "Evaluation Principles and Practices." With this document and a prior companion publication, "Outcome Focused Grantmaking: A Hard-headed Approach to Soft-hearted Goals," the Hewlett Foundation has—with admirable transparency—placed its cards on the table with respect to how it seeks to carry out its work. Other foundations would help themselves and the field by following suit.

(In the interest of disclosure I should note that the Hewlett Foundation is a funder of The Bridgespan Group, and that Fay Twersky, who co-authored the paper with Karen Lindblom, is a friend from whom I have learned much about evaluation over the years.)

The first two principles articulated in the paper are especially noteworthy:

  1. "We lead with purpose. We design evaluation with actions and decisions in mind. We ask, 'How and when will we use the information that comes from this evaluation?' By anticipating our information needs, we are more likely to design and commission evaluations that will be useful and used. It is all too common in the sector for evaluations to be commissioned without a clear purpose, and then to be shelved without generating useful insights. We do not want to fall into that trap."
  2. "Evaluation is fundamentally a learning process. As we engage in evaluation planning, implementation, and use of results, we actively learn and adapt. Evaluative thinking and planning inform strategy development and target setting. They help clarify evidence and assumptions that undergird our approach. As we implement our strategies, we use evaluation as a key vehicle for learning, bringing new insights to our work and the work of others."

These two principles are reciprocal. The first emphasizes that information gleaned from evaluations will be used for decisions and actions; the second that the point of evaluation is learning, adaptation, and course correction. These two points alone, if systematically followed, would help foundations avoid what in my experience are the biggest pitfalls for foundations in this area: undertaking evaluations that, for all the expense and effort involved, have no practical impact, and treating an initial theory of change as sacrosanct instead of something to be tested, refined, and strengthened with data and reflection.

The authors present the paper "not as a blueprint, but in a spirit of collegiality and an interest in contributing to others’ efforts and continuing our collective dialogue about evaluation practice." To honor this spirit, let me flag two questions that came to mind as I read it.

First, as foundations become more strategic and develop their own increasingly elaborate outcome goals, logic models, and evaluation plans, how can they resist the temptation to see and treat their grantees instrumentally, simply as means to the ends the foundation is advancing? The Hewlett Foundation has a track record and philosophy of providing general operating support when grantees' goals are highly aligned with its own, one major way for a foundation to help ensure that "the field" it needs this year will still be there when it is needed in years to come. But Hewlett is an outlier in using this approach.

Second, how can foundations mitigate the powerful incentives that grantees have to share information in ways that maximize the appearance of impact and ROI for the funding they have received, thereby undermining genuine learning, reflection, and improvement—for foundations and grantees alike? Hewlett started a trend among foundations a few years ago by publicizing at least some self-identified failures in grant-making. For grantees, however, acknowledging failure is typically not a recipe for securing more funding from foundations.

I’d welcome your comments on the Hewlett paper and the challenges I’ve flagged above.

Posted: 2/6/2013 12:05:36 PM by Daniel Stid


Comments
Gary
I like those guiding principles. We need to do continuous research and continuous improvement. Not many people are willing to do outcome research or product development research. The funding cycles and PR machines foster instant excitement to put on TV. We spent a year in the homes of families with a child who has a serious disability. We asked in-depth questions about needs and desires for support. We spent over 7,000 hours gathering data and compiling it. The needs listed by most are daily, practical, and rarely require highly skilled professionals. However, few groups provide those services. They are not flashy trips to Disney or acts requiring a lot of money. They tend to require friends and family who are good neighbors with practical assistance. They are chronic, often daily, and behind the scenes.

Now to recruit, train, supervise and mobilize folks to do the stuff and provide these simple services. That will require staff and funding.
2/6/2013 4:22:32 PM