October 10, 2012

On Sports and Performance Assessment





This week Matt Forti is excited to welcome Kevin Rafter, manager for research and evaluation at the James Irvine Foundation. In this guest post, Kevin discusses the challenge of assessing performance in philanthropy, and shares how Irvine has tackled this challenge in a pragmatic, learning-centered fashion. 

The James Irvine Foundation is the largest multipurpose foundation in California. Established 75 years ago, the Foundation focuses the majority of its work on three program areas: Arts, Youth, and California Democracy. As manager for research and evaluation at the Foundation, I lead evaluations across all of our program areas as well as our annual foundation-wide performance assessments. I spend a lot of time thinking about measurement in philanthropy.

As I get caught up in Bay Area baseball excitement, with both the Giants and A’s in the playoffs, I have been thinking about several interesting parallels between the way we assess performance in sports and the approach we have used at Irvine to assess our performance as a private foundation. The idea began with a sense of envy: sports offers such a variety of scores, statistics, and rankings, all kinds of data you can use to describe performance and measure achievement. The popularity of sabermetrics in baseball illustrates the value of all these numbers. In philanthropy, we have far fewer common measures of performance, particularly in fields like arts and civic engagement where Irvine works.

Another interesting contrast with sports is the structure of games and seasons, which provide clear endpoints for updating measures of performance for players and teams. The end of a season offers the opportunity to take stock of teams and players before starting over the next year. In contrast, there are few clear endpoints in the work that foundations support. Philanthropy tackles big, audacious goals that take a long time to realize. Furthermore, foundation endowments free us from market constraints, so we lack most of these useful time boundaries and defined measures.

In several ways, our annual foundation-wide assessment framework has allowed us to address these challenges to measuring foundation performance. We use the framework, developed with our Board of Directors after a strategic planning process in 2003, to assess progress towards advancing Irvine's mission and our organizational goals. The framework has six areas divided into two broad categories – program impact and institutional effectiveness. An overview of the framework is available on our website and was recently discussed in an SSIR article entitled “Assessing One’s Own Performance.”

Comparison is key to any kind of evaluation or assessment. In sports, we see this all the time in the rankings and analysis of different teams and players. As a multipurpose foundation, we lack clear common measures of outcomes for our work. The easiest things to count, the grant dollars we award, are poor proxies for the outcomes we are working towards. It is also unrealistic to compare progress across the domains of our different program areas. Instead of comparing apples and oranges, we look at a common set of questions across the programs – What is the context? Are we making progress? What did we learn? What do our partners think? Answering these kinds of overarching questions (using both qualitative and quantitative measures) enables us to understand our performance as a whole rather than as a series of parts, and focuses us on learning and adaptation.

Reviewing our performance with the board through an Annual Performance Report (composed of our answers to these questions, plus more traditional grantmaking summaries) has created a time structure for our work. We know that our program work proceeds along independent timelines, with goals that will take years to realize. In this context, the annual reporting calendar can feel arbitrary, perhaps even unfair. Nonetheless, we have found it valuable to take stock each year and to measure our progress against short-term indicators rather than waiting for final results. Over time, our annual reports provide a record of progress and have helped us see where we stand in each of our program areas, much the way a team takes stock of its performance at the end of a season. There is a period of assessment and reckoning on the results of a given year, when we ask: What’s working well? Where do we need to change our approach? Then before long we turn our attention to the next season, always thinking of how we can improve our performance.

