This week, I’m delighted to present a Q&A with Evaluations Manager Nancy Boyer at FIRST® (For Inspiration and Recognition of Science and Technology), a New Hampshire-based nonprofit that inspires young people to be science and technology leaders through hands-on, mentor-based K-12 programs. Under Nancy’s leadership, FIRST recently completed a six-month effort to refine its theory of change, logic model, and measurement and evaluation plan. The Q&A below shares how FIRST approached this task, what resulted, and what advice the organization would offer the field.
Q: Please tell us about your role at FIRST® and your background.
A: I started at FIRST in July as Evaluations Manager. I’m responsible for our data collection efforts, internal and external evaluations, and pretty much everything else that relates to measurement. Prior to joining FIRST, I worked at a smaller nonprofit doing evaluation and program management; part of my job was to develop their theory of change and logic model, so I had some experience in this work.
Q: Why did FIRST embark on an effort to refine its theory of change?
A: For a few reasons. First, we were undertaking a strategic planning process and quickly recognized there were a variety of views around what we wanted to focus on and hold ourselves accountable to. Second, we were about to undertake our most rigorous external evaluation yet — so we felt it was important to clarify our theory of change before this began. Third, with me coming on board, it felt like a good time for a fresh set of eyes to look at our theory of change.
Q: What did you do to get started?
A: The first step was for our board and senior leaders to decide what our ultimate outcome should be. They landed on an “increased likelihood that our youth would declare a STEM [science, technology, engineering and math] major in college.” I took this goal and attempted to draw out how our programs and resources lined up with it. I started by looking at old versions of our theory of change and logic model, spoke with staff, reviewed qualitative data from mentors, coaches and alumni, and went back to our prior surveys and external evaluations. Early on in this process I put a draft down on paper, and kept refining that draft after each conversation, each document review, and so on.
Q: How did you know whether your theory of change and logic model were plausible?
A: I did a thorough literature review to determine what it actually takes to get youth into STEM majors, and we had technical experts from the evaluation field and academia (who were simultaneously advising on our external evaluation) review our theory of change and logic model. Through this process we discovered, for instance, that a student’s intention in high school to major in STEM may be a stronger predictor of successfully earning a STEM degree than GPA or SAT scores. We also learned that youth are unlikely to choose STEM as a major if they aren’t interested in it by 8th grade, and that certain interventions (hands-on activities, mentorship, group learning, and so on) increase the odds of that interest taking hold. All of this influenced our final versions.
Q: Can you share some examples of how you refined your theory of change and logic model coming out of this process?
A: Most importantly, FIRST now has one clear longer-term outcome that we see as our endpoint, rather than several. Also, the research strengthened our belief that youth who progress through all of our programs may be more likely to experience stronger effects, so we are focusing our measurement and evaluation efforts on assessing exposure to programs, and working to improve the accessibility of each program to youth. Finally, through this process we brought to the forefront the research-based best practices that should result in participants reaching the long-term goal.
Q: How did you then use this work to improve your measurement and evaluation plan?
A: Most importantly, there was a heightened recognition that we needed to start tracking data at the student level — demographics; participation rates; changes in STEM interest, awareness, and knowledge; etc. Historically our surveys were completed by mentors and coaches who would assess the experience of teams [of youth] they worked with. This will be a big change for us. Also, this exercise really helped us prioritize among all the possible indicators we could measure, and to determine the most appropriate role for internal data collection versus external evaluation.
Q: How long did all this work take and how resource intensive was it?
A: The work took place over about six months. My role began two months in, once leadership determined what FIRST wanted to hold itself accountable to. Over the course of the latter four months, I would estimate it took about 25 percent of my time. It is really a process with lots of iteration — you can’t sit down and knock it all out at once! There were no other costs for the research or external review.
Q: Having gone through the process, what advice would you have for other nonprofits?
A: The hardest part is getting started. I began the process by drawing out how I thought the programs worked by using boxes and arrows to help me think about how it all hung together. I then used this as a guide to test it out with others and refine as I gathered more information.
I also think it is valuable for an organization to start with what you are really trying to achieve, and then work backwards to figure out what you need to do to get there. Otherwise you can fall victim to assuming that what you’re doing is always the right set of activities.
Further, it is so important to inform this work from many different sources. I spoke with senior staff, program staff, and technical experts; read open-ended comments from evaluations completed by coaches and mentors; and reviewed the literature. I also drew on all the data we had collected in the past, using it to confirm my original assumptions about how FIRST works and to make refinements to the model.
Finally, the logic model and theory of change need to be thought of as living documents that will continue to be refined as the organization learns through measurement, evaluation and feedback.
Definition of terms used in this post
Theory of change: an articulation of what an organization holds itself accountable for achieving (the benefits it intends to create for its target population) and how it will get there (its activities, resources, and context)
Logic model: a translation of the theory of change into measurable inputs, outputs, and outcomes
Measurement and evaluation plan: an articulation of how the organization will measure over the coming years, both through internal measurement and external evaluation. This plan typically includes key data sources/instruments, frequency and timing of collection, roles and responsibilities, estimated costs, and key assumptions tested/questions answered.
This work is licensed under a Creative Commons Attribution 4.0 International License.
Permissions beyond the scope of this license are available in our Terms and Conditions.