Program Theory: How It Runs and Why It Works

This is my second post in a series about Evaluability Assessment (EA). Last time, I introduced EA and its key components. Much of what we do at The IllumiLab is driven by the same motives and processes as Evaluability Assessment, including the articulation and assessment of program theory.

Every program has a theory, whether implicit or explicit, clear or unclear, applied or not. Simply stated, a program theory articulates how a program is intended to work – what it does and how that produces change. EA makes the implicit explicit and the unclear clear.

“You cannot measure what you have not defined.”

That’s my motto! Too often in my work, I’ve encountered organizations struggling to evaluate their programs in meaningful and manageable ways, and I can trace their struggles back to an unclear program theory. They don’t know how their programs operate or what they expect them to achieve.

However, many programs will be quick to say, “We have a Logic Model!” This is when I break out my side-eye. Really? Did your grant writer create it all by their lonesome? Do you have a different one for each grant you submit? When was the last time you updated it? Does it reflect your assumptions and your practice? Are the relationships between the columns actually logical?

Evaluability Assessment requires that people with intimate knowledge of the program and with diverse perspectives work together to articulate their understanding of the program’s intentions and design, provide feedback on it, and assess the degree to which it’s actually being implemented as designed.

Drawing on Experience

There are two types of logic that should be balanced when developing a program theory: practitioner logic and research logic. Practitioner logic comprises the experiences, expertise, assumptions, and practice of program stakeholders. Involving program practitioners and other stakeholders in the articulation of program theory results in a more accurate depiction of the program, a shared understanding of intentions and practice, increased engagement and buy-in, and greater interest in program improvement and evaluation.

Gather program implementers, managers, and maybe even participants and funders. Articulate the program’s inputs, activities, outputs, and outcomes. Articulate your assumptions about how one leads to the next. I guarantee there will be people in the room with different understandings and assumptions. This is the time to get on the same page.

Drawing on Expertise

Program theories should be informed by and tested against research logic, too. Look for evidence of what has worked with similar populations, in similar settings. Look for evidence that doing X leads to Y (or doesn’t). Look for research to explain why doing X might unexpectedly result in Z. Look for research to describe specific best practices or interventions that should be incorporated into the program model.

Feedback and Reality Check

If you are involving stakeholders in the articulation of your program theory, you are likely gathering feedback along the way. You might hear, “Wouldn’t it make more sense if we did B instead of A?” or “But we don’t actually do it that way” or “It never made sense to me why we do X.” Don’t dismiss those questions. Make note.

Trevisan & Walser (2015) identify five types of questions on which we should seek feedback:

  1. Program Perspectives – These questions gather diverse perspectives on the program. How do stakeholders perceive the program, its components, and its implementation? What are the program’s goals? What should they be?
  2. Program Context – These questions seek to understand the cultural and political context surrounding the program. How do different groups perceive or experience the program? How do different groups perceive or experience evaluation of the program?
  3. Program Implementation – These questions seek to understand how well theory and reality align. How is the program being implemented? How are resources allocated? How has the program changed and why?
  4. Research Logic – These questions test your theory against research. Does research support or challenge the program model? What research could inform improvements to the model? How have programs like this been evaluated?
  5. Methodological Scoping – These questions help assess the feasibility of further evaluation. What data is available, as it relates to the program components you’ve just outlined? What data can be collected in meaningful and manageable ways? Do we trust our data?

The Bottom Line

Articulating your program’s theory is an essential first step in any type of evaluation. It’s how you build a shared understanding among your team, articulate and test your assumptions, and lay it all on the table so you can define and examine it. You cannot measure what you have not defined.

Next time, we’ll look at how to use the findings of an Evaluability Assessment.


Trevisan, M. S., & Walser, T. M. (2015). Evaluability Assessment: Improving Evaluation Quality and Use. Thousand Oaks, CA: Sage.