Is Your Program Evaluable?

Evaluation Isn’t One-Size-Fits-All

I’ve written before about the many types of evaluation and the various kinds of questions it can help answer, many of which aren’t familiar to everyday practitioners. When most folks hear “evaluation,” they think someone is asking, “Is your program effective at achieving its outcomes?” Yes, that’s one kind of question that some kinds of evaluation can help answer. But when it comes to outcomes, not every program is evaluable yet.

But there are so many more questions that can be answered with so many other types of evaluation. You can evaluate many different aspects of a program, for many different reasons. Evaluation can be used to inform or improve a program’s design, to understand how it’s being implemented, to determine whether it’s a good fit for participants’ needs, and yes, to see what change is happening in the lives of participants. Not all types of evaluation are appropriate for all programs.

The questions you can ask and answer through evaluation depend on your program’s life stage and stability and how you want to use the information you gather, among other things. As evaluation has evolved, it’s clear that not only can evaluation “determine if programs are worthwhile, but also determine the best way to produce worthwhile programs” (Trevisan & Walser, 2015, p. 14). Now, wouldn’t that be something?!

What Does it Mean to be Evaluable?

No doubt, every program can be evaluated in some way. That is, every program is evaluable. And you can even try to evaluate the outcomes of any program. That doesn’t mean you should. That doesn’t mean the evaluation will produce meaningful, useful learning that you can trust and put your weight on. Instead, there might be something else you should be evaluating first – perhaps the program’s design or implementation?

Twice, I’ve encountered nonprofits who sought out my support to design outcome measurement and management tools and processes. Before we could do that, though, we had to have a sense of what their programs could and should be able to achieve. That required understanding how their programs were being implemented and utilized. In one case, the program was operating in multiple sites, and each site was operating differently. How could we know what outcomes were applicable and meaningful when we didn’t know what the programs were doing? In the other case, the program had never quite articulated its intentions and wasn’t sure whether it was operating according to its assumptions. In both cases, we realized they weren’t ready to evaluate outcomes because there were other questions we had to answer first.

This is where Evaluability Assessment (EA) comes in . . .

What is Evaluability Assessment?

Evaluability Assessment (EA) emerged in the late 1970s and “was originally thought of as a pre-evaluation activity primarily used to determine the readiness of a program for a productive outcome evaluation” (Trevisan & Walser, 2015, p. 1). Over the years, there have been various EA models, each with its own steps. However, they’ve all shared a few key components, which can add value at many different stages of a program’s development and evaluation life cycle:

  • Defining the Program Theory – What is the program supposed to do, and how is that supposed to bring about desired change?
  • Comparing Intentions to Actions – To what degree is the program being implemented according to its theory?
  • Assessing Plausibility – Given the program design and its implementation, how likely is it that the program can achieve its intended results?
  • Recommending Improvements – What can be done to better align implementation with intentions and to increase the plausibility of achieving positive results?
  • Recommending Feasible Evaluation – Given the program design and its implementation, what are the best ways to further evaluate the program?

First Things First

I’ve said it before, and in so many different contexts: we need to ask the right questions, for the right reasons, and in the right order. First things first. What’s my motto? You cannot measure what you have not defined. And you cannot evaluate the outcomes of a program whose design and implementation aren’t clear or stable. Do you understand what your program is supposed to do? Do you know how it’s being implemented and how closely that resembles its design? Do you have reason to believe that this design can and should achieve results? Have you articulated and tested those assumptions?

Nonprofits feel a tremendous amount of external pressure and a sense of urgency when it comes to measuring outcomes. But you won’t regret spending some time on EA first: it increases the likelihood both that your program will achieve results and that an outcomes evaluation will be productive.

Next time, we’ll dig into a key component of EA – articulating your program theory – and the who, what, how, and why of that. Warning: I’m gonna talk about Logic Models and Theories of Change.


Trevisan, M. S., & Walser, T. M. (2015). Evaluability Assessment: Improving Evaluation Quality and Use. Thousand Oaks, CA: Sage.