You call that Evaluation?

This is my second of four posts that unpack key questions and lessons from the 2016 State of Evaluation Report from Innovation Network. As much as I love this report, one thing stumps me every time. The report doesn’t define what it means by “evaluation.” When a word has the power to strike as much fear into the hearts of nonprofit leaders as “evaluation” does, I think definitions matter.

I first started wrestling with the definition of “evaluation” when I found that word in my job title nearly three years ago. I thought, “What I do is more fun, more intuitive, and more relevant than evaluation!” Clearly, I held some of the preconceived notions I mentioned a moment ago. So, what exactly are we talking about?

A Definition

An oft-cited definition from Michael Scriven, a leader in the field of evaluation, says that evaluation is the “systematic investigation of the merit, worth, or significance of any object.” This definition implies something not only about methods but also about motives.

Types of Evaluation

As straightforward as that seems, the world of evaluation is vast and evolving; there are dozens of types of evaluation, and different sources define and distinguish them in different ways. The Innovation Network report lists several: outcomes, performance measurement, process/implementation, formative, impact, and return on investment. For an overview and comparison of each type, check out this resource from the CDC.

Though they all share a similar process, they each aim to answer slightly different questions. Here are some common evaluation questions:

  • What does this population or community need?
  • Is this intervention feasible and a good fit for this problem?
  • What are we learning along the way? What can we tweak and improve?
  • What exactly did we do? How’d it go?
  • Did the desired change (as we’ve defined it) occur?

Interestingly, when most people think of “real” evaluation, they think of only one type of question:

  • To what extent did our intervention cause that change?

It’s true; answering questions like that requires more skill and rigor. In my experience, few nonprofits can and do conduct that type of evaluation. But that shouldn’t discourage or discredit efforts to answer the other kinds of questions through other forms of evaluation and learning.

The Heart of It

A focus on types, tools, and jargon, though, is putting the cart before the horse. In my opinion, the heart and soul of evaluation is curiosity. It’s about asking the right questions for the right reasons and answering them in ways that are meaningful and useful. And wouldn’t you know, there’s a name for that too – evaluative thinking.

Thomas Archibald and Jane Buckley have defined evaluative thinking as “a cognitive process in the context of evaluation, motivated by an attitude of inquisitiveness and a belief in the value of evidence, that involves skills such as identifying assumptions, posing thoughtful questions, pursuing deeper understanding through reflection and perspective-taking, and making informed decisions in preparation for action.” (The emphasis is mine.) For more on that, check out this blog post.

Now, THAT sounds fun, intuitive, and relevant! Doesn’t it?

Unfortunately, evaluation can and often does occur outside a context of evaluative thinking. When evaluation is driven by external forces (e.g., funders, accreditors, and other power-holders) or a sense of obligation, it’s often lifeless and useless (or at least not as useful as it could be). But when it’s driven by internal motives, curiosity, and a desire to learn and improve, it’s invaluable!

I think why we evaluate is as important as how we evaluate.