Recently, I was asked for advice by someone who will be running a workshop attended by people who implement and evaluate programs. She asked me to help her anticipate the main objections raised against doing impact evaluations (evaluations that measure how much of an outcome can be attributed to a specific intervention) and to suggest possible responses. The FAQs on our Evaluation Gap site offer some guidance, but in answering her question, I realized that five particular objections come up over and over again. They are:

Objection #1: “We already spend a lot on evaluation”

This may be true. Most organizations do spend money on evaluations, and many of these evaluations are very useful for improving operations and monitoring outputs. The problem is that most of those studies don’t measure impact. So an organization may know that it implements its programs well, but it doesn’t know whether its programs have the desired outcomes. A second common problem is that even when impact evaluations are undertaken, their rigor and quality may be poor. When we reviewed impact evaluations back in 2004, it was depressingly common to find a paragraph in the methodology section explaining that conclusions were compromised by the lack of baseline data. So, yes, your organization spends on evaluation, but it may not be commissioning impact evaluations, and it may not be getting good ones.

Objection #2: “Impact evaluation methods can’t be applied in our field”

Debates over the best methodology are interminable when they are argued in the abstract. The only time you can decide if an impact evaluation can or cannot be conducted is when you have identified a particular question in a specific situation. (Thanks to Stef Bertozzi for pointing this out to me years ago and to Michael Clemens who illustrates this point with respect to a specific prominent initiative, the Millennium Villages Project).

Researchers have demonstrated considerable creativity in developing ways to study programs that address such diverse issues as corruption, teacher absenteeism, women’s empowerment and ethnic fragmentation. The first step is to identify the question. The second step is to figure out which method will give you the most convincing answer. For impact evaluations, that usually involves making a serious effort to compare outcomes from your program with some alternative – a counterfactual that you can either directly observe or plausibly construct.

Objection #3: “Impact evaluations cost too much”

Relative to what? The cost of an impact evaluation should be judged relative to the value of the information it will produce. So, a $2 million evaluation of a $500,000 program might be extremely cost-effective – a bargain – if the study helps policymakers decide whether or not to scale up into a billion dollar national program. This also means that impact evaluations should not be required of every program – rather they should be commissioned strategically to assess those programs that are unproven and are either widely used or are new and promising. Being selective in this way makes the overall budget for impact evaluation manageable relative to the overall budget for operations.

Objection #4: “We know that our programs work so it would be a waste of money”

If you really know that your program works, then it would be wasteful to conduct impact evaluations. We don’t need a study to know that feeding starving people will keep them alive. But most social programs aren’t this obvious. For years, job training programs in the U.S. were thought to be highly successful because of their good placement rates – until impact evaluations showed that placement rates were higher for comparable people who didn’t participate. Initiatives like conditional cash transfer programs that are widely supported today were initially the subject of considerable worry that they would lead, for example, to dependence, increased alcohol consumption, or violence against women. Only because of good evaluations has it been possible to allay these fears and document the broad benefits. As with medical treatments, we need to have more humility about social programs and responsibly assess whether they are truly beneficial or cause harm relative to alternatives.

Objection #5: “Impact evaluations don’t affect policy decisions”

Evidence from impact evaluations is only one of many factors that influence policy decisions. If you think that the passage from evidence to action is linear and direct, you will almost always be disappointed. But the influence of impact evaluations is more complex than that. The questions they answer shape public debates over appropriate and effective policies. They also provide a base of information that is available when critical moments or opportunities arise to influence key choices. Rigorous impact evaluations have a much longer shelf-life than you’d think, while studies that are less rigorous or less conclusive quickly fade. Certainly more can be done to encourage the use of evidence in policymaking but this is a case where more and better supply can, I believe, have an indirect and ultimately decisive influence over the course of time.

My last suggestion for the workshop? Foster a collaborative atmosphere by posing problems as a shared challenge to which you are seeking solutions, and be honest about your own weaknesses and failures. People usually respond to such admissions by feeling safer about sharing their own experiences, doubts, and mistakes. After all, we’re human beings, and that’s how we learn.