Hot Topic, Cool Heads: Impact Evaluation Debated at CGD-3ie Conference

July 24, 2013

Believe it or not, one of the hottest topics in Washington DC last week wasn’t the weather but impact evaluations. Our meeting rooms hit capacity when more than 100 people registered for “Impact Evaluations: Can we learn more? Better?”—a conference co-hosted by the Center for Global Development and the International Initiative for Impact Evaluation (3ie).

The conference was an opportunity to take stock of the current production of studies that aim to attribute changes in outcomes to particular interventions. In 2006, the Center published a working group report arguing that too few good-quality impact evaluations were being conducted, a shortfall it called the “evaluation gap.” In response to that report, 3ie was created in 2009. Now, four years later, the time seemed ripe to look at what has happened and to consider what else might be done to ensure that good evidence is available and used to improve public policy.


Speakers in the morning showed that, in fact, a great deal more good research is being done. The number of impact evaluations published more than tripled between 2007 and 2011. The total, about 120 in 2011, is still far fewer than is probably needed, considering that more than 100 countries are working in more than a dozen sectors, each with numerous interventions worth assessing.

Source: 3ie Impact Evaluation Database, accessed June 2013.


Also in the morning:

  • Nancy Birdsall challenged participants to question whether impact evaluations are addressing important questions, highlighted the value of 3ie’s independence as a funder, and asked why the World Bank hasn’t become a 3ie member yet;
  • I described how impact evaluation work has evolved, provided context for the methodological disputes, and proposed some collective actions that would further build the evidence base;
  • Suzanne Duryea described a cultural shift in international organizations, with proposals for doing good evaluations becoming a normal part of project preparation;
  • Scott Rozelle described how research standards have become stronger and how the Chinese government approaches linking evidence to policy;
  • Justin Sandefur summarized what we have and haven’t learned about interventions in education and demonstrated how questions of internal and external validity should influence choices over which kinds of studies to do and how to interpret them; and
  • John Hoddinott offered insights from the literature on nutritional interventions and encouraged greater humility and care in interpreting and using studies.


The afternoon was dedicated to small-group discussions about the challenges of using evidence from impact evaluations in public policy decisions. Howard White (3ie) and Orlando Gracia (from Colombia’s National Planning Department) kicked off the discussion by outlining the problems and opportunities they have identified regarding the use of evidence. After participants deliberated in small groups and then shared ideas in plenary, two important themes emerged: the need for researchers to work harder at ensuring the relevance of their studies, and the importance of bringing policymakers into the process of defining questions and designing studies. David McKenzie (World Bank), Annette Brown (3ie), and Richard Manning (3ie) closed the day with reflections on what the international community, and 3ie in particular, could do to make this evidence-to-policy process work better.


The Center will use these results to revisit some of the recommendations from the 2006 working group report, and 3ie’s Board will use them to inform its ongoing strategy discussions. In a field that often seems primed for tempers to flare, especially when the weather is hot, I particularly appreciated receiving this note from one of the participants:

One extremely refreshing aspect of the day: We spent several hours with a big, diverse group discussing the relationship between rigorous evidence and policy formulation without it degenerating, as it so often does, into (a) anti-intellectual bashing of academics and the irrelevance of research, or (b) a knee-jerk presumption that we researchers know the answers, and the policymakers just need to listen to us. That's a precious and delicate balance to preserve.

For my co-host Howard White’s reflections on the event, view his blog here.

