October 2013

I get the sense that the methodology debates over impact evaluation in development policy are entering a new phase. Broad-brush arguments and attacks on straw figures are giving way to a new crop of analytical and empirical studies that help us see which approaches are more useful in particular contexts and for particular questions. The evidence? Papers analyzing tradeoffs between different sources of bias; a Colombian official discussing growing demand for evaluation; and historical lessons from a half century of evaluation efforts in the United States. We may also be entering a new “age of transparency” for impact evaluation, with real-time peer review and preregistration becoming the norm. Certainly, the many amusing misinterpretations of data will not disappear overnight, but someday we’ll get there.
Regards,

William D. Savedoff
Senior Fellow
Center for Global Development
My bias is bigger than yours?

In Context Matters for Size: Why External Validity Claims and Development Practice Don't Mix, Lant Pritchett and Justin Sandefur address a critical question in the interpretation of impact evaluations: will policymakers make better decisions using impact estimates from simple local studies or by generalizing from rigorous studies conducted elsewhere? Their analysis of the literature on class size effects and the gains from private schooling shows that simple local studies are probably the better guide. Sandefur elaborates in his blog post, The Parable of the Visiting Impact Evaluation Expert. Another recent working paper, from Jonathan Morduch and co-authors, "Substitution Bias and External Validity: Why an Innovative Anti-Poverty Program Showed No Net Impact," highlights the roles of substitution bias and dropout bias in shaping evaluation results and external validity.

Sinergia raises the evidence banner in Colombia

At the CGD-3ie impact evaluation conference in July, Orlando Gracia, Director of Monitoring and Evaluation in charge of Colombia’s public policy evaluation system, Sinergia, discussed the challenges of generating and using evidence. Gracia noted the growing demand for impact evaluations, described how his office has responded, and highlighted the challenges Colombia faces in establishing a “culture of evaluation.” Sinergia was created in 1991 but didn’t publish its first impact evaluations until 2002. It is described in a World Bank paper and assessed in a skeptical but positive review by Robert Klitgaard.

A practical history of random assignment

In a new book, Fighting for Reliable Evidence, Judy Gueron and Howard Rolston tell the story of how random assignment became a useful and accepted tool for assessing social policy in the United States. Tracing the use of random assignment from the income maintenance and work support experiments of the 1960s and 1970s, the authors argue that widespread acceptance began after Congress allowed states to experiment with welfare programs in the 1980s. As scholars and practitioners in the study of welfare reform, the authors draw on their personal experiences to describe how theoretical, political, and practical problems were addressed. The story of random assignment in the United States continues today with organizations like the Coalition for Evidence-Based Policy and J-PAL's new regional office for North America.