Independent research & practical ideas for global prosperity 

Evaluation Gap Update
December 2013


I might scream the next time someone says “there’s no magic bullet.” If one existed, we wouldn’t call it “magic” … though we might call it a “miracle.” Impact evaluations are neither magical nor miraculous. But they are a necessary part of our portfolio for assessing the effectiveness of programs (like a sanitation campaign in Maharashtra State) and for improving the information that goes into organizational learning – whether for community groups in Bihar, a large philanthropic foundation, or something as big and cumbersome and politicized as the U.S. government. In each case, being explicit about counterfactuals is critical to separating fact from fiction – even when it comes to taking candy from a baby.


William D. Savedoff
Senior Fellow
Center for Global Development

Toilets Make Us Taller?

In “Village sanitation externalities and children's human capital,” Jeffrey Hammer and Dean Spears estimate the impact of the Total Sanitation Campaign in India on children’s growth. They estimate the campaign increased the height of four-year-olds, on average, by 1.3 cm. The paper also measures positive externalities and addresses methodological questions related to random assignment studies. Victoria Fan and Rifaiyat Mahbub provide context for this study in How a Toilet Makes Everyone Taller by discussing complementary evidence from cross-country, longitudinal, and cost-effectiveness analyses.


Local Learning: Stories from India

Vijayendra Rao describes how a social observatory approach to monitoring and evaluation in India’s self-help groups leads to “Learning by Doing” in The Indian Express – drawing on material from the book Localizing Development: Does Participation Work? The examples show how groups are creating faster feedback loops with more useful information by incorporating approaches commonly used in impact evaluations. Rao writes: “The aim is to balance long-term learning with quick turnaround studies that can inform everyday decision-making.”


Evidence Body Building: Practical Evaluation Strategies

The U.S. government recently published Common Evidence Guidelines, developed by the Institute of Education Sciences (IES) and the National Science Foundation (NSF). The Guidelines “identify the spectrum of study types that contribute to development and testing of interventions and strategies, and … specify expectations for the contributions of each type of study.” Drawing on these guidelines, the Coalition for Evidence-Based Policy has published a six-page paper, Practical Evaluation Strategies for Building a Body of Proven-Effective Social Programs: Suggestions for Research and Program Funders, to help policymakers understand the uses of impact evaluations.



If there is anything you think we should know about or would like to recommend for the next Evaluation Gap Update, please send it to [email protected].