January 2013
The good news is that people are being more candid and open about their mistakes and field experiences. The bad news is that institutional attention spans seem to be short. Then again, there's good news about impact evaluations influencing policy. Unfortunately, there's also bad news: we couldn't include all the items we found this month. Fortunately, there's one more piece of good news: I included a cartoon explanation of statistics that was just too good to leave out. On balance? Mostly good news!
Regards,
William D. Savedoff
Senior Fellow
Center for Global Development
Learning from failure
Researchers and policymakers increasingly admit that plans don't always succeed and are learning from mistakes, unforeseen events, and outright failures. The authors of "Learning from Experiments that Never Happened" share their experience to combat publication bias while giving other researchers an idea of what can go wrong in the field. (McKenzie and Goldstein blog about this too.) Or read about the top five evaluation mistakes from Christel Vermeersch, who is seeking feedback on the World Bank's impact evaluation tool. The Millennium Challenge Corporation's openness about its recent farm extension evaluations also provides a window on how implementation can go awry. And 3ie has published field notes about studies conducted with its grants. This willingness to share problems is an encouraging sign of increasing maturity in the field.
Image: Flickr user Stefan Zabunov
European Commission less committed to evaluation?
Over the last 10 years, aid agencies have generally given greater resources and attention to improving the independence and rigor of their evaluation programs. One manifestation of this trend has been the creation of high-level evaluation departments, often reporting directly to supervisory boards. It is therefore discouraging to read a blog from ECDPM about organizational change at the European Commission's Directorate-General for Development and Cooperation (DEVCO). The blog notes that until this month, the organization's evaluation unit "reported directly to DEVCO's senior management based on best practice for safeguarding the independence of evaluations." Now the evaluation function has been relegated to a lower-level line unit.
Image: Ted Collins
Do impact evaluations ever get used?
Yes. Proposals for cash transfer programs and debates over microcredit are informed by a growing body of evidence from earlier experiences. Generating this knowledge base is the main value of impact evaluations. Nevertheless, people still like to see impact evaluations directly influence policymakers. Case in point: researchers with Innovations for Poverty Action showed how a rainfall insurance program in Ghana could help farmers deal with uncertain weather, and the government subsequently adopted the approach on a large scale. Last fall, the first payments made under this scheme helped participating farmers. Slam dunk.
Image: Rob Fuller for IPA
Resources
- Karl Hughes candidly discusses the experience of program effectiveness reviews at Oxfam, asking “When We (Rigorously) Measure Effectiveness, What Do We Find?”
- Chris Blattman asks “Does the micro evidence tell us anything important about development?”
- An impact evaluation design course for researchers, project managers and practitioners will take place March 18-22 at the Institute of Development Studies in the UK. Apply by February 1, 2013.
- MEASURE Evaluation is holding a regional impact evaluation workshop for Population, Health and Nutrition Programs in New Delhi, India, on March 18-20. Application deadline: January 25.
- “Barriers to Household Risk Management” reports on factors affecting adoption of rainfall insurance in India (also in the American Economic Journal: Applied Economics).
- 3ie is looking for an Evaluation Officer in New Delhi to review and monitor 3ie-funded studies in addition to other duties.
- The Coalition for Evidence-Based Policy offers an online workshop for policymakers on how to distinguish good evidence-based programs from everything else.
- In “Risk and evidence of bias in randomized controlled trials,” the authors argue that RCTs in economics are not providing readers with sufficient information to assess quality.
- In “Reshaping Institutions: Evidence on Aid Impacts Using a Pre-analysis Plan,” the authors show how their pre-analysis plan improved the credibility of their findings on governance reforms in Sierra Leone.
- J-PAL Global is offering an executive education course on evaluating social programs at MIT in Cambridge, MA on March 25-29. Application deadline is February 10, 2013.
- Paul Winters (American University) will present a study of Kenya’s Cash Transfer Programme for Orphans and Vulnerable Children (CT-OVC) on January 31st as part of the 3ie-IFPRI seminar series.
- The best course on statistics I've ever seen in a single cartoon. For a more traditional approach (including the cartoon), see Kevin Murphy's introduction to Bayes Rule; the rule itself is stated briefly below.
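For readers who want the gist before clicking through: Bayes Rule describes how to update a prior belief in a hypothesis H after seeing evidence E. This is the standard textbook statement, not an excerpt from either linked resource:

P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)}

A purely illustrative example with made-up numbers: if a condition affects 1 percent of a population, a test detects it 90 percent of the time, and the test falsely flags 5 percent of unaffected people, then the probability that a flagged person actually has the condition is (0.90 × 0.01) / (0.90 × 0.01 + 0.05 × 0.99) ≈ 0.15, or about 15 percent. Small priors can swamp even accurate tests, which is exactly the intuition the cartoon conveys.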
Thanks to Ted Collins for inputs and assistance in putting together this newsletter.