The New USAID Evaluation Policy is Not Getting Nearly Enough Attention

February 01, 2011
This is a joint post with Rita Perakis.

USAID’s new evaluation policy, announced by Raj Shah at a CGD event on January 19, and written about by Bill Savedoff already on this site here, is not getting nearly enough attention. It does not just outline a new policy; it amounts to fostering a new culture of transparency and learning.

In a presentation on the new policy hosted yesterday by Carol Lancaster, Dean of the Georgetown School of Foreign Service, Ruth Levine of USAID said the new policy responds to the “need to learn” and to “generate accountability,” noting there can be tension between the two.

Here are things to like about it beyond what Bill already highlighted, with some notes of caution (the “buts” below):

  • An astonishing 3 percent of program funds is meant to be spent on evaluation. Given poor incentives in an organization whose mainstream activity is implementing, not learning (as Ruth Levine, who authored the USAID policy, made clear some years ago here), this kind of crude quantity rule is a good idea. At least it ensures that lack of resources won’t be a constraint. But money is only an input, of course, and can be spent without generating real learning. Hence the need for other “bright line” (quoting Levine) rules.
  • The policy ensures that there will be a goodly number of rigorous (as in randomized controlled trials) impact evaluations, though reasonably enough most evaluations will be “performance evaluations” (see the policy for definitions). That’s our conclusion based on the rule that heads of missions are supposed to identify at least one opportunity for an impact evaluation covering “each development objective” in their three- to five-year plans, which sounds like one impact evaluation a year, per country mission.
  • All above-average-size projects for a particular operating unit are to be evaluated (mostly with performance evaluations).
  • Most evaluation will be contracted out to external third parties, or to grantees. But: this is waived whenever the head of an operating unit decides USAID staff can do it. And: let’s hope the money is not all spent on one or two indefinite quantity contracts with contractors who specialize in evaluation. Unless the rule is that a firm that specializes in evaluation cannot also be an implementer?
  • Evaluation designs are to be registered and shared upfront with country stakeholders and implementing partners. I hope this means enabling replication, especially of analyses of the results of randomized controlled trials. But: this happens “except in unusual circumstances.” It would be good to know what examples of exceptional circumstances the authors had in mind.

All this strikes me as a big step ahead of the World Bank, where the Independent Evaluation Group has done much good and frank work in the last decade, and where many research and operational staff are doing or overseeing rigorous impact evaluations and quasi-experiments. But is there a well-thought-through policy at the Bank about priorities for evaluation? About who should pay for what, Bank or country borrower? About what kind of evaluation for what programs and projects? I don’t think so. (I’d be pleased to be wrong; if the “policy” is to evaluate everything, that’s a recipe for publication bias toward what seems to work and ignoring everything else.)

Because the Bank does spend a lot of (trust fund) money on rigorous impact evaluation, it would be nice if it finally succumbed and joined 3ie. It would bring experience and expertise, and of course ought to pay some serious dues, to help subsidize developing countries in financing evaluation of their non-donor-funded programs and to contribute to the global public good the organization represents, for example by encouraging replication in multiple settings and thus external validity in assessments of promising interventions.

And let me not harangue only the World Bank. Consider the Millennium Villages program: much smaller, but ambitious and visible, with potentially important lessons for the development community. Contrast the approach USAID has set out, with its emphasis on third-party and independent evaluation, with the internal evaluation to which the MV implementers have, so far, confined themselves. For the unintended optimism about results that this approach may be inviting, see Michael Clemens’s assessment here.
