Global Health Policy Blog

This October 7 memo from Peter Orszag is interesting not only for its emphasis on evaluation, but also for its use of a carrot approach instead of (in addition to?) a stick approach to getting the participation of the various agencies and bureaus of the US government. (Thanks to Mattias Lundberg for flagging this memo for me.)

Beginning during the Second World War, with a periodic surge of interest every few decades, the US government has attempted to improve governmental decision-making through the use of evidence-based evaluation. The current initiative announced by Orszag in this memo constitutes a resuscitation of this laudable public sector objective. Strong features of the memo are its promotion of studies that evaluate multiple alternative intervention approaches against one another and its exhortation for improved “rigor”.

Indeed, the memo uses the word “rigor” or “rigorous” a total of nineteen times. For me, an essential ingredient of a “rigorous” evaluation design is an explicit strategy for identifying a counterfactual to the program being evaluated – what would have occurred without the program. In order of increasing rigor, such a strategy might use a before-after comparison, a comparison of a group that did not receive the intervention to one that did, or a randomized assignment of people to receive or not receive the program. Yet the memo never uses any of the words “counterfactual,” “matched,” or “random,” instead leaving it up to each bureau to define the characteristics of a “rigorous” evaluation.

Furthermore, the memo omits any reference to defining a set of priority objectives that can be measured by agencies and programs of similar mission (e.g., the more than 20 U.S. government entities involved in development and foreign assistance) in order to aggregate and compare impact. And it neglects to urge that programs compare alternatives in terms of their costs as well as their effects. The memo only mentions the word “cost” as an attribute of the evaluation studies to be proposed, not as an object of evaluation in and of itself. Again it will be up to the responding bureaus to decide whether the cost of a program should enter into judgments of its relative “worth” compared to alternative programs.

I wonder how the foreign assistance agencies mentioned above will respond. The Millennium Challenge Corporation has a great website full of interesting evidence available to the public. In contrast, PEPFAR is rumored to have completed studies of the cost of delivering antiretroviral therapy in six countries, but the studies have not been released to the public. This is particularly strange in light of the Congressional mandate that they report the costs of their programs to Congress by September 30, 2009, as David Wendt and I blogged here. Did OGAC make that deadline? If so, why has the information been kept under wraps?

And how about the parts of DoD that deliver foreign assistance? Will they also be producing evaluations? If so, what criteria will they use? Hearts and minds won? Or perhaps something more concrete, like wells dug per thousand dollars? And back to that aggregation and comparability point – if we can’t report out an aggregate impact of our collective efforts, how do we continue to sustain Americans’ support for foreign assistance? And if we can’t compare across like-missioned agencies, how can we appropriately reduce the fragmentation?

Disclaimer

CGD blog posts reflect the views of the authors, drawing on prior research and experience in their areas of expertise. CGD is a nonpartisan, independent organization and does not take institutional positions.