Recently, I sent out the final Evaluation Gap Update – a newsletter about impact evaluations and the institutions that fund them, implement them, or are supposed to be influenced by them. After 10 years, it seemed the right time to move on to other projects, particularly since numerous other resources have sprung up over the decade (many listed below!). Yet some of the pushback against the growth of impact evaluations worries me. Some people say too many impact evaluations are being conducted (despite the continuing need for the information they provide). Others claim that impact evaluations are irrelevant (a view based on a faulty model of how policymaking actually happens).
We started the Evaluation Gap Updates in 2005 to accompany the work of CGD's Evaluation Gap Working Group, which concluded in 2006 with its report "When Will We Ever Learn?" At that time, we documented how few impact evaluations were being conducted on public policies in low- and middle-income countries and recommended greater investment in this kind of research. Several foundations and a few aid agencies took up the challenge and created 3ie. In this way, our work joined existing initiatives in developing countries, research centers, and aid agencies that were tackling the same problem. Today, more impact evaluations are being conducted than ever before.
This very success in mobilizing resources for learning has made the field a target for those who argue that we now have too many impact evaluations. Yet the number of studies remains small relative to the questions we are asking and to the myriad programs underway around the world across many sectors. We certainly need other kinds of studies and different approaches to policy, but the specific need for impact evaluations is, in my view, nowhere near being met. If anything, the world needs to commit more resources to impact evaluation than it does today.
The other criticism I hear is that impact evaluations are not relevant to policy. This may be true if you expect a one-to-one correspondence between a particular study and a specific decision – granted, occasionally you can find such a link. But evaluations rarely influence policy in so direct a way. Rather, their influence works through a complex process of interpretation and accumulating knowledge. Studies enter a social space of debate, alongside other kinds of information, and gradually alter the frames within which people think and act. In other words, for the topics these impact evaluations address, better understanding filters into discussions and from there into the minds of policymakers, agency staff, and project implementers.
Even though the evaluation gap is closing and our Updates are finished, I believe the value of impact evaluations will be increasingly recognized. In part that is because I recall interviewing an evaluator with many years of experience who assured me that fluctuating support for evaluation was less like a pendulum and more like a spiral – each cycle leaves us at a somewhat higher level of capacity and learning. I'm also heartened by the growth in the number and quality of information sources that I perused for the Evaluation Gap Updates. The list below includes links to many good newsletters, blogs, and databases that I have used and that can help you stay current in this dynamic field. And you can always follow what my colleagues and I are doing at the Center by subscribing to other related CGD newsletters and checking our CGD Policy Blogs.
Newsletters about impact evaluation
Blogs with postings about impact evaluation
A list of 18 searchable impact evaluation databases and systematic reviews can be found here.
3ie has published a Working Paper that lists over 45 databases, search engines, journal collections, and websites used to populate its impact evaluation database.