Views from the Center


I frequently get inquiries from organizations that recognize the importance of rigorous evaluation yet aren't quite sure how to do it. They see the growing number of random-assignment and quasi-experimental studies and are attracted to the apparent objectivity and relative certainty of quantitative work, but they are often reluctant to dive into those approaches. Sometimes organizations have reasonable concerns about costs, lack of expertise, or the applicability of such approaches to the questions they care about; these concerns can be addressed in any number of ways. Other times, the concerns reflect an unwillingness to state goals clearly, be explicit about theories of change, or put beliefs about what works to an objective test. Yet this is exactly what is at stake with evaluation: are you willing to be proven wrong? If not, no technique will ever suffice.

Here are three organizations that don't have the resources of the World Bank or the Gates Foundation, yet each has demonstrated a willingness to question its assumptions and meet the evaluation challenge in its own way.

Oxfam GB is a relatively large NGO whose evaluation approach would be recognizable to a big development bank or government agency, yet on a scale that tries to balance precision against staff time and costs. As discussed on Duncan Green's blog, Oxfam GB randomly picks about 40 projects a year and tries to infer impact using different methods. When projects aim to affect the lives of many people, they compare project beneficiaries with a comparison group; when projects are about influencing policy or empowerment, they use process tracing. Choosing projects randomly helps Oxfam GB avoid cherry-picking the most promising ones and therefore allows them to say something more general about the performance of their portfolio. It also increases the probability of discovering things that staff might not have expected. The value and use of the studies will depend, as always, on their quality, particularly the degree to which their conclusions are credible: Are the counterfactuals appropriate? Is the causal chain clearly described, and can alternative explanations be discarded?

The Inter-American Foundation (IAF) presents a very different model. The IAF started in 1969 with a "grass roots" approach that differentiated it from the prevailing development orthodoxy of the time. Researchers like Albert O. Hirschman and Judith Tendler were contracted to evaluate IAF projects and published case-study research, relying primarily on inductive rather than deductive analysis. When the IAF hired me last year to study and assess its approach to evaluation, I discovered a system serving multiple purposes: accountability to its board, reporting to Congress, and information for management. The IAF was also figuring out how to build on its inductive approach in a way that could speak to the world of deductive researchers. One of its strengths is the use of local researchers ("data verifiers") who visit IAF projects twice a year. As a result, when a project is completed, the IAF has a good concise database, along with access to someone from outside the community who has followed the project from start to finish. The IAF has begun to publish reports based on follow-up visits five years after a project's completion. Most of these studies do not have explicit counterfactuals, but when staff or external researchers write up their work, whether in books or in the organization's journal, I find it provides useful ideas and credible stories about what has or hasn't contributed to observed changes.

GlobalGiving is a very different kind of organization from Oxfam GB or the IAF. It acts more like a broker, providing information about projects to small donors who can contribute online. Its website offers reports from project leaders, which, as GlobalGiving readily acknowledges, are not necessarily a source of unbiased information. John Hecklinger, its chief program officer, told me about one of its efforts to get independent perspectives with "visitor postcards": volunteers visit groups that receive funding through GlobalGiving and send their impressions. Hecklinger recognized the benefits (at least you know the project exists) and limitations (bias from personal engagement with the project, and no counterfactual) of such an approach. He noted that postcards cannot measure impact and talked about looking for ways to improve both feedback and impact measurement. One intriguing approach relies on stories that people write. Based on responses to open-ended questions, for example, GlobalGiving analyzed whether an afternoon football (soccer) program for girls was promoting self-confidence. I could imagine using this kind of information (with an appropriate comparison group) to see whether the stories the girls write at the beginning and end of the program change in ways that can be related to empowerment.

We still need more impact evaluations that use random assignment and quasi-experimental techniques. That's why every organization should contribute to generating this kind of knowledge, both by conducting such studies themselves and by supporting independent organizations that do this research. In fact, this was the core rationale for creating 3ie, and it is the reason all development agencies and NGOs should become members and provide it with funds.

At the same time, organizations need their own feedback mechanisms that avoid certain basic problems, especially selection bias and subjective bias. The more the merrier. So when an organization asks how it can make its evaluation process more rigorous, there are lots of answers. From my perspective, the key is to be willing to be proven wrong and to find ways to test your assumptions. Sometimes that involves collecting information you already gather, but in a more systematic way that lets you derive conclusions at relatively low cost. Atul Gawande's admonition in his book Better is to count something if you want to improve your performance. Other times, you need to get someone else interested in your project. Maybe the question you're asking is just too big, so team up with researchers and apply for a grant from 3ie, the World Bank, or a private foundation. There's so much to learn and so many ways we could do better.


CGD blog posts reflect the views of the authors, drawing on prior research and experience in their areas of expertise. CGD is a nonpartisan, independent organization and does not take institutional positions.