Justin Sandefur is a senior fellow at the Center for Global Development. Prior to joining CGD, he spent two years as an adviser to Tanzania's national statistics office and worked as a research officer at Oxford University's Centre for the Study of African Economies. His research focuses on a wide range of topics, including education, poverty reduction, legal reform, and democratic governance.
Financial incentives may reduce teacher absence and improve student performance, but they may also lead teachers and schools to simply exaggerate attendance. Zeitlin and co-authors report on an experiment in Uganda that combined pay-for-performance for teachers with a separate experiment that enlisted local parents to independently monitor teacher absence and report back via mobile phone.
When teachers were paid for attending school, their actual attendance increased, and so did the number of false reports. But the increase in bad information was more than offset by an increase in total information from parental monitoring, providing administrators with a more reliable overall picture of teacher absence. Although the incentives induced some false reporting, the results suggest that social welfare was higher with financial incentives.
It’s no surprise that rich countries outperform poor countries on standardized tests. But if you compare kids with similar household wealth across countries, that gap disappears.
Every three years, the Program for International Student Assessment, or PISA, releases a new batch of standardized test scores comparing 15-year-olds around the world on reading, math, and science. In countries that underperform expectations, such as the United States, the rankings inevitably provoke calls for education reform to imitate the school systems of high performers. (“We should be more like Finland!” “No, Singapore!”) Jon Stewart even mocked this ritual on the Daily Show:
I always feel bad for whatever country is just above America on these lists, because invariably that country is used as a standard for just how far we have fallen as a people. Thirty-sixth, beneath the Slovak Republic. I mean, those f***ing people eat their own vomit.
Setting aside the Slovakian-American math comparison, richer countries generally do better on PISA. A lot better. Indonesia was the poorest country to administer the test in 2012, and Norway was the richest. Their performance gap is huge: Indonesian students who score extremely well, with math scores at the 90th percentile locally, would still be in the bottom half of Norwegian students. Zero Indonesian students scored high enough on reading to clear the 90th percentile in Norway.
So does that mean poor countries such as Indonesia, Peru, or Vietnam should be looking to rich countries like Norway and the United States for education policy advice? The answer depends, in part, on how much of the test-score gap between rich and poor countries you think reflects superior school systems in rich countries and how much is due to the simple fact that American and Norwegian kids are a lot richer and have innumerable advantages both in and out of school due to that wealth.
What if you compared Indonesian kids with similar wealth levels? Would Indonesia still underperform? This is the question my CGD colleagues Amanda Beatty, Lant Pritchett, and I are currently exploring in a new paper. We started off by computing the relationship between wealth and test scores within each country, as seen in the graph below. 
There’s a lot of heterogeneity across countries. In the United States, the wealth gradient in test performance is quite steep, while in Norway it is very shallow (in line with your preconceptions of America and Scandinavia). And some countries, like Vietnam, are simply off the charts, with spectacular performance given their relative poverty. But overall, it turns out that rich and poor countries are, on average, roughly on the same upward-sloping line relating household wealth to PISA scores. What does this mean? Crudely put, Indonesian and Peruvian students score about where you’d predict them to score in the United States given their household wealth. Poor countries don’t do any worse on PISA than most OECD countries once you adjust for their socio-economic demographics, and some poor countries like Vietnam do considerably better.
(Note that we’re careful not to give any causal interpretation to the relationship between wealth and scores within countries here. Our argument doesn’t require that. It’s entirely possible — indeed very likely — that household wealth proxies for lots of other factors, including household-level factors like nutrition and parental involvement, as well as within-country variation in school quality. The point is simply to ask how a child with a given wealth level is expected to score in country X, given all the advantages and disadvantages they’re likely to have at home and at school.)
To make this a little clearer, we decided to rank countries not by their average score, but by the predicted score of kids with the same wealth level in each country. (Basically, draw a vertical line through Figure 1 at a wealth score of 50, and see where countries intersect that line.) Comparing apples and apples, which countries do best?
Here the results are even more striking: If you compare students at the global median of household wealth, the test-score gap between rich and poor countries essentially disappears. There’s no correlation between a country’s average wealth and the test performance of students in that country who are at the global median.
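The mechanics of this comparison are simple to sketch. The snippet below is a toy illustration with simulated data, not our actual estimation (which, among other things, corrects for measurement error in reported wealth): fit a within-country regression of scores on household wealth, then predict the score of a student at a fixed wealth level — the "vertical line at a wealth score of 50" — in each country.

```python
import numpy as np

def predicted_score_at(wealth_target, wealth, scores):
    """Fit a within-country OLS of test scores on household wealth,
    then predict the score of a student at a given wealth level."""
    X = np.column_stack([np.ones_like(wealth), wealth])
    beta, *_ = np.linalg.lstsq(X, scores, rcond=None)
    return beta[0] + beta[1] * wealth_target

# Simulated data for two hypothetical countries
# (wealth index 0-100, PISA-style scores)
rng = np.random.default_rng(0)
wealth_a = rng.uniform(40, 95, 500)            # richer country
scores_a = 350 + 1.5 * wealth_a + rng.normal(0, 40, 500)
wealth_b = rng.uniform(5, 60, 500)             # poorer country
scores_b = 300 + 2.0 * wealth_b + rng.normal(0, 40, 500)

# Compare students at the same wealth level (50) across countries,
# rather than comparing country averages
print(predicted_score_at(50, wealth_a, scores_a))
print(predicted_score_at(50, wealth_b, scores_b))
```

In this toy setup, the poorer country has a much lower average score but a nearly identical predicted score at the common wealth level — exactly the pattern we find for countries like Vietnam in the real PISA data.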
There’s an interesting analog here to work on global income inequality. In both cases, after controlling for everything we can control for, the gaps between countries are still huge, as Michael Clemens, Claudio Montenegro, and Lant Pritchett documented in an earlier CGD paper on earnings. Likewise, Branko Milanovic has demonstrated that the vast bulk of global inequality is between countries — it’s where you’re born that largely determines your income. Similarly here, the gaps between countries remain huge even when comparing children with similar socio-economic backgrounds.
The enormous difference is that, in contrast to income, you wouldn’t necessarily maximize your test scores by choosing to go to school in the richest countries. A student with global median wealth in Turkey performs much better on reading tests than an equivalent student in Norway. And Vietnam beats out the best performers in the OECD, such as Japan and Canada.
There is one tempting interpretation of the first graph above that we would caution against. If Indonesia, Peru, and the United States are all on the same line relating wealth and test scores, you could argue that this means economic growth is the secret to better education in poor countries: as they move up the wealth gradient, scores will rise. As Ludger Woessmann and coauthors have shown for OECD countries, wealth has been associated with higher test scores within countries for a long time. But as OECD countries have gotten richer, scores haven’t gone up. We should be careful when converting cross-sectional correlations into time-series forecasts.
So why then do some systems deliver so much more learning than others? We have very little idea. At CGD, we’re getting ready to launch a new research program on what makes for an effective education system, and how to reform ineffective ones. Most of the countries we’ll be focused on — low- and lower-middle-income countries in Africa and South Asia — don’t appear in the PISA sample at all. As we start this research program, it’s daunting to realize how little we know not only about how to make reform happen, but even which systems perform well and which ones don’t. One thing is clear though: we shouldn’t assume rich countries hold all the answers.
Technical footnote: The biggest challenge in this exercise was the potential for a high degree of measurement error when asking schoolchildren about their household wealth. This turns out to be crucial. Measurement error will tend to underestimate the slope of the relationship between wealth and test scores within countries and, by failing to fully account for the wealth-score gradient, exaggerate the cross-country relationship between wealth and scores. The regressions underlying all the results shown here use an instrumental-variables approach to minimize this problem. More details are forthcoming in the full paper. If you’re curious in the meantime, see the Stata code posted here.
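The intuition behind the footnote can be shown in a minimal simulation (illustrative only, not our actual specification): classical measurement error attenuates the OLS wealth gradient toward zero, while instrumenting one noisy wealth report with a second, independently noisy report recovers the true slope.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20_000
true_wealth = rng.normal(0, 1, n)
score = 2.0 * true_wealth + rng.normal(0, 1, n)   # true gradient = 2.0

# Two independent noisy reports of the same underlying wealth
# (e.g., two different survey questions about household assets)
w1 = true_wealth + rng.normal(0, 1, n)
w2 = true_wealth + rng.normal(0, 1, n)

# Naive OLS slope using one noisy measure: attenuated toward zero
# (here roughly half the true gradient, since noise variance = signal variance)
ols = np.cov(w1, score)[0, 1] / np.var(w1)

# IV slope, instrumenting w1 with w2: the two measurement errors are
# uncorrelated, so the ratio of covariances recovers the true gradient
iv = np.cov(w2, score)[0, 1] / np.cov(w2, w1)[0, 1]

print(round(ols, 2), round(iv, 2))
```

Underestimating the within-country slope is what would mechanically shift explanatory weight onto cross-country differences, which is why correcting for it matters for the headline result.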
As African leaders meet in Washington this week, one issue is not on the agenda: the poor quality of basic economic and social data in the region. Maybe this year’s GDP re-base in Nigeria, which resulted in an 89 percent increase, was a tip-off? While inconvenient to the #AfricaAscending narrative around town, our recent work suggests that many basic data are in fact systematically distorted.
In our paper, we find that misrepresentation of national statistics in education and health does not occur merely by accident or because of a lack of analytical capacity — at least not always — but rather that systematic bias in administrative data systems stems from incentives of data producers to overstate development progress.
Administrative and Survey Data Don’t Match
Comparing administrative and survey data across 46 surveys in 21 African countries, we find a bias toward overreporting school enrollment growth in administrative data. The average change in enrollment is roughly one-third higher (3.1 percentage points) in administrative than in survey data (an optimistic bias that is completely absent in data outside Africa). Delving into the data from two of the worst offenders, Kenya and Rwanda, shows that the divergence of the administrative and survey data series coincided with the shift from bottom-up finance of education via user fees to top-down finance through per-pupil central government grants. This highlights the interdependence of public finance systems and the integrity of administrative data systems. Difference-in-differences regressions on the full sample confirm that the gap between administrative and survey data, just 2.4 percentage points before countries abolished user fees, grew significantly, by roughly 10 percentage points, afterward.
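The difference-in-differences logic here is simple enough to write out. The numbers below are stylized, loosely matching the magnitudes in the text rather than our actual estimates; the outcome is the admin-minus-survey enrollment gap in percentage points, for countries that abolished user fees ("treated") versus those that did not, before and after abolition.

```python
# Stylized admin-minus-survey enrollment gaps (percentage points).
# Illustrative numbers only, not the paper's regression output.
gap = {
    ("treated", "before"): 2.4,   # gap before user fees were abolished
    ("treated", "after"): 12.4,   # gap after the shift to per-pupil grants
    ("control", "before"): 1.0,
    ("control", "after"): 1.0,    # no change absent the financing reform
}

# Difference-in-differences: change for treated minus change for controls
did = (gap[("treated", "after")] - gap[("treated", "before")]) \
    - (gap[("control", "after")] - gap[("control", "before")])
print(did)  # the estimated effect of the financing shift, roughly 10
```

The regression version simply adds country and period fixed effects and standard errors, but the identifying comparison is the same double difference.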
Donors also play a role. In 2000, the GAVI Alliance offered eligible African countries a fixed payment per additional child immunized against diphtheria-tetanus-pertussis (DTP3), based on reports from national administrative data systems. Building on earlier analysis by Lim et al. (2008), we show evidence that this policy induced upward bias in the reported level of DTP3 coverage, amounting to a 5 percent overestimate of coverage rates across 41 African countries.
It’s Not Just Education and Health
Other work by Justin suggests that official estimates of consumer price indices have been inaccurate, and that once these inaccuracies are corrected, rates of growth and poverty reduction in Africa are modestly slower on average than published estimates based on official data.
Inaccuracies in basic data are due in part to perverse incentives created by connecting data to financial or reputational rewards without checks and balances. But the problem of inaccuracy is also related to political interference and statistical agencies that have been inadequately and inconsistently funded over the years. Together, these factors make up a political economy of bad data.
To get to a political economy of good data, our joint working group report with the African Population and Health Research Centre lays out some ideas: (i) fund more and differently; (ii) build institutions that can produce accurate, unbiased data; and (iii) prioritize the accuracy, timeliness and availability of the basic data on births and deaths; growth and poverty; sickness, safety and schooling; and land and environment, that policymakers and citizens can use to generate real progress in development.
Across multiple African countries, discrepancies between administrative data and independent household surveys suggest official statistics systematically exaggerate development progress. We provide evidence for two distinct explanations of these discrepancies.
Despite improvements in censuses and household surveys, the building blocks of national statistical systems in sub-Saharan Africa remain weak. Measurement of fundamental statistics such as births and deaths, growth and poverty, taxes and trade, land and the environment, and sickness, schooling, and safety is shaky at best.
This is a joint post with Alex Ezeh, Co-chair of the Data for African Development Working Group and Executive Director of African Population and Health Research Center.
Since the term “data revolution” was coined in the High-Level Panel report on the Post-2015 Development Agenda, there has been a flurry of activity to define, develop, and drive an agenda to transform the way development statistics are collected, used, and shared the world over. And this makes sense — assessing the new development agenda, regardless of its details, will need accurate data.
But nowhere in the world is the need for better data more urgent than in sub-Saharan Africa — the region with perhaps the most potential for progress under a new development agenda. Despite a decade of rapid economic growth in most countries, the accuracy of the most basic data indicators such as GDP, number of kids attending school, and vaccination rates remains low, and improvements have been sluggish.
Over the past year, the Center for Global Development and the African Population and Health Research Center (APHRC) co-chaired the Data for African Development Working Group to explore the root causes and challenges surrounding slow progress on data in sub-Saharan Africa and identify strategies to address them. The Working Group’s final report offers insight on where governments and donors should focus their efforts to deliver on the data revolution in the region.
Challenges with data are largely systemic and political: The challenges surrounding the production and use of basic data are often not technical, but the result of underlying political economy and systemic challenges. The Working Group identified four primary challenges: 1) national stats offices lack independence; 2) data is inaccurate; 3) donors dominate priorities; and 4) data is kept behind closed doors.
Governments and donors should focus on the “building blocks” of national statistics: There have been gains in the frequency and quality of censuses and household surveys in sub-Saharan Africa, but national statistical systems in the region remain weak. Governments and donors should focus on the “building blocks” of national statistics systems — or data intrinsically important to the calculation of almost any major economic or social welfare indicator. These include births and deaths; growth and poverty; tax and trade; sickness, schooling and safety; and land and environment. Improving the accuracy, timeliness, and availability of these statistics will be critical to the success of the post-2015 development agenda, across every sector.
Actions in pursuit of a data revolution should be country-specific and government-led: For a truly sustainable data revolution in sub-Saharan Africa, changes must be initiated and led inside governments in coordination with donors and civil society. To this end, the Working Group identified three strategies:
Fund more and fund differently by allocating more domestic funding to improving national statistics (thus reducing donor dependency) and experimenting with pay-for-performance agreements with donors to enhance mutual accountability for progress on improving the core statistical products.
Build institutions that can produce accurate, unbiased data by enhancing the functional autonomy of national statistical offices, and experiment with new institutional models like public-private partnerships to improve data collection and dissemination.
Prioritize the accuracy, timeliness, and availability of the data building blocks by building quality control mechanisms into data collection and analysis and encouraging open data.
Where do we start? Try a Data Compact: A data compact could help mobilize and focus domestic and donor funding for progress on national statistical priorities. Data compacts would allow governments and donors to express their intent to fund, and make progress on, the critical “building blocks” of a national statistics system over multiple years, with clear and verifiable measures of progress. They would also provide a country-specific framework for innovating on funding mechanisms, engaging civil society, and mobilizing new technologies for data collection and dissemination.
Bottom line: The data revolution must help modify the relationship between donors, governments, and producers of statistics to work in harmony with national statistical priorities. And both countries and donors will need to experiment with new approaches — not revert to business as usual — to truly revolutionize the way data is collected, used, and made public.
Read more about the Working Group’s findings and recommendations in the final report and brief. CGD and APHRC will continue to inform and track actions as the data revolution takes shape and we welcome your feedback.
If data wants to be free, then PovcalNet, the world’s leading dataset on global poverty, is happier today because it was recently made available for download in bulk by my guests on this week’s Wonkcast, CGD research fellow Justin Sandefur and research assistant Sarah Dykstra. Scraping the data was no easy task: it required devising code that queried the database for one answer at a time, 23 million times, over nine weeks, then reassembling the resulting 8 million data points into a single dataset. They then posted the dataset and a related paper online for the use of researchers around the world.
Justin and Sarah tell me that they were motivated to scrape the PovcalNet website in part because they needed the full dataset for their own research, and in part because they knew other researchers had a similar need. Lacking the full dataset, they and others previously had no option but to spend hours pointing and clicking, one number at a time, to get the specific information they needed. (The code needed to run the queries was beyond what we could manage here at CGD, so the pair turned to Sarah’s brother, independent programmer Benjamin Dykstra.)
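The basic pattern — query one cell at a time, then reassemble the answers into a flat table — can be sketched in a few lines. Everything here is hypothetical: the function names are invented, and the fetch step is a stub standing in for one query to the online interface, so the assembly logic runs offline.

```python
import csv
import io
import time

def fetch_headcount(country, year, poverty_line):
    """Stand-in for a single query to the web interface. In the real
    exercise each call hit the online form; here we return a dummy
    value so the surrounding logic is runnable."""
    return {"country": country, "year": year,
            "line": poverty_line, "headcount": 42.0}

def scrape(countries, years, lines, pause=0.0):
    """Query one (country, year, poverty line) cell at a time and
    collect the answers into a single list of rows."""
    rows = []
    for c in countries:
        for y in years:
            for p in lines:
                rows.append(fetch_headcount(c, y, p))
                time.sleep(pause)  # throttle requests to be polite
    return rows

# Reassemble the cells into one flat CSV table
rows = scrape(["TZA", "KEN"], [2005, 2010], [1.25, 2.00])
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["country", "year", "line", "headcount"])
writer.writeheader()
writer.writerows(rows)
print(len(rows))  # 2 countries x 2 years x 2 lines = 8 rows
```

Multiply the three loops out to every country, year, and poverty line in the database and you get a sense of why the real version took 23 million queries and nine weeks.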
Since individual data points were already online—albeit not in a readily accessible format—the project involved no “hacking.” I ask whether they tried first just asking the World Bank for the dataset. Justin explains that, "...the underlying raw data isn’t even available to many researchers within the Bank.”
“There’s a lively internal debate in the World Bank about whether or not this data should be public,” Justin tells me. “But not all data that the World Bank has are covered by the open data policy…it was pointed out to us that PovcalNet is not.”
Justin says that the entire process illustrates the importance of making research data publicly available.
“We’re living in a new era where there are a lot of people participating in this analysis and this conversation, and a million eyeballs can find lots of mistakes,” Justin says. “So let’s put all the data and the code in the public domain and open up that conversation.”
So, what exactly was the World Bank’s response to their efforts and the resulting new poverty estimates?
“Annoyance is probably the right word,” Justin says. “The stance of the research department now seems to be, reading between the lines, that ‘we don’t really trust these [new PPP] numbers, and we’ll reserve judgment on whether we should use them yet.’”
It’s an exciting story, with some unexpected twists and turns. To hear it, and learn what Justin and Sarah have planned next, tune in to the full Wonkcast.
Public-private partnerships (PPPs) in education, which use public finance to provide free or subsidized access to privately delivered education, are expanding in many developing countries, either to increase access where government capacity is limited or to improve learning outcomes, often with limited evidence of their success. This panel will bring together experts from the policy and research spheres to review what we know about the design of effective partnerships, the hazards to be avoided, and the frontiers for new research.
My guest on this week’s Wonkcast is Justin Sandefur, a research fellow at CGD whose recent work has focused on education in Kenya. One study examines the returns to private schooling, while another looks at the effects of contract teachers on student test scores. The results of these studies highlight shortcomings in public education, including failures of accountability and a dense bureaucracy.
Public vs Private Schools
Justin tells me he first became involved in Africa while working in Tanzania as a government employee on a household survey measuring poverty and agricultural statistics. His insights into how government works — or fails to work — have since shaped his research on schooling in Kenya. His CGD Working Paper, The High Return to Private Schooling in a Low-Income Country (with coauthors Tessa Bold from Goethe University Frankfurt, Mwangi Kimenyi from the Brookings Institution, and Germano Mwabu from the University of Nairobi), found that private schools in Kenya not only cost less than public schools, but produce better results.
“We focused our time estimating the impact of private schooling on test scores and proving to ourselves that going to a private school increases your test scores,” says Justin. “When we shopped that idea around in Kenya no one was surprised. But what people had a hard time believing is that these private schools really are cheaper, operating on about half the budget of the government public schools.”
I ask Justin why private schools in Kenya get better results with less money.
“That is the 64,000 dollar question,” he says. “If we knew the answer to that I think we’d have the agenda for education reform in the developing world laid out for us. The private schools are a black box and the point of this paper is to really point out how big that black box is.”
Justin tells me that parents are more involved in private schools because their money goes toward school fees. This element of bottom-up accountability, which is typically absent in public schools, could contribute to the success of private schools in Kenya, he says.
Another advantage of private schools is simply that they have found a cheap technology for teaching: employing teachers on short-term contracts at salaries far below civil service wages.
Can Government Learn From The Private Sector?
Justin and I discuss a forthcoming working paper which looks at whether the Kenyan government can adopt this technology of contract teachers and successfully scale it up nationwide in government schools.
Justin and his coauthors organized a randomized controlled trial (RCT) in Kenya that draws on prior work by Esther Duflo, Michael Kremer, and Pascaline Dupas. These earlier studies, also in Kenya, showed that employing contract teachers in public schools had a statistically significant effect on improving test scores. Based on these findings, the Ministry of Education agreed to scale up the intervention across the country. But in their new paper, Justin and his co-authors have found it matters a lot which entity is responsible for running the program. When the contract teachers are hired by an international NGO (as was also the case in the earlier pilots), the impact on test scores is significant and positive. But when the same intervention is executed by the Ministry of Education, there is no impact on test scores.
“The question of why is really difficult,” explains Justin. “There are a number of hypotheses on the table that are mostly related to the government’s ability to implement the programs in the districts.”
I end the Wonkcast by pushing Justin to discuss whether there are broader lessons from the new study. If scaling-up an intervention proven effective in an RCT means shifting from an NGO to a much larger government bureaucracy, can policymakers assume that a proven intervention will still work?
“There are some frightening implications here,” says Justin. The problem, he says, is not with RCTs, which are rightly considered the gold standard for evaluation. But as policy makers look to turn RCTs into national programs, more attention needs to be paid to the potential differences in the implementing organizations.
I’d like to thank Alexandra Gordon for serving as producer and recording engineer, and for helping to draft this post.
Education Links is a periodic summary of relevant links from RISE (Research on Improving Systems of Education), CGD’s initiative on education reform in the developing world.
Did we reach the 2015 global education goals? The UNESCO Education for All Global Monitoring Report just launched their final 2015 report (complete with slick data viz and video). There was acceleration in progress after 2000, but still some countries have a way to go, and we still don’t know enough about what kids are actually learning.
Less good news from Tanzania, where, as Justin highlights, a new data law would criminalize publishing stats that are not endorsed by the National Bureau of Statistics (including those Uwezo surveys highlighting the learning crisis).
In the US, where standardized tests are increasingly being used for teacher and school accountability, 11 teachers and educators in Atlanta have been convicted of racketeering for supporting cheating on a test, a charge that carries a sentence of up to 20 years in prison.
Looking at the outcomes of education: while it might raise average earnings, education can't cut income inequality by much, at least in the United States, because inequality is being driven by the top 1 percent.
The Global Partnership for Education (GPE) is accused of having "not substantively engaged with the issue of private education” because it is too controversial (unlike some of us, Justin).
Finally, for those of you on twitter, we thought we’d share our lists of interesting education policy and research accounts to follow — Amanda’s is here and Lee’s is here.
CGD has just adopted a policy that I believe will improve the quality and usefulness of our work. We have decided to become more transparent. Henceforth, the presumption will be that when authors post publications on cgdev.org that involve quantitative analysis, they will also post the data and computer code needed to fully reproduce their results. That way, any visitor to the web site will in principle be able to check our work. (Not that we never shared data before.)
To quote from the policy (on which, comments welcome):
CGD analyses should be acts of social science. By some definitions, a sine qua non of science is replicability. The responsibility for replicability is especially great for research that aims to influence policy and ultimately affect the lives of the poor. Bruce McCullough and Ross McKitrick put it well in their report, Check the Numbers: The Case for Due Diligence in Policy Formation:
When a piece of academic research takes on a public role, such as becoming the basis for public policy decisions, practices that obstruct independent replication, such as refusal to disclose data, or the concealment of details about computational methods, prevent the proper functioning of the scientific process and can lead to poor public decision making.
In fact, transparency has many benefits:
It makes analysis more credible.
It makes CGD more credible when it calls on other organizations, such as aid agencies, to be transparent.
Data and code are additional content, appreciated by certain audiences.
It increases citation of CGD publications by people using associated data sets.
It curates, saving work that otherwise tends to get lost as the staff turns over.
Preparing code and data for public sharing improves the quality of research: researchers find bugs.
In the short term, CGD’s leadership in transparency will differentiate it from its peers. In the long term (one hopes), CGD’s leadership will raise standards elsewhere.
For me, the most interesting point that emerged from CGD's internal discussions about this policy came from my colleague Justin Sandefur. Sometimes data sets are obtained after lengthy and delicate negotiations with officials in governments or private companies, on the condition of confidentiality. To post the raw data publicly would burn many bridges. Perhaps more importantly, promptly sharing data sets assembled at great cost would give other researchers, rivals in pursuit of publication, a free ride. They would jump on the opportunity to generate papers from others' data. And when data collection brings fewer benefits to the collector, less data will be collected. In these cases, perhaps the processed data behind a given paper, the data actually subjected to statistical analysis, can still be shared promptly, with the raw data held back for a year or so. At any rate, the point stands that there can be real trade-offs in choosing transparency, and sometimes the right choice is to be less than fully transparent. For this reason, CGD's new policy is flexible. We will be testing the frontiers of transparency in the months to come, and invite you to watch us closely.
Personally, the policy resonates with my experiences attempting to reproduce influential studies of the impact of such things as foreign aid, financial sector expansion, and microcredit. To me, it always felt important to replicate these studies in order to examine their methods closely and reach my own conclusions about interpretation. Some of the authors of the studies I examined shared their data and code fully enough that replication was easy. In other cases, reconstruction was harder.
In the case of the microcredit work, my coauthor Jonathan Morduch and I posted all the data and programs behind our attempted replications of the original studies. This eventually allowed Mark Pitt, one author of the original studies, to spot some discrepancies in our replications. This highly public revelation was not particularly pleasant for us, but it was healthy, and it served the cause of understanding the evidence on the impacts of microcredit. (For us anyway, it did not affect our conclusions and in fact strengthened them by improving our match to the original studies.)
Fundamentally, then, the new data and code transparency policy is about putting the pursuit of truth first. We believe that this step is both right in itself and strategically smart. In statistical analysis, as in software, bugs are the norm. So placing more of CGD's work in the public domain will inevitably expose mistakes. That can be a daunting prospect for an organization that prizes its reputation for high-quality analysis. But transparency serves the public good. And serving the public good is what CGD, as a charity, should do. Moreover, the success of open source projects such as Wikipedia and Android reassures us that doing the right thing is wise. The flip side of catching more mistakes is better work. And that should lead to greater impact.
More about CGD's Research Data Disclosure policy can be found here.
One of the biggest experiments in development economics is about to begin on Honduras's Northern Coast. Honduras has altered its constitution to open the way to ceding a large tract of land to build a new "Special Economic Zone", modeled on NYU economist Paul Romer's idea of charter cities -- new cities, built up from scratch, where first-world institutions and third-world immigrants can meet and do business.
CGD has close ties to this idea. Romer is a CGD non-resident fellow, and CGD president Nancy Birdsall and senior fellow Michael Clemens are both associated with the Transparency Commission that will oversee the Honduran experiment. CGD does not take institutional positions though, and as anyone who's visited an internal seminar or staff lunch knows, there's room for vigorous debate.
Back in April, the members of the Transparency Commission met here in DC, and over lunch CGD staff had a chance to hear them explain their ideas for the city and ask questions. Since then, there's been an active debate in the hallways here about the charter city model. We want to take that debate into the public domain. To kick things off, here are three questions that we think proponents of the Honduran initiative need to grapple with.
Democracy: Is the charter city model at odds with principles of accountability and democracy?
Time inconsistency: Can Honduras credibly commit to investors that the rules of the zone won't change?
History and context: Is it realistic to talk about creating an institutional blank slate in the Honduran jungle?
Our musings on these questions got a bit long for the blog, so we've posted them as a CGD essay where we explain some of our concerns in more detail and offer a couple of ideas to address them. Read on...