BLOG POST

Nine Cents Well Spent: Maximizing PASEC's Value for African Education


The PASEC education assessment generates benefits that far exceed its costs through four channels: (1) triggering policy reform, (2) guiding education spending, (3) enabling research, and (4) informing donor programmes. But arguing that something is worth funding is different from arguing that it is well run. PASEC faces real challenges: rising costs, delayed reports, uneven data access, and a growing gap between what it tries to do and what it can realistically deliver. If the programme is to survive the withdrawal of financial support from the Agence Française de Développement (AFD) and secure new funding, it needs to show that it can evolve.

We made the economic case for PASEC in our first blog post, so this one explores how to maximise its value across each channel. Our ideas draw on comparisons with other assessment programmes, on conversations with stakeholders across governments and donor agencies, and on practical experience working with the programme. None require reinventing PASEC. Most require making deliberate choices about what the programme should and should not be, and how to make its core missions as effective as possible.

1. Maximize the PASEC shock

The reform shock is PASEC's most powerful channel. When results reveal uncomfortable truths about an education system, they can trigger a political response that leads to large-scale reform. Following Brazil's last-place finish in the 2000 PISA round and Peru’s in 2012, shock-inspired reforms in both countries produced impressive learning gains over the following decade. The best examples in Africa to date are Niger, following the 2014 PASEC round, and Côte d'Ivoire, following the 2019 PASEC round.

The shock mechanism depends on several conditions that PASEC can influence:

Speed matters most. Education ministers rotate frequently in Francophone Africa, as they do in Latin America. Policy windows open and close quickly. A PASEC report that arrives two or three years after data collection may find that the reformist minister who commissioned it has moved on and that the political opening has passed. PISA publishes its main results roughly 11 months after the end of data collection. TIMSS and PIRLS follow a similar timeline. PASEC should aim for a two-track model: a first set of key findings, presented as short policy briefs with clear visuals, released within six to twelve months of data collection. The full technical reports, which require more time for psychometric analysis and quality assurance, would follow later. The point is to ensure that results enter the policy conversation when they are most relevant. A media strategy modelled on PISA's, which blankets the airwaves with test results, commentary, and insights from key education policy figures and researchers as soon as the first results are available, would have a big payoff.

Format matters too. PASEC’s reports are comprehensive, technically rigorous documents. They are also long and largely inaccessible to the policymakers and journalists who are the programme's most important audience. A communications strategy that produces targeted, audience-specific outputs (data visualisations, country-level summaries, comparison tools) would multiply the programme's impact at relatively low cost. Brazil's IDEB translates complex assessment data into a single index for every school and municipality in the country. PISA's country notes are two-page documents that ministers can read in the car on the way to a press conference. PASEC does not need to copy these models, but it needs to take the communications challenge as seriously as it takes the measurement challenge.

Cross-country comparison is the trigger. There is evidence from PISA and elsewhere that cross-national assessments are politically more potent than domestic ones precisely because they expose performance relative to peers. This is what causes the shock. PASEC should lean into this comparative dimension: presenting results in ways that make cross-country patterns immediately visible, highlighting which countries are improving and which are falling behind, and producing materials that national media can pick up without needing a statistics degree to interpret.

Post-publication follow-up closes the loop. A PASEC shock is only valuable if it leads somewhere. The programme has experimented with roadmaps in some countries, translating results into concrete policy actions with follow-up mechanisms. Scaling this approach, perhaps through structured partnerships with ministries and development partners in the months following publication, would increase the likelihood that the results lead to reform rather than sit on a shelf. Having a panel of experts lined up to make the case for specific reforms implemented in the countries making the most progress would help inspire and guide reformist ministers.

2. Make PASEC data more useful for planners

Education planners in PASEC countries allocate roughly $14.5 billion per year across competing priorities. PASEC data help them make better decisions by providing diagnostic information on where learning is and is not happening. But the utility of the data depends on how they are packaged and delivered.

Subnational disaggregation is what planners need. National averages tell a minister whether the system is improving overall, but don’t tell a regional director where to target resources. Several PASEC countries have requested larger sample sizes to allow for subnational breakdowns, and this is where PASEC data become directly actionable for planning. The trade-off is cost: larger sample sizes are more expensive. PASEC needs to be strategic about where subnational disaggregation adds the most value, rather than uniformly expanding sample sizes. In countries where regional disparities are large and well-documented, such as Nigeria, even modest disaggregation can sharpen resource allocation. In smaller or more homogeneous systems, national estimates may be sufficient.

Sampling strategies deserve fresh scrutiny. In some countries, sample sizes have grown over successive PASEC cycles, driven partly by disaggregation requests and partly by high intra-class correlations (the degree to which students within the same school perform similarly), which reduce the statistical efficiency of school-based sampling. Understanding why these correlations are high, and whether they reflect real features of the education system or artefacts of test design, could lead to more efficient sampling without sacrificing precision.
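The link between intra-class correlation and sampling efficiency can be made concrete with the standard Kish design-effect formula. The sketch below is purely illustrative: the ICC values, sample size, and cluster size are hypothetical, not PASEC's actual design parameters.

```python
# Illustrative only: how intra-class correlation (ICC) shrinks the effective
# sample size of a school-based (cluster) survey. All numeric inputs below
# are hypothetical, not PASEC's actual design parameters.

def design_effect(icc: float, cluster_size: int) -> float:
    """Kish design effect for single-stage cluster sampling: 1 + (m - 1) * ICC."""
    return 1 + (cluster_size - 1) * icc

def effective_sample_size(n_students: int, icc: float, cluster_size: int) -> float:
    """Number of independent observations the clustered sample is worth."""
    return n_students / design_effect(icc, cluster_size)

# A sample of 4,000 students tested in school clusters of 20:
low = effective_sample_size(4000, icc=0.15, cluster_size=20)   # ~1,039
high = effective_sample_size(4000, icc=0.45, cluster_size=20)  # ~419

print(f"Effective n at ICC=0.15: {low:.0f}")
print(f"Effective n at ICC=0.45: {high:.0f}")
```

The point of the sketch: when students within a school perform very similarly, adding more students per school buys little extra precision, so a design with more schools and fewer students per school can achieve the same precision at lower cost.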

Availability and accessibility of results matter. Sector planning teams and district education officers rarely read 300-page technical reports. The PASEC programme could produce country-specific planning briefs presented in user-friendly, accessible formats: tables showing regional performance, trend data, and correlations with observable inputs like teacher qualifications or textbook availability. These low-cost outputs would make the same data more useful.

Interoperability between PASEC and national data systems would multiply the value. Education Management Information Systems (EMIS) across the region are often focused on counting inputs rather than measuring outcomes. PASEC learning data, linked to EMIS administrative records, would create a much richer picture for planners. This is technically feasible and has been done in other contexts (Brazil's Censo Escolar is linked to SAEB assessment data), but requires deliberate investment in interoperability.
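The kind of linkage described above is, at its core, a merge on a common school identifier. The sketch below shows the idea in pandas; every column name ("school_id", "mean_reading_score", "textbooks_per_pupil") is hypothetical, and real PASEC and EMIS files would use their own identifiers, likely requiring a crosswalk table.

```python
# Sketch of linking school-level assessment results to EMIS administrative
# records. Column names and values are hypothetical placeholders.
import pandas as pd

pasec = pd.DataFrame({
    "school_id": ["SN-001", "SN-002", "SN-003"],
    "mean_reading_score": [512.3, 478.9, 430.1],
})
emis = pd.DataFrame({
    "school_id": ["SN-001", "SN-002", "SN-004"],
    "textbooks_per_pupil": [0.8, 0.3, 1.1],
    "qualified_teachers_pct": [75, 40, 90],
})

# A left merge keeps every assessed school; the indicator column flags
# schools that could not be matched in EMIS (a common linkage problem).
linked = pasec.merge(emis, on="school_id", how="left", indicator=True)
unmatched = linked[linked["_merge"] == "left_only"]

print(linked)
print(f"{len(unmatched)} assessed school(s) missing from EMIS")
```

In practice, the hard part is not the merge but maintaining stable, shared school identifiers across systems, which is exactly the "deliberate investment in interoperability" the paragraph above calls for.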

3. Spur more research

For many questions, PASEC is the only available dataset on education quality in Francophone Africa. This region produces a fraction of the research output of comparably sized Anglophone countries. At least 20 published studies have used PASEC 2014 or 2019 microdata, and the pace is accelerating. But the availability of data remains uneven, which limits the research value PASEC generates.

The principle is simple: Publicly funded PASEC data are a public good. Researchers report delays in obtaining datasets, inconsistent documentation, and restricted access to some variables. Datasets should be made available within a defined timeline after quality checks, along with clear documentation, codebooks, and reproducibility standards. PISA makes all microdata available for download shortly after the results are disseminated. TIMSS and PIRLS do the same through the IEA's data repository. PASEC should follow this model.

Better documentation would lower the barrier to entry. Many potential PASEC data users, particularly researchers based in Francophone Africa, face practical obstacles: unclear variable definitions, missing codebooks, and inconsistent coding across waves. A modest investment in data curation—standardised variable names, trilingual documentation, harmonised datasets across cycles—would significantly expand the research user base.

Partnerships with regional universities could build a research ecosystem. PASEC data are currently used mostly by researchers at institutions in Europe and North America. Building relationships with universities in Dakar, Abidjan, Ouagadougou, and Kinshasa through data workshops, joint research programmes, or thesis partnerships would help develop a Francophone African community of researchers working on education evidence. This serves both the research channel and the broader capacity strengthening mission.

A research programme would focus attention on the highest-value questions. Rather than waiting for external researchers to discover the data, the PASEC programme could identify priority research questions emerging from each cycle and actively commission or encourage studies on those topics. The new African Fellows program, launched by Stanford professor Eric Hanushek and the Yidan Foundation, is a natural fit for this kind of partnership. The programme can link sub-Saharan Africa's most promising young education economists with high-priority research topics and built-in funding. Some international assessments do this through competitive research grants or partnerships with academic networks. The cost is small relative to the visibility and policy relevance it generates.

4. Serve donors better

Collectively, international donors channel roughly $1.8 billion per year to education in PASEC countries. PASEC data are already embedded in World Bank project appraisals, Global Partnership for Education (GPE) country programme documents, and AFD project frameworks. Strengthening this channel means using PASEC data to inform programme design, monitoring, and evaluation.

Timeliness is critical for donors, too. Donor project cycles are typically four to five years. If PASEC results arrive too late in the project cycle, they cannot inform mid-term reviews or the design of the next phases. The faster publication schedule discussed above would serve donors as much as it serves governments.

Cost transparency strengthens the case for funding. The PASEC secretariat centrally manages about 63 percent of the total programme costs. Greater transparency about how these funds are allocated, including the share spent on external technical partners for psychometrics and test development, would strengthen accountability and help build the case for continued donor support. Donors are more likely to fund a programme that can show exactly what each dollar buys.

Building in-house technical capacity reduces long-term costs. A significant share of PASEC's central budget goes to external contractors for psychometric analysis and test development. Over time, strengthening this capacity within the secretariat would reduce dependency on expensive external support, lower costs, and give donors confidence in the programme's sustainability.

Alignment with donor results frameworks matters as well. PASEC could be more proactive in working with donors to ensure indicators, benchmarks, and reporting timelines align with the results frameworks of major education programmes in the region. This does not mean tailoring the assessment to donor preferences, but it does mean making it easier for donors to use PASEC data in the formats they need.

The final question: what should PASEC be?

All four channels depend on a question that PASEC has not fully answered: what is the programme's core mission?

PASEC is currently several things at once: a cross-national assessment, a capacity strengthening programme for national evaluation teams, a substitute for national assessments in countries that lack them, and a vehicle for SDG monitoring. Each role is valuable. The problem is that pursuing all of them simultaneously at its current scale and level of financing creates tensions that are becoming harder to manage.

If PASEC's primary mission is to trigger policy reform and inform public debate (channel 1), then speed and communications matter more than comprehensiveness. If the primary mission is to serve as a planning tool (channel 2), then large sample sizes and subnational disaggregation make sense, but this drives up costs. If enabling research is a priority (channel 3), then open data and documentation are essential investments. If serving donors is central (channel 4), then alignment with donor timelines and reporting formats matters most.

These missions are not mutually exclusive, but PASEC cannot pursue all of them at the same level of ambition with its current resources. The most successful international assessments have stayed focused. PIRLS has measured reading at one grade level for over two decades, resisting pressure to expand. Several years ago, PISA tried to add a household component to reach students who were no longer in secondary school, but retreated after finding the costs and administrative complexity too great, restoring its focus to its core mission: assessing the reading, math, and science skills of 15-year-old students.

PASEC, by contrast, has expanded from one grade level and two subjects in 2014 to three grade levels and six subject-grade combinations in 2025, while also adding teacher assessments, parent surveys, and contextual questionnaires. Data on teacher quality and parental attitudes, involvement, and education spending are all extremely valuable, but this raises the cost of data collection and spreads the programme thin.

PASEC today has two clear areas of comparative advantage. First, it is the only measure of early grade literacy and numeracy that has statistical validity across countries and over time. Oral testing is inherently time-consuming and costly. Other available assessments cannot match the psychometric qualities and administration protocols that make PASEC’s second-grade oral tests valid for programme evaluations and global monitoring. In the absence of a clear commitment to PASEC’s expansion, the UN’s SDG Technical Committee dropped the measurement of grade 2 skills for global SDG monitoring in 2025. Given the donor focus on foundational skills and the large volume of funding at that level, there is a compelling case for building on PASEC’s capacity despite the high costs of oral testing.

The second area is PASEC's growing country coverage. It has expanded steadily beyond Francophone Africa and now includes Lusophone Africa (Mozambique and São Tomé) and, as of the 2025 cycle, Anglophone Nigeria. With Nigeria's inclusion, PASEC 2025 will generate the first comparable learning data for 55 percent of sub-Saharan Africa's children. Anyone looking for the most cost-effective way to expand learning measurement in Africa would be hard-pressed to find a better strategy than building on PASEC, with an immediate focus on getting as many countries in Anglophone Africa as possible to join the 2028 cycle.

Articulating these and other priorities in dialogue with the multilateral banks, bilateral donors, and major foundations could lead PASEC to a clearer, more strategic identity that would enable better decisions about what to measure, how to report, how much to spend, and what to ask donors to fund. It would also make it easier to communicate the programme's value to both governments and funders.

The path forward

None of these reforms requires PASEC to become a different programme. They require it to become a more deliberate one: faster at delivering results, more open with its data, more strategic about communications, and with clearer international support for its mission.

The timing is urgent. AFD's and Swiss DDC’s withdrawal from secretariat funding creates a financing gap that will need to be filled. The World Bank, GPE, the Gates Foundation, bilateral supporters, and PASEC's member governments all have a stake in the programme's survival. But donors are more likely to step in if they see a programme that is reforming itself, can articulate what it delivers and at what cost, and has a credible plan to do more with what it has.

The economic case for PASEC is strong. At nine cents per child per year, PASEC is one of the most cost-effective investments available in African education. The question now is whether the programme and its stakeholders can act quickly enough to protect that investment and make it even more valuable.

The authors are grateful to Jack Rossiter, Luis Crouch, Abdullah Ferdous, and Clio Dinthilac for their ideas and suggestions.

DISCLAIMER & PERMISSIONS

CGD's publications reflect the views of the authors, drawing on prior research and experience in their areas of expertise. CGD is a nonpartisan, independent organization and does not take institutional positions. You may use and disseminate CGD's publications under these conditions.


Thumbnail image by: GPE/Rodrig Mbock/flickr