How Do We Improve Learning at Scale?

It’s well known that children in low- and middle-income countries are not learning enough, with half of children unable to read and understand a simple text by age 10. So, what can we do to change things?

There’s a growing body of evidence about approaches that have successfully improved learning (see here, for example), but these approaches are rarely taken to scale, or have more limited impacts when they are replicated or scaled up. With a bleak picture on learning outcomes globally and the clock ticking on SDG4, it’s hard to know what can be done to move the needle and to get more children acquiring basic skills—in large numbers—across the world.

A new report sheds light on these questions by identifying concrete ways that existing programs have not only improved reading outcomes, but improved them at scale. RTI International’s interim Learning at Scale report describes eight large-scale programs which show an impact on basic reading skills and investigates what made them successful (the selected programs are listed in the table below).

To be considered for inclusion in the study, programs had to meet certain criteria:

  • Effectiveness: Evidence of causal impact at scale or causal impact at pilot with evidence of effective scale-up; local demand for the program
  • Scale: Operating in most/all schools in at least two administrative divisions; at least 500 schools
  • Level of schooling: Lower primary, upper primary, and/or secondary school
  • Subject: Includes a literacy component (may include other subjects as well)
  • Geography: LMICs
  • Type of program: Aims to improve classroom teachers’ effectiveness
  • Data availability: Impact evaluation data available for analysis
  • Timeframe: Active through 2019
  • Sector: Public, private or civil society
  • Access: Key personnel and stakeholders available for interviews; schools available for site visits (in high and low-performing areas)

Table 1. Selected programs for inclusion in Learning at Scale

Program | Country | Lead implementer | Scale
Scaling-up Early Reading Intervention (SERI) | India | Room to Read | 2,662 schools in four states
Education Quality Improvement Program in Tanzania (EQUIP-T) | Tanzania | Cambridge Education/Mott MacDonald | 5,100+ schools in nine regions (63 districts)
Partnership for Education: Learning (Ghana Learning) | Ghana | FHI 360 | 7,200+ schools in 100 districts
Tusome Early Grade Reading Activity (Tusome) | Kenya | RTI International | All 24,000+ primary schools
Pakistan Reading Project (PRP) | Pakistan | International Rescue Committee (IRC) | Seven provinces (~24,000 schools)
Read India | India | Pratham | 22,173 schools in state of Karnataka*
Lecture Pour Tous | Senegal | Chemonics International | All 4,000+ schools in six regions
Northern Education Initiative Plus (NEI Plus) | Nigeria | Creative Associates | 7,000-8,000 schools (10 districts per state)
*Model is in ~250,000 schools nationwide

After an exhaustive search for programs that met these criteria, the RTI team selected these eight programs, which all met the criterion for “effectiveness”—defined more specifically as having been rigorously evaluated with an effect size of at least 0.15 standard deviations and a meaningful impact on reading ability—and exceeded the criterion for “scale.” Programs were required to have reached 500 schools, but each of these eight reached far more—from roughly 2,700 to more than 24,000 schools.

The Learning at Scale study set out to determine: (1) the kinds of instructional practices and classroom ingredients that lead to improved learning and (2) the system-level support required to effectively reach scale. Methods included classroom observations and student reading assessments, as well as interviews with teachers, head teachers, coaches, district officials, central ministry officials, and program staff. Due to COVID-19 delays, primary data collection from only three of the eight programs was included in the interim report. However, combining these data with high-level findings from program document reviews and interviews with program staff starts to paint a picture of how programs improved learning outcomes at scale.

In late November, CGD and RTI held an event to discuss the report’s findings and convene practitioners from several of the programs who shared insights about their approaches and the keys to their programs’ success. Here’s our summary version of the big takeaways so far:

What worked in the classroom

Overall, there is evidence of multiple pathways to success, with emerging commonalities across programs that are worth considering for future early reading program implementation. 

Having sufficient time to spend on teaching the building blocks of reading is essential

In all three programs for which we have classroom observation data, teachers spent the largest share of their reading lessons explicitly teaching reading. At the event, Nurudeen Lawal from the NEI Plus program in Nigeria shared that the program doubled the time spent on the task of reading compared to what was done before, and other programs similarly increased the amount of instructional time available each week for reading. Teachers were able to focus on basic reading skills (typically including a phonics-based approach), and students engaged more regularly with improved reading materials.

Teachers need supportive environments in which to practice the new skills that they are expected to employ in the classroom

Successful programs encouraged teachers to practice their newly acquired instructional approaches through decentralized trainings (with a large focus on modeling and practice, in place of more discussion-based methods). They also featured positive, collaborative, and respectful coaching and community of practice sessions. All of the program representatives at the event emphasized that support to teachers—in the form of mentoring, coaching, and continuous monitoring to adapt instructional practices as needed—was essential to their programs’ success.

Change is difficult, but structured guidance (for teachers, students, and coaches) can ease the transition to new expectations

Six of the eight programs in the study provided teachers with structured teachers’ guides, including lesson plans, which teachers noted were essential to support their daily pedagogical decision-making. These programs also used a direct instruction approach with a gradual release model (i.e., “I do, we do, you do”) throughout their lessons. Coaches used structured tools to guide classroom observations and instructional discussions during their regular monitoring sessions. Betty Temeng Mensah-Bonsu from the Ghana Learning program highlighted systematic ways of teaching reading, as well as locally developed reading materials that helped children to enjoy reading.

Ultimately, much of the evidence from these programs pointed toward a focus on better implementation of well-established practices, rather than significant innovations or new ideas about what it means to teach reading.

What worked at the system level

Programs aligned priorities with government plans and policies

Ministry officials regularly cited the importance of aligning program instructional changes with existing government plans in order to ensure buy-in at the system level. Programs also ensured regular engagement with the system by consulting with government counterparts and including them in decision-making during the planning stages, as well as throughout implementation. For example, Devyani Pershad from the Read India (Pratham) program discussed using data to clarify reading and math levels for senior officials and engaging them throughout the program cycles. Uptake was strengthened by evidence of the need for the program’s instructional changes, and expectations were communicated down the system through ministry staff.

Regular program and government monitoring reinforces the focus of the program

These successful programs made a point of working with and through government systems for program monitoring. Although much of the program monitoring was ultimately overseen by programs themselves, data were typically collected by government staff and results were regularly shared with government officials at various levels for learning and adapting throughout the program cycles.

System change won’t happen by chance; strategic and intentional transfer of responsibility from implementing partners to government counterparts is essential

All programs provided system-strengthening opportunities for education ministry staff and used them as a starting point for transferring ownership of program components. Several programs focused on doing this throughout the life of the program (e.g., the SERI India program’s “we do, you do” approach to transfer of responsibility), as opposed to handing over program components at the end of a program. Institutionalization also tended to be more successful for program components that built on existing systems, such as the use of a new curriculum.

What we don’t know yet

What happens when programs are handed over to governments?

Scaling in the short term while maintaining positive outcomes is an accomplishment, but sustainability remains elusive. As discussant Laura Savage pointed out during the event, so far the Learning at Scale programs have given us hypotheses or indications of what might make programs sustainable, but there’s no evidence of this yet.

A striking fact about the eight selected programs is that they are donor funded (which made them more likely to meet the criterion of measurable effectiveness—see below). As such, a question came up repeatedly during the event about what will happen when the donor funding runs out. Moreover, the study identifies characteristics that made programs successful but doesn’t discuss the costs of these programs or whether they could be financially sustainable for governments.

Funding is not the only concern, however. Even if funding were made available to continue programming, will governments have sufficient capacity to fully take over program implementation? Additionally, is there an aspect of programmatic success that comes from the drive of an external program, which may lose steam once responsibility is handed over to an overburdened government system with significant competing priorities?

Are programs replicable?

No one at the event implied that program features could be ‘cut and paste’ from one context to another with expectations of getting the same results. What has made these programs scalable doesn’t necessarily make them replicable, and adapting interventions to particular contexts will be necessary. For example, while structured teachers’ guides or coaching models were deemed integral components of success for many of these programs, there has been no shortage of programs using these same approaches with far less success. Therefore, simply using these components alone does not guarantee success.

But at the same time, these programs have proven to be effective at a large scale; as Ben Piper, one of the lead report authors, said, this makes these approaches some of the best options we have for dealing with poor learning outcomes. Knowing how learning can be improved is even more important now in the context of COVID-19 learning loss. While it has limitations, the study contains a wealth of information and ideas about successful program features that can be tried and adapted in different contexts.

What else could work?

The study included only programs that have been evaluated to demonstrate effectiveness in improving learning outcomes, but there may be effective programs that weren’t included due to a lack of available, rigorous impact data. One of the study’s limitations is the absence of any identified programs that were run entirely by governments, perhaps as a result of the fact that donor programs are more likely to be evaluated. Additionally, six of the eight programs have been funded by USAID and, as the report states, programs funded by other major donors—like the World Bank, Global Partnership for Education (GPE), and UNICEF—were typically not included in the study because their interventions were smaller in scale, lacked rigorous impact evaluation data, or did not show substantial impacts on learning.

One recommendation of the report is that more programs must be implemented at scale, using designs that allow their impact to be measured. Discussant Laura Savage took this further, saying that more funding needs to go into data, especially national-level data, to answer questions about what makes programs effective.

The Learning at Scale study contributes to a small but crucial evidence base about how learning outcomes can be improved at a large scale. Stay tuned for much more on Learning at Scale in 2022, including additional data from primary data collection, briefs and webinars highlighting program successes across three research areas (instructional practice, instructional support, and system support), and a final report incorporating findings across all eight programs.

Thanks to Matthew Jukes, Joe DeStefano, and Justin Sandefur for helpful comments.

Disclaimer

CGD blog posts reflect the views of the authors, drawing on prior research and experience in their areas of expertise. CGD is a nonpartisan, independent organization and does not take institutional positions.

