Using the data we have: Tracking Foundational Learning for Better Advocacy and Action

19 Sep 2025
Image:  Association for the Development of Education in Africa

On a continent where many children complete primary school without acquiring basic literacy and numeracy skills, effective tracking of foundational learning remains an urgent necessity. As African nations strive to meet the ambitions of the Sustainable Development Goals (SDGs) and the African Union’s Continental Education Strategy for Africa (CESA) 2026–2035, the demand for actionable and credible data on learning has never been greater.

Despite increased political commitment to improving foundational learning on the continent, assessment teams face a serious dilemma. While assessments are increasing in number and complexity, many are unsustainable. In addition, important questions remain: Do these assessments actually inform education policy decisions? Do they help improve foundational learning? More importantly, are they building local skills or making governments reliant on external expertise?

Comparing learning outcomes across assessments is difficult, as different assessments:

  1. Do not measure the same things: not all assessments measure the same skills. Some prioritize content knowledge, while others focus on skills and competencies.
  2. Are not comparable over time: many national assessments are not designed to be psychometrically comparable across rounds.
  3. Are not comparable between countries: different countries’ assessments test different skills at different grades, and difficulty levels vary significantly, making it challenging to learn from or benchmark against other countries.

Systematic consultation and technical work have been ongoing for some time to harmonize assessment design and administration, and to make assessment data comparable. This blog provides an update on two innovative options that help make the most of existing data in the short term to answer simple questions such as: “Are my country’s learning outcomes improving?” or “How does this country fare against others in the same region?”

These approaches were presented and discussed at a recent workshop in Nairobi, co-convened by the Association for the Development of Education in Africa (ADEA), Human Capital Africa, and the Foundational Literacy and Numeracy Hub (FLN Hub), and supported by the Gates Foundation. Together, these approaches explore how to use existing data more effectively to track progress at two critical stages of primary education: the early grades and the end of primary school. We believe these innovations are technically feasible and, crucially, have the potential to catalyze change, strengthen advocacy, and secure political commitment for foundational learning.

Why innovation in learning measurement matters

Official education statistics often mask the actual state of learning. Many assessments fail to provide comparable or timely data. Tracking progress becomes even more difficult in low-performing systems, where the majority of children may score below the minimum proficiency thresholds. Compounding this, many assessments do not adequately define ‘minimum proficiency’, making interpretation beyond technical audiences difficult. This limits our collective ability to track learning and advocate effectively for foundational learning reforms.

Yet there is hope. Several recent innovations show promise in addressing these gaps, particularly for advocacy. These innovations do not seek to replace long-term investments in high-quality national assessments but rather offer a complementary pathway for countries to “do better with the data they have” and communicate more clearly whether children in school are learning and where the gaps lie.

Harmonizing the patchwork: a new approach to comparable proficiency data

During the workshop, Martin Gustafsson from Stellenbosch University presented innovative work on a hybrid harmonization method that aggregates proficiency data from a wide range of assessments to produce a more coherent picture of learning at the end of primary school across Africa. This work updates and expands on previous efforts, such as those by Angrist et al. (2021) and the World Bank and the UNESCO Institute for Statistics (UIS), by applying judgment-informed adjustments to align data across different programs and grades.
 
The approach includes:

  • Using published assessment statistics even when underlying microdata are not available
  • Adjusting for differences in test difficulty and grade level
  • Focusing particularly on the zero-skill end of distributions, where comparability is stronger
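To make the idea of judgment-informed alignment concrete, the sketch below shows one simple way published “share not proficient” statistics from two assessment programs could be placed on a common scale using countries that appear in both. This is an illustration only, not the authors’ actual method, and all country names and figures are made-up placeholders.

```python
# Illustrative sketch (hypothetical data, not the actual harmonization method):
# link two assessment programmes' "share not proficient" statistics via the
# countries that participated in both, then adjust the remaining countries.

prog_a = {"CountryX": 0.62, "CountryY": 0.48, "CountryZ": 0.75}  # reference programme
prog_b = {"CountryX": 0.55, "CountryY": 0.40, "CountryW": 0.70}  # programme to be linked

overlap = prog_a.keys() & prog_b.keys()
# The mean difference on overlapping countries serves as a crude linking constant.
offset = sum(prog_a[c] - prog_b[c] for c in overlap) / len(overlap)

harmonized = dict(prog_a)
for country, share in prog_b.items():
    if country not in harmonized:
        # Clamp so adjusted shares remain valid proportions in [0, 1].
        harmonized[country] = min(1.0, max(0.0, share + offset))

print(harmonized)
```

In practice the adjustments described above are far richer, accounting for test difficulty, grade, and the shape of the distribution, but the basic logic of borrowing strength from overlapping observations is the same.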

This methodology provides harmonized learning proficiency statistics for 97% of the African continent’s school-aged population, offering valuable insights such as:

  • The surprisingly strong predictive power of proficiency at age 7 for later outcomes
  • The wide disparities in early learning outcomes across countries, differences that are often larger than the gains achieved over the entire primary cycle
  • The sobering reality that in many systems, out-of-school children may not perform significantly better than their in-school peers

What makes this approach powerful is its practical utility. It enables governments, funders, and researchers to understand and communicate relative performance, identify progress (or lack thereof), and update learning data with minimal delay. It also helps circumvent the political and statistical challenges that sometimes accompany more rigid or technical harmonization methods.

Yet it also has limitations. Some critics may question the subjective elements involved in adjusting or imputing data, while national policymakers may resist results that portray their systems unfavorably. However, in a context where delay equates to denying millions the opportunity to learn, taking practical and credible action now is better than waiting for perfect data tomorrow.

Measuring zero scores – a simple yet powerful indicator

Cally Ardington from the African Foundational Learning Data Hub (AFLEARN) presented a feasibility study exploring the use of “zero scores” (children unable to read a single word) as an early indicator of foundational learning. This indicator complements the harmonized end-of-primary proficiency statistics, providing a sensitive early-stage signal of whether systems are building literacy foundations.

The strength of this approach lies in its simplicity and communicability. While minimum proficiency thresholds can be opaque to non-specialists, the inability to read a single word is immediately understandable to parents, policymakers, and the public. The task and measure are basic enough to avoid language and curriculum distortions, and to sidestep the methodological challenges of scaling or mapping onto minimum proficiency levels.

Measuring zero scores also enables the use of a broader range of assessments, since zero scores are arguably more comparable across assessments than averages. Existing data sources, from household surveys such as MICS to school-based assessments such as PASEC, already capture or can easily capture the information needed to compute zero scores.

Relatively minor adjustments to existing tools, such as refining how practice sentence results are recorded in MICS, would extend coverage of this indicator considerably. Some technical adjustments remain, for example aligning assessments conducted by age (e.g., MICS) with those conducted by grade (e.g., PASEC) and ensuring comparability across contexts, but these are feasible. The result would be an indicator that provides timely, publicly communicable insights into the state of foundational learning and supports efforts to reach those furthest behind.
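Part of the indicator’s appeal is how little machinery it needs. The sketch below shows the basic tabulation, computing the share of assessed children in a grade who read zero words, using invented child-level records rather than any real survey data.

```python
# Illustrative sketch (hypothetical records, not real survey data): compute the
# "zero score" share - children unable to read a single word - by grade, the
# kind of tabulation EGRA-style reading assessments make possible.

children = [
    {"grade": 2, "words_read_correctly": 0},
    {"grade": 2, "words_read_correctly": 14},
    {"grade": 2, "words_read_correctly": 0},
    {"grade": 3, "words_read_correctly": 31},
    {"grade": 3, "words_read_correctly": 0},
    {"grade": 3, "words_read_correctly": 52},
]

def zero_score_share(records, grade):
    """Share of assessed children in the given grade who read zero words."""
    cohort = [r for r in records if r["grade"] == grade]
    zeros = sum(1 for r in cohort if r["words_read_correctly"] == 0)
    return zeros / len(cohort)

print(f"Grade 2 zero-score share: {zero_score_share(children, 2):.0%}")
print(f"Grade 3 zero-score share: {zero_score_share(children, 3):.0%}")
```

A real implementation would add survey weights and the age-versus-grade alignment discussed above, but the core statistic stays this simple, which is exactly what makes it easy to communicate.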

From data to advocacy: making learning tracking actionable

Reflecting on these approaches, we might ask why these innovations matter. The answer is simple: we cannot improve what we do not measure, and we cannot generate political will without clear, compelling evidence that captures attention.
 
Both approaches described above meet the critical criteria for effective advocacy:

  • They are measurable using existing tools or minor modifications.
  • They are understandable to ministers, journalists, and the public.
  • They allow us to show progress over time, especially in low-performing systems.
  • They can be made public without triggering confusion or misinterpretation.

Importantly, both approaches resonate with the broader goals of CESA 2026–2035 and SDG 4. They help shift the focus from access alone to learning for all, and they make it possible to track whether investments, no matter how modest, are delivering real improvements for learners.

Conclusion: a call to action for African education stakeholders

Innovation in education is not just about technology or new pedagogies; it is also about rethinking how we view and utilize data. In the face of tight budgets and persistent learning crises, we must adopt smart, scalable ways to monitor progress and advocate for change.

These innovations show that practical solutions are within reach. We now have tools that can make learning visible, even in the most challenging situations.

As education stakeholders across Africa prepare for the next round of CESA implementation and SDG reporting, we urge governments, partners, and funders to support the above innovations. By investing in credible, actionable data, we can ensure that foundational learning is not only a shared aspiration but a measurable, trackable, and ultimately achievable goal for every child on the continent.