Building Sustainable Assessment Systems for Improved Learning in Africa

14 Aug 2025
Image:  Association for the Development of Education in Africa


Millions of children in Africa are unable to acquire age-appropriate skills in reading and mathematics, especially in their early years. This is known as learning poverty.1 African educational assessment systems mainly comprise classroom assessments, national examinations, and large-scale assessments. Large-scale assessments used across the African continent include regional assessments such as the Southern and Eastern Africa Consortium for Monitoring Educational Quality (SEACMEQ), the Programme d’Analyse des Systèmes Educatifs de la CONFEMEN (PASEC), and the PAL Network Citizen-led Assessments, as well as international assessments such as the Progress in International Reading Literacy Study (PIRLS) and PISA for Development.2 These large-scale assessments play an important role in tracking progress on key objectives set by governments towards reversing learning poverty.

Despite increased political commitment to improving foundational learning on the continent, assessment teams face a serious dilemma. While assessments are increasing in number and complexity, many are unsustainable. In addition, important questions remain: Do these assessments actually inform education policy decisions? Are they helpful in improving foundational learning? More importantly, are they building local skills or making governments reliant on external expertise?

A recent joint research project by the Association for the Development of Education in Africa (ADEA) and the African Foundational Learning Data Hub (AFLEARN) at the University of Cape Town across multiple African countries, including Ghana, Kenya, Mozambique, Malawi, Rwanda, Somalia, and South Africa, reveals various challenges undermining the successful implementation of national assessments.

An unsustainable cycle of assessment dependency

African assessment bodies highlight five concerns that contribute to an unsustainable cycle of reliance on external actors:

  1. Financial constraints and prolonged uncertainty.
    Governments' lack of direct financial commitment means assessment teams still depend on external funding to implement local assessments. When clear budget lines are not incorporated into ministry planning, local authorities struggle to maintain assessment cycles beyond the period that donor funding is available. Funding gaps can have knock-on effects on program implementation, systems monitoring, and policy reform.

  2. External influence on the design of the assessment.
    International partners often have an outsized influence on the frequency and methodology of assessments compared to domestic priorities or capabilities. Many assessment teams end up using methods or instruments that are not aligned with their educational commitments or national strategies. This misalignment diminishes local ownership and jeopardizes the value of assessment data in promoting advancements in foundational learning.

  3. Gaps in technical capacity.
    Assessment teams often reported a lack of technical knowledge in critical areas, including psychometrics and data analysis. Without these specialised skills, teams rely on outside consultants for important parts of the assessment, from test design to the analysis of results. This situation creates a knowledge gap, making technical processes inaccessible to local teams and reinforcing capacity constraints instead of resolving them.
     
  4. Approaches to capacity building that are misaligned.
    Programs meant to increase local capacity sometimes fall short of fulfilling assessment teams' requirements. For example, in the case of technical skills like data analysis, hands-on, in-person training is often more effective – yet virtual options are commonly offered, likely due to financial and logistical considerations. Donor-funded programs may also restrict the number of people who can access training, limiting opportunities to build broader institutional knowledge.
     
  5. Limited integration of assessment data into policy cycles.
    In many systems, assessment findings are not systematically fed into education policy decisions. Several teams noted a disconnect between those analysing assessment data and those responsible for curriculum reform, resource allocation, or teacher training. If there is no clear intention behind the data collection – no defined policy it aims to inform – the result is often well-executed research with little real-world impact. As one official explained, “Historically, we collected a lot of data but not a lot of data was used… it wasn’t really part of our policy making or decision making; we just saw it as interesting research… [now] we’ve locked in indicators into the strategic planning of the department so they become annual indicators.” It is only when assessments are explicitly linked to decision-making goals that they drive change in education quality.

However, within this challenging landscape, promising innovations are emerging that demonstrate how African education systems are developing more sustainable approaches to assessment.

Promising pathways to sustainable assessment systems

Many African education systems are developing innovative methods to improve the effectiveness of assessments in decision-making and promote local ownership.

  1. Interventions based on assessment data.
    Ghana is a good example of how assessment results can be used to identify the most vulnerable groups of learners and target appropriate interventions. The government uses data from national assessments as part of its Ghana Accountability for Learning Outcomes Project (GALOP) to pinpoint underperforming schools and offer them targeted assistance. As one official explained: "The data was used to classify these schools, and even the support system needed in those other schools."
     
    This approach shows how assessment data can contribute to solutions for improving learning outcomes beyond simply measuring issues.

  2. An increase in local partnerships.
    Some countries are responding to their capacity constraints by forming strategic alliances with local universities and organisations to create sustainable technical knowledge. South Africa is in the early stages of working with local universities to co-develop training programs that respond directly to system needs. By embedding expertise development within national institutions, they are laying the groundwork for sustainable capacity rooted in local partnerships. Malawi’s national examination board (MANEB) also provides a strong example of growing local collaboration. While MANEB leads the development and administration of exams, it works closely with curriculum experts, university lecturers, and school teachers through its technical and grading committees to ensure assessments are contextually appropriate and credible.
     
    These efforts reflect a shift towards home-grown solutions, where sustainable assessment systems are built through long-term investment in national institutions and local expertise.

  3. Increasing local ownership within existing structures.
    Kenya’s approach to assessment emphasises building internal leadership by assigning clear and consistent roles across both national and international studies. This structure fosters a sense of shared responsibility while ensuring continuity and capacity development within the national team. A Kenyan education official explained:  “There are five of us. The coordinator, one person in charge of all international learning assessments – PISA, AMPL, SEACMEQ. Another leads assessments for Grade 3 and below, covering the lower primary tier. One responsible for secondary-level assessments. Each of the five of us plays a different role in each of the studies. One will be the logistics manager, another the sampling manager across the different studies, even though they are in charge of a specific study. We do this so as to build capacity within one person so they are able to do the functions of sampling, data management, and logistical management.”
     
    This deliberate role distribution not only deepens technical capacity but also embeds ownership within the national team, making the system more resilient and less dependent on external actors.

  4. National ownership of external partnerships.
    Rwanda has set up clear processes to guarantee all outside assessment support complements national goals and develops local capacity. A Rwandan education official explained how they worked: "Whatever any development partner is trying to implement in the country, we make sure that all the activities fall into the national plans... You look at the plan, you select in which area you want to support. You support under our name. It's like we partner. And once you leave, you leave."
     
    By requiring all external support to align with national priorities and be delivered in partnership, Rwanda reinforces local ownership and safeguards the long-term sustainability of its assessment system.

  5. Strengthening coordination between assessment, curriculum, and teacher training.
    In South Africa, assessment results are being translated into actionable insights for the classroom through the use of curriculum-linked proficiency scales. Teachers receive feedback not just on learner performance levels, but also on the pedagogical support needed to move learners forward. As one South African official put it: “If you go and talk to any teacher [and say] we scored 320 on PIRLS, [that] doesn’t make any sense to them… so we’ve created proficiency scales which link back to the curriculum… when it comes to data utilisation we’ve locked in pedagogical activities that should accompany learners being located at this [320 score] level and the level of support that is needed.”
     
    By linking scores to curriculum-based proficiency levels and concrete pedagogical activities, this approach turns assessment data into guidance that teachers can act on in the classroom.
     

Moving forward: Recommendations for international organisations

Based on these experiences, several recommendations emerge for international organisations supporting assessment systems in Africa:

  1. Prioritise collaboration over imposition.
    As explained by a Somali education official: "One-size-fits-all solution[s], without [considering] the local context, can lead to ineffective outcomes. International organisation[s] should work with the local stakeholders to co-create a solution."

  2. Invest in in-person, practical capacity building.
    Virtual training is often insufficient for developing technical skills like psychometric analysis. As a Ghanaian official requested: "For international partners that support us...we would be glad if they introduced a lot of in-person, hands-on capacity-building workshops for the staff."

  3. Look beyond short-term programs to system improvement.
    As recommended by a Kenyan education official: "If they intend to have a sustainable way of doing national assessments or international assessments, let them embrace a systems approach, empower the people that really do the work... It will be easier to make provisions in the strategic plans and eventually into the budgets."

  4. Produce accessible and relevant assessment data.
    Assessment reports often remain on shelves without influencing policy discussions. As one official noted: "Don't make reports only for conferences and desks and wardrobes that we have for many books and reports, but talk about [it] in different forms, in different aspects. Break it down to the parent level to the learner level too, so that everyone sees what it is that they can do to improve."

  5. Strengthen continental strategies and oversight of African assessments.
    A continental approach – led by institutions like the African Union (AU), ADEA and the Association for Educational Assessments in Africa (AEAA) – could help pool resources for participation in international studies, coordinate regional assessment initiatives, and support cross-country technical capacity-building programs. This role could also be expanded to include strengthening African education systems’ ability to adopt systems thinking approaches – for example, through targeted training in psychometrics, assessment design, and data use. As an assessment director explained, "Poor countries…have to pay out of their pocket the full participation fees and administration costs for assessments, because we don't have a structure within the African Union equivalent to the European Commission."

Conclusion: Towards contextually responsive assessment systems

Creating sustainable assessment systems in Africa calls for overcoming the tension between local relevance and rigorous standards. The best strategies recognise that quality assessments have to be both technically sound and contextually relevant. International partners and African education authorities must cooperate and coordinate their efforts to develop assessment systems that are locally led, owned, affordable, and – most importantly – dependable. Using this approach will ensure assessments monitor progress and hold African governments accountable for their commitments.
 


------------------------------------------

  2. Beyond these large-scale international assessments, EGRA (Early Grade Reading Assessment) and EGMA (Early Grade Mathematics Assessment) have been widely adapted by national assessment teams across Africa to measure foundational skills. These tools help identify which skills learners have acquired and which ones they still need to master, and are often used to inform instruction and early interventions.