2026 Course: Applied Item Response Theory for Policy and Research
About the African Foundational Learning Data Hub
The African Foundational Learning (AFLEARN) Data Hub is dedicated to African foundational learning data, measurement, research, and capacity building. The hub’s mission is to enhance quality and capacity across the full data lifecycle — from collection to impact.
Course Description
This short course builds practical skills in applying Item Response Theory (IRT) to existing educational assessment data. Designed for participants who have completed Course 1 (IRT Literacy), the course focuses on selecting appropriate IRT models, fitting basic models in R, interpreting results accurately, and using IRT outputs responsibly in research and policy contexts. Emphasis is placed on applied decision-making, model assumptions, fairness, and clear communication of results.
Delivered in person over five days, the course is intended for participants who work with existing large-scale education assessment datasets, such as PASEC, SACMEQ, TIMSS, PIRLS, LANA, and AMPL, and who wish to apply basic IRT methods or interpret IRT results to address policy-relevant or research questions.
By the end of the course, participants will be better equipped to critically engage with IRT-based analyses, communicate results to non-technical audiences, and apply insights from large-scale assessment data to real-world policy and research contexts.
Who Should Apply?
This course is for researchers, analysts, and education professionals working with large-scale assessment data. Applicants must have completed the IRT Literacy Course or demonstrate a strong foundational understanding of Item Response Theory. Experience with quantitative education data is expected.
Course Objectives
By the end of the course, participants will be able to:
- Apply IRT as a measurement framework: Explain IRT as a probabilistic model linking latent traits to observed responses and situate it within applied measurement and evidence-building processes.
- Assess data readiness for IRT analysis: Evaluate whether assessment datasets are suitable for IRT, considering construct definition, item design, dimensionality, and data quality.
- Fit basic IRT models in R: Estimate and compare 1PL and 2PL models using existing data, and interpret estimation output and model fit (the 2PL form is shown after this list).
- Interpret item and person parameters: Interpret item difficulty and discrimination, person ability estimates, and uncertainty in applied educational contexts.
- Evaluate assumptions and model fit: Identify common assumption violations, estimation issues, and their implications for interpretation and use.
- Examine fairness and subgroup performance: Use IRT outputs to explore subgroup differences and potential differential item functioning (DIF), with attention to ethical and contextual considerations.
- Translate IRT results into policy-relevant evidence: Communicate findings clearly and responsibly, articulating assumptions, limitations, and appropriate uses.
- Recognize pathways for advanced IRT applications: Describe advanced applications (e.g., equating, linking, adaptive testing) and identify when further training or expert collaboration is needed.
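For reference, the 2PL model named in these objectives has the following standard form (conventional notation, not taken from the course materials); the 1PL is the special case in which all items share a single discrimination parameter:

```latex
% Probability that person i answers item j correctly:
% \theta_i = person ability, a_j = item discrimination, b_j = item difficulty.
P(X_{ij} = 1 \mid \theta_i) = \frac{1}{1 + \exp\left[-a_j(\theta_i - b_j)\right]}
```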
Course Structure and Outline
Duration: 5 days
Format: Short lectures, live demonstrations in R, guided hands-on activities, and applied assignments using a common dataset.
Day 1: Orientation and Technical Setup
- Recap of key IRT concepts from Course 1
- Positioning IRT within the evidence cycle and applied decision-making
- Introduction to item difficulty and discrimination
- Demonstration of RStudio basics and IRT packages
- Assignment 1: Descriptive statistics and CTT analysis using the provided R template (a minimal sketch follows this list)
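As a taste of the Assignment 1 workflow, here is a minimal base-R sketch of a classical test theory (CTT) item analysis. The `responses` data frame is hypothetical 0/1 scored data, not the course template:

```r
# Hypothetical 0/1 scored item responses: rows = examinees, columns = items.
responses <- as.data.frame(matrix(rbinom(500 * 10, 1, 0.6), ncol = 10))

# CTT item difficulty: proportion of examinees answering each item correctly.
p_values <- colMeans(responses, na.rm = TRUE)

# CTT item discrimination: corrected item-total (point-biserial) correlation,
# relating each item to the total score excluding that item.
total <- rowSums(responses, na.rm = TRUE)
item_total_r <- sapply(seq_along(responses), function(j) {
  cor(responses[[j]], total - responses[[j]], use = "complete.obs")
})

round(data.frame(p = p_values, r_it = item_total_r), 2)
```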
Day 2: Data Readiness and Coding Practice
- Guided support on R coding and interpretation of CTT results
- Overview of assessment data structures and practical IRT assumptions
- Hands-on evaluation of data quality, construct alignment, and suitability for IRT (a quick-check sketch follows this list)
- Assignment 2: Written reflection on whether IRT is appropriate for a familiar dataset
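The data-readiness discussion lends itself to a few quick programmatic checks. A sketch, reusing the hypothetical `responses` data frame from the Day 1 example; the flagging thresholds are illustrative, not course-prescribed:

```r
# Hypothetical 0/1 item responses, as in the Day 1 sketch.
responses <- as.data.frame(matrix(rbinom(500 * 10, 1, 0.6), ncol = 10))

missing_rate <- colMeans(is.na(responses))           # heavy missingness per item
item_var     <- sapply(responses, var, na.rm = TRUE) # near-zero variance = no information

# Flag items likely to be problematic for IRT estimation (thresholds illustrative).
names(responses)[item_var < 0.01 | missing_rate > 0.20]
```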
Day 3: Model Selection and Estimation
- IRT model taxonomy (1PL vs 2PL) and model choice logic
- Demonstration of model estimation and fit assessment in R
- Hands-on selection and fitting of multiple IRT models
- Comparison of model fit and parameter estimates
- Assignment 3: Model comparison and justification of the selected IRT model
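The Assignment 3 comparison could look like the following sketch, which assumes the `mirt` package (one widely used R option; the course's actual package choices are not specified here) and the hypothetical `responses` data from the Day 1 example:

```r
library(mirt)
responses <- as.data.frame(matrix(rbinom(500 * 10, 1, 0.6), ncol = 10))

fit_1pl <- mirt(responses, model = 1, itemtype = "Rasch")  # common discrimination
fit_2pl <- mirt(responses, model = 1, itemtype = "2PL")    # item-specific discrimination

anova(fit_1pl, fit_2pl)  # likelihood-ratio test plus AIC/BIC for the nested pair
coef(fit_2pl, IRTpars = TRUE, simplify = TRUE)  # a = discrimination, b = difficulty
M2(fit_2pl)              # limited-information overall model fit
```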
Day 4: Interpretation and Fairness
- Interpretation of item and person parameters
- Introduction to subgroup analysis and DIF (a screening sketch follows this list)
- Ethical considerations in applied IRT use
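One common DIF screen, sketched below, is logistic regression in the Swaminathan and Rogers style: for each item, test whether group membership predicts the response after conditioning on total score. The `group` variable is a hypothetical demographic indicator, and the course may well demonstrate other approaches (e.g., IRT-based DIF tests):

```r
# Hypothetical 0/1 item responses and a hypothetical grouping variable.
responses <- as.data.frame(matrix(rbinom(500 * 10, 1, 0.6), ncol = 10))
group <- factor(rbinom(nrow(responses), 1, 0.5), labels = c("reference", "focal"))
total <- rowSums(responses)

# Uniform-DIF screen: does group add signal beyond the total-score proxy for ability?
dif_p <- sapply(seq_along(responses), function(j) {
  m0 <- glm(responses[[j]] ~ total, family = binomial)
  m1 <- glm(responses[[j]] ~ total + group, family = binomial)
  anova(m0, m1, test = "LRT")[2, "Pr(>Chi)"]  # LRT p-value for the group term
})
names(dif_p) <- names(responses)
round(dif_p, 3)  # small p-values flag items for substantive review, not automatic removal
```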
Day 5: Synthesis and Applied Use
- Principles for reporting IRT results for research and policy audiences (a scoring sketch follows this list)
- Synthesis of the full applied IRT workflow
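Reporting for policy audiences typically starts from person ability estimates and their uncertainty. A sketch, again assuming `mirt` and the hypothetical objects from the Day 3 example:

```r
library(mirt)
responses <- as.data.frame(matrix(rbinom(500 * 10, 1, 0.6), ncol = 10))
fit_2pl   <- mirt(responses, model = 1, itemtype = "2PL")

# EAP ability estimates with standard errors, to report scores alongside uncertainty.
theta <- fscores(fit_2pl, method = "EAP", full.scores.SE = TRUE)
summary(theta[, "F1"])  # distribution of estimated abilities (logit scale)
mean(theta[, "SE_F1"])  # typical measurement uncertainty to report with the scores
```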
Lead Facilitator
Tamlyn Lahoud is a PhD candidate in the Quantitative Methodology program at the University of Georgia. She completed her MSc in Statistics at Rhodes University, where she became interested in Item Response Theory. Her research applies and extends psychometric models to integrate multimodal data, offering deeper insight into students' skills, misconceptions, and response behaviors. She also develops methodological innovations to strengthen model precision in small-sample and adaptive testing contexts. Her work is grounded in a commitment to developing data-driven tools and measurement approaches that promote equity, support evidence-based decision making, and enhance teaching and learning across diverse educational contexts.
How to Apply
- Complete the application form by 1 April 2026.
- Spaces are limited, and successful applicants will be notified.
- For questions, contact us at datafirst@uct.ac.za