
Code Acts in Education: Counting Learning Losses

The idea that young people have ‘lost learning’ as a result of disruptions to their education during the Covid-19 pandemic has become accepted as common knowledge. ‘Learning loss’ is the subject of numerous large-scale studies, features prominently in the media, and is driving school ‘catch-up’ policies and funding schemes in many countries. Yet for all its traction, there has been less attention to the specific but varied ways in which learning loss is calculated. Learning loss matters because it has been conceptualized and counted in particular ways as an urgent educational concern, and is animating public anxiety, commercial marketing, and political action.

Clearly educational disruptions will have affected young people in complex and highly differentiated ways. My interest here is not in negating the effects of being out of school or critiquing various recovery efforts. It’s to take a first run at examining learning loss as a concept based on particular forms of measurement and quantification, one that is now driving education policy strategies and school interventions in countries around the world. Three different ways of calculating learning loss stand out: first, the longer psychometric history of statistical learning loss research; second, its commercialization by the testing industry; and third, its reframing by economists through econometric forms of analysis.

The measurement systems that enumerate learning loss are, in several cases, contradictory, contested, and incompatible with one another. ‘Learning loss’ may therefore be an incoherent concept, better understood as multiple ‘learning losses’, each based on its own measuring system.

Psychometric set-ups

Learning loss research is usually traced back more than 40 years to the influential publication of Summer Learning and the Effects of Schooling by Barbara Heyns in 1978. The book reported on a major statistical study of the cognitive development of 3000 children while not in school over the summer, using the summer holiday as a ‘natural experimental situation’ for psychometric analysis. It found that children from lower socioeconomic groups tend to learn less during the summer, or even experience a measurable loss in achievement.

These initial findings have seemingly been confirmed by subsequent studies, which have generally supported two major conclusions: (1) the achievement gap by family SES traces substantially to unequal learning opportunities in children’s home and community environments; and (2) the experience of schooling tends to offset the unequalizing press of children’s out-of-school learning environments. Since the very beginning of learning loss studies, then, the emphasis has been on the deployment of psychometric tests of the cognitive development of children not in school, the lower achievement of low-SES students in particular, and the compensatory role that schools play in mitigating the unequalizing effects of low-SES family, home and community settings.

However, even researchers formerly sympathetic to the concept of learning loss have begun challenging some of these findings and their underlying methodologies. In 2019, the learning loss researcher Paul T. von Hippel expressed serious doubt about the reliability and replicability of such studies. He identified flaws in learning loss tests, a lack of replicability of classic findings, and considerable contradiction with other well-founded research on educational inequalities.

Perhaps most urgently, he noted that a significant change in psychometric test scoring methods—from paper and pen surveys to ‘a more computationally intensive method known as item response theory’ (IRT) in the mid-1980s—completely reversed the original findings of the early 1980s. With IRT, learning loss seemed to fade away. The original psychometric method ‘shaped classic findings on summer learning loss’, but the newer IRT method produced a very different ‘mental image of summer learning’.

Moreover, noted von Hippel, even modern tests using the same IRT method produced contradictory results. He reported on a comparison of the Measures of Academic Progress (MAP) test developed by the testing organization Northwest Evaluation Association (NWEA), and a test developed for the Early Childhood Longitudinal Study. The latter found that ‘summer learning loss is trivial’, but the NWEA MAP test reported that ‘summer learning loss is much more serious’. Learning loss, then, appears at least in part to be an artefact of the particular psychometric set-up constructed to measure it, with results that contradict one another depending on the instrument used. This is not just a historical problem of underdeveloped psychometric instruments: it persists in the computerized IRT systems that were deployed to measure learning loss as the Covid-19 pandemic set in during 2020.
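To make that contrast concrete, below is a minimal illustrative sketch in Python of the difference between simply summing correct answers and estimating ability under a basic one-parameter IRT (Rasch) model. It is emphatically not the scoring procedure of the NWEA MAP, the Early Childhood Longitudinal Study test, or any other instrument discussed here: the item difficulties and response patterns are invented purely to show how the same one-question drop in raw score can translate into quite different amounts of ‘loss’ on an IRT scale.

```python
# Illustrative sketch only: invented item difficulties and response patterns,
# not the scoring model of any actual assessment discussed in this post.
import math

def rasch_prob(theta: float, difficulty: float) -> float:
    """Probability of a correct answer under the Rasch (1-parameter IRT) model."""
    return 1.0 / (1.0 + math.exp(-(theta - difficulty)))

def estimate_theta(responses, difficulties, iterations=50):
    """Maximum-likelihood ability estimate via Newton-Raphson.
    Assumes a mixed response pattern (all-correct or all-wrong patterns
    have no finite maximum-likelihood estimate)."""
    theta = 0.0
    for _ in range(iterations):
        probs = [rasch_prob(theta, b) for b in difficulties]
        gradient = sum(x - p for x, p in zip(responses, probs))
        information = sum(p * (1 - p) for p in probs)
        theta += gradient / information
    return theta

# Ten items of invented, increasing difficulty (in logits).
difficulties = [-2.0, -1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5, 2.0, 2.5]

# Two students each answer one fewer question correctly after the summer.
before_high = [1, 1, 1, 1, 1, 1, 1, 1, 1, 0]   # raw score 9
after_high  = [1, 1, 1, 1, 1, 1, 1, 1, 0, 0]   # raw score 8
before_mid  = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0]   # raw score 6
after_mid   = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]   # raw score 5

for label, before, after in [("high scorer", before_high, after_high),
                             ("mid scorer", before_mid, after_mid)]:
    raw_drop = sum(before) - sum(after)
    irt_drop = (estimate_theta(before, difficulties)
                - estimate_theta(after, difficulties))
    print(f"{label}: raw-score drop = {raw_drop}, "
          f"IRT ability drop = {irt_drop:.2f} logits")
```

The point is not that either number is the ‘right’ one, but that the scoring model itself shapes the size of the loss that gets reported: the same one-item drop in raw score corresponds to a larger or smaller change on the IRT scale depending on where a student sits on it.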

Commercializing learning loss

Here it is important to note that NWEA is among the most visible of testing organizations producing data about learning loss during the pandemic. Even before the onset of Covid-19 disruptions, NWEA was using data on millions of US students who had taken a MAP Growth assessment to measure summer learning loss. Subsequently, the NWEA MAP Growth test has been a major source of data about learning loss in the US, alongside various assessments and meta-analyses from the likes of commercial testing companies Illuminate, Curriculum Associates, and Renaissance and the consultancy McKinsey and Company.

Peter Greene has called these tests ‘fake science’, arguing that ‘virtually all the numbers being used to “compute” learning loss are made up’. In part that is because the tests only measure reading and numeracy, so don’t account for anything else we might think of as ‘learning’, and in part because the early-wave results were primarily projections based on recalculating past data from completely different pre-pandemic contexts. Despite their limitations as systems for measuring learning, the cumulative results of learning loss tests have led to widespread media coverage, parental alarm, and well-funded policy interventions. In the US, for example, states are spending approximately $6.5 billion addressing learning loss.

Learning loss results based on Renaissance Star reading and numeracy assessments for the Department for Education

 

In England, meanwhile, the Department for Education commissioned the commercial assessment company Renaissance Learning and the Education Policy Institute to produce a national study of learning loss. The study drew on reading and mathematics assessments of over a million pupils who took a Renaissance Star test in autumn 2020, and its findings were published by the Department for Education as an official government document. An update report in 2021, published on the same government webpage, linked the Renaissance Star results to the National Pupil Database. This arrangement exemplifies both the ways commercial testing companies have generated business from measuring learning loss and their capacity to shape and inform government knowledge of the problem, as well as the persistent use of reading and numeracy results as proximal evidence of deficiencies in learning.

Moreover, learning loss has become a commercial opportunity not just for testing companies delivering the tests, but for the wider edtech and educational resources industry seeking to market learning ‘catch-up’ solutions to schools and families. ‘The marketing of learning loss’, Akil Bello has argued, ‘has been fairly effective in getting money allocated that will almost certainly end up benefiting the industry that coined the phrase. Ostensibly, learning loss is a term that sprung from educational research that identified and quantified an effect of pandemic-related disruptions on schools and learning. In actuality, it’s the result of campaigns by test publishers and Wall Street consultants’.

While not entirely true—learning loss has a longer academic history as we’ve seen—it seems accurate to say the concept has been actively reframed from its initial usage in the identification of summer loss. Rather than relying on psychometric instruments to assess cognitive development, it has now been narrowed to reading and numeracy assessments. What was once a paper and pen psychometric survey in the 1980s has now become a commercial industry in computerized testing and the production of policy-influencing data. But this is not the only reframing that learning loss has experienced, as the measurements produced by the assessment industry have been paralleled by the development of alternative measurements by economists working for large international organizations.

Economic hysteresis

While early learning loss studies were based in psychometric research in localized school district settings, and the assessment industry has focused on national-level results in reading and numeracy, other recent large-scale studies of learning loss have begun taking a more econometric approach, at national and even global scales, derived from the disciplinary apparatus of economics and labour market analysis.

Influential international organizations such as the OECD and World Bank, for example, have promoted and published econometric research calculating and simulating the economic impacts of learning loss. They framed learning loss as predicted skills deficits caused by reduced time in school, which would result in weaker workforce capacity, reduced income for individuals, overall ‘human capital’ deficiencies for nations, and thereby reduced gross domestic product. The World Bank team calculated this would cost the global economy $11 trillion, while the economists writing for the OECD predicted ‘the impact could optimistically be 1.5% lower GDP throughout the remainder of the century and proportionately even lower if education systems are slow to return to prior levels of performance. These losses will be permanent unless the schools return to better performance levels than those in 2019’.

These gloomy econometric calculations are based on particular economic concepts and practices. As another OECD publication framed it, learning loss represents a kind of ‘hysteresis effect’ of the sort usually studied by labour economists as a measure of the long-term, persistent economic impacts of unemployment or other events in the economy. Framing education in terms of economic hysteresis thus assumes learning loss to be a causal determinant of long-term economic loss, and that mitigating it should be a major policy preoccupation for governments seeking to upskill human capital for long-term GDP growth. Christian Ydesen has recently noted that the OECD calculations about human capital deficits caused by learning loss are already directly influencing national policymakers and shaping education policies.

It’s obvious enough why the huge multitrillion-dollar deficit projections of the World Bank and OECD would alarm governments and galvanize remedial policy interventions in education. But the question remains of how these massive numbers were produced. My following notes on this are motivated by talks at the excellent recent conference Quantifying the World, especially a keynote presentation by the economic historian Mary Morgan. Morgan examined ‘umbrella concepts’ used by economists, such as ‘poverty’, ‘development’ and ‘national income’, and the ways each incorporates a set of disparate elements, data sets, and measurement systems.

The production of numerical measurements, Morgan argued, is what gives these umbrella concepts their power, particularly to be used for political action. Poverty, for example, has to be assembled from a wide range of measurements into a ‘group data set’. Or, as Morgan has written elsewhere, ‘the data on population growth of a society consist of individuals, who can be counted in a simple aggregate whole’, but for economists ‘will more likely be found in data series divided by occupational classes, or age cohorts, or regional spaces’. Her interest is in ‘the kinds of measuring systems involved in the construction of the group data set’.

Figures published by the OECD on the economic impacts of learning loss on G20 countries

Learning loss, perhaps, can be considered an umbrella concept that depends on the construction of a group data set, which in turn relies on a particular measuring system that aligns disparate data into the ‘whole’. If we look, for example, at the OECD report ‘The Economic Impacts of Learning Loss’, it is based on a wide range of elements, data sets and measuring systems. Its authors are Eric Hanushek and Ludger Woessmann, both economists and fellows of the conservative, free-market public policy think tank the Hoover Institution, based at Stanford University. The report’s projections of 1.5-3% lower GDP for the rest of the century represent the ‘group data set’ in their analysis. But this consists of disparate data sets, which include: estimates of hours per day spent learning; full days of learning lost by country; assessments of the association between skills learned and occupational income; correlational analyses of educational attainment and income; effects of lost time in school on the development of cognitive skills; potential deficits in the development of socio-emotional skills; and how all of these are reflected in standardized test scores.

It’s instructive to look at some excerpts from the report:

Consistent with the attention on learning loss, the analysis here focuses on the impact of greater cognitive skills as measured by standard tests on a student’s future labour-market opportunities. …  A rough rule of thumb, found from comparisons of learning on tests designed to track performance over time, is that students on average learn about one third of a standard deviation per school year. Accordingly, for example, the loss of one third of a school year of learning would correspond to about 11% of a standard deviation of lost test results (i.e., 1/3 x 1/3). … In order to understand the economic losses from school closures, this analysis uses the estimated relationship between standard deviations in test scores and individual incomes … based on data from OECD’s Survey of Adult Skills (PIAAC), the so-called “Adult PISA” conducted by the OECD between 2011 and 2015, which surveyed the literacy and numeracy skills of a representative sample of the population aged 16 to 65. It then relates labour-market incomes to test scores (and other factors) across the 32 mostly high-income countries that participated in the PIAAC survey.

So as we can see, the way learning loss is constructed as an umbrella concept and a whole data set by the economists working for the OECD involves the aggregation of many disparate factors, measures and econometric measurement practices. These include past OECD data, basic assumptions about learning being synonymous with ‘cognitive skills’ and objectively measurable through standardized tests, and a host of specific measuring systems. Projections are constructed from all these elements to estimate the economic costs of learning loss for individual G20 countries, which are then combined into ‘aggregate losses in GDP across G20 nations’, using the World Bank’s World Development Indicators database as the base source for the report’s high-level predictions.
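To see how the building blocks fit together, here is a deliberately crude sketch in Python of the chain of reasoning the excerpt describes. It is not a reproduction of Hanushek and Woessmann’s model: only the ‘one third of a standard deviation per school year’ rule of thumb comes from the report itself, while the return to skills and the workforce share used below are placeholder values invented for illustration, standing in for the report’s PIAAC-based estimates and its long-run growth modelling.

```python
# A crude, illustrative sketch of the chain of reasoning described in the
# excerpt above; NOT a reproduction of the OECD/Hanushek-Woessmann model.
# Only the "one third of a standard deviation per school year" rule of thumb
# comes from the report; every other number is an invented placeholder.

SD_LEARNED_PER_SCHOOL_YEAR = 1 / 3   # rule of thumb quoted in the report
school_years_lost = 1 / 3            # roughly one term of lost schooling

# Step 1: convert lost schooling time into a test-score deficit.
test_score_loss_sd = school_years_lost * SD_LEARNED_PER_SCHOOL_YEAR
print(f"Test-score loss: {test_score_loss_sd:.1%} of a standard deviation")
# -> about 11% of a standard deviation, the figure quoted in the excerpt.

# Step 2: convert the test-score deficit into lower individual earnings.
# The report estimates the skills-earnings relationship from PIAAC data;
# the 20% return per standard deviation used here is a placeholder, not
# the report's estimate.
assumed_return_per_sd = 0.20
earnings_loss = test_score_loss_sd * assumed_return_per_sd
print(f"Illustrative loss in lifetime earnings: {earnings_loss:.1%}")

# Step 3: scale up to the economy. The report does this with a long-run
# growth model; here we simply assume the affected cohorts eventually make
# up a given share of the workforce and that GDP falls in proportion.
assumed_workforce_share = 0.5
gdp_loss = earnings_loss * assumed_workforce_share
print(f"Illustrative long-run GDP loss: {gdp_loss:.1%}")
```

Each step of the sketch corresponds to a separate measuring system (time out of school, test-score scales, skill-to-earnings relationships, national accounts), and the headline GDP figures only exist once all of them are bolted together under the single umbrella of ‘learning loss’.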

It is on the basis of this ‘whole’ calculation of learning loss—framed in terms of economic hysteresis as a long-term threat to GDP—that policymakers and politicians have begun to take action. ‘How we slice up the economic world, count and refuse to count, or aggregate, are contingent and evolving historical conventions’, argues Marion Fourcade. ‘Change the convention … and the picture of economic reality changes, too—sometimes dramatically’. While there may well be other ways of assessing and categorizing learning loss, it is the specific econometric assembly of statistical practices, conventions, assumptions, and big numbers that has made learning loss into part of ‘economic reality’ and into a powerful catalyst of political intervention.

Counting the costs of learning loss calculations

As a final reflection, I want to think along with Mary Morgan’s presentation on umbrella concepts for a moment longer. As the three examples I’ve sketchily outlined here indicate, learning loss can’t be understood as a ‘whole’ without disaggregating it into its disparate elements and the various measurement practices and conventions they rely on. I’ve counted only three ways of measuring learning loss here—the original psychometric studies; testing companies’ assessments of reading and numeracy; and econometric calculations of ‘hysteresis effects’ in the economy—but even these are made of multiple parts, and are based on longer histories of measurement that are contested, incompatible with one another, sometimes contradictory, and incoherent when bundled together.

As Morgan said at the Quantifying the World conference, ‘the difficulties—in choosing what elements exactly to measure, in valuing those elements, and in combining numbers for those many elements crowded under these umbrella terms—raise questions about the representing power of the numbers, and so their integrity as good measurements’.

Similar difficulties in combining the numbers that constitute learning loss might also raise questions about their power to represent the complex effects of Covid disruptions on students, and their integrity to produce meaningful knowledge for government. As my very preliminary notes above suggest, there is no single thing called learning loss, but multiple conceptual ‘learning losses’, each based on its own measurement system. There are social lives behind the methods of learning loss.

Regardless of the incoherence of the concept, learning loss will continue to exert effects on educational policies, school practices and students. It will buoy up industries, and continue to be the subject of research programs in diverse disciplines and across different sites of knowledge production, from universities to think tanks, consultancies, and international testing organizations. Learning loss may come at considerable cost to education, despite its contradictions and incoherence, by diverting pedagogic attention to ‘catch-up’ programs, allocating funds to external agencies, and attracting political bodies to focus on mitigation measures above other educational priorities.


Ben Williamson

Ben Williamson is a Chancellor’s Fellow at the Centre for Research in Digital Education and the Edinburgh Futures Institute at the University of Edinburgh.