
Code Acts in Education: Performing AI Literacy

A new international test of young people’s “AI literacy” has been announced by the OECD. Intended to provide a global measurement of the competencies needed to engage with AI, the test is planned to run in 2029, with results expected at the end of 2031. The timeline may be long, but as with all OECD assessments, the real issue is how education system leaders and educators might respond to it in advance.

Framed as part of the 2029 PISA exercise, the assessment is described by the OECD as follows:

The PISA 2029 Media & Artificial Intelligence Literacy (MAIL) assessment will shed light on whether young students have had opportunities to learn and to engage proactively and critically in a world where production, participation, and social networking are increasingly mediated by digital and AI tools.

AI literacy is currently a hot topic. For many commentators, it is “the” major imperative for education. Multiple definitions and frameworks circulate routinely on social media. Given the OECD’s enduring influence through educational testing, its AI literacy intervention could, then, be consequential in setting the international standard for students’ competencies to engage with AI.

What the OECD test will accomplish is to provide a concrete global definition of AI literacy, subject it to quantitative and comparative measurement, and encourage educators and students to “perform” to the test. This post is an attempt to work through how the OECD AI literacy test will function, and its potential implications.

Definition

The first stage of setting up a test of any competency is providing a clear definition of what is to be tested. The OECD offers its current definition of Media & AI Literacy as “the set of competencies to interact with digital content and platforms effectively, ethically and responsibly.”

It elaborates that “it is essential to assess and develop the competencies students need to understand how digital and AI tools work, the human role in digital tools and media, the social and ethical consequences of using digital and AI tools, how to communicate and collaborate effectively with digital and AI tools, and how to critically evaluate the media content.”

By mixing media literacy and AI literacy, the OECD has set itself a significant challenge. There is a very long history of efforts to conceptualize various forms of media, digital, data and AI literacies, and all of them involve some power struggles over definition.

Media educators, for example, have long sought to position media literacy in terms of the “3Cs” of consumption, creation, and critique. Educators focused on “digital literacy” have similarly advocated for students becoming critically informed about digital technologies, vendors and discourses rather than passive consumers and content creators alone.  

Precisely what kinds of literacies are taught to young people is always a political matter, helping to define how students are oriented as readers and writers, or consumers and producers, in their own social and technical contexts. Any single, settled definition therefore risks excluding certain kinds of literacy. And in the current political context, AI is not simply a “tool” for new forms of creativity and consumption, but a profoundly political technology requiring particular critical literacies.

James O’Sullivan has recently made a provocative and compelling case for “AI illiteracy,” noting that most discussion frames AI literacy “as a form of compliance: one learns the tools so as not to fall behind”:

There is, of course, a place for AI literacy—particularly where it enables informed critique, ethical resistance, or meaningful co-creation. But we must be cautious of treating it as a universal good. There is a difference between understanding enough to critique and absorbing so much technical detail that one begins to mistake function for truth. The illusion of understanding—the belief that being able to prompt a chatbot effectively constitutes deep knowledge—may be more dangerous than ignorance.

The provocative notion of AI illiteracy therefore foregrounds the politics of calling for universal AI literacy standards and the kinds of exclusions in terms of knowledge, learning and skills that most definitions impose.

The OECD is no neutral actor when it comes to AI—it has for several years promoted the use of AI in education and regularly produces epochal statements about its transformative effects. Much of its emphasis is on AI’s effects on the economy, and thus on maintaining productivity and economic progress through technological upskilling. It is now, through the AI literacy test, exerting its political authority over how AI skills are to be defined and valued.

Its aim is to produce a globally standardized definition of AI literacy that is likely to erase nuance and diversity in approaches to thinking about young people’s engagement with this family of controversial technologies and their contested effects. How much of the criticality of media literacy will survive, once the OECD has defined it in terms of “evaluation” skills, remains to be seen, though one could imagine it being reduced to the measurable matter of whether a user checks the accuracy of a prompt response.

Beyond the political decisions it will have to make about defining AI literacy, and what to include or exclude from the test, the OECD also faces a technical and methodological challenge: how to operationalize it as a series of test items that can be measured.

Quantification

Available documentation indicates that the OECD’s test developers are currently working on instruments for the test. The OECD has long positioned itself as a source of metrological expertise and innovation.

This means it is now translating AI literacy into testable elements that can generate quantitative and comparable results. How exactly the test will appear, what data it will generate, and how those data will be analyzed all remain unclear for now.

An illuminating precedent is the OECD’s earlier test of social and emotional skills, which involved producing a condensed psychometric scheme for the enumeration of student emotions. In practice, the OECD conducted a large review of available psychometric instruments before finally settling on an adapted version of the “Big 5” personality characteristics model.

In the process it erased other competing models for the measurement of social and emotional learning, imposing a global framework by which such skills or qualities may be measured and compared internationally. In doing so, it also produced a distinctive way of understanding “emotion” as an educational issue.

The model for assessing AI literacy is likely to follow a similar path. It will require AI literacy to be broken down into a series of measurable units which can feature as testable items. In fact, the OECD has already begun quantifying the capacities of AI itself in order to enact comparisons with human skills, as the basis for highlighting the necessary competencies for education systems to teach.  

In its description of the AI literacy assessment, the OECD claims it is exploring assessment innovations to capture these human capacities:

This new assessment is envisioned as a simulated environment that would allow the collection of evidence for multiple competencies of the literacy model. These competencies are assessed using a variety of functional tools that are accessible to students in a realistic way throughout the assessment (e.g., realistic simulations of the internet, social media, and generative AI tools).

As such, the assessment is not expected to be a conventional test, but to be organized as simulations making use of accessible existing “tools.” It will mobilize such tools to test a specifically defined “model” of AI literacy and its componentized “competencies.”

The eventual output of the assessment, in common with all OECD tests, must be numbers enabling the evaluation and comparison of programs, systems and countries.

AI literacy must become AI literacy-as-numbers, subject to techniques of metrology and amenable to comparison.

It’s not hard to imagine there being eventual winners in the AI literacy rankings. This is likely to encourage policy officials, especially in the US, China and Europe, to compete for position on the ordinal AI literacy scale. And it is such numbers, of course, that the OECD can mobilize to identify best practices for others to emulate.

Performativity

As with all OECD tests, the significance of the AI literacy assessment is not necessarily the quantitative results it will produce in more than half a decade’s time. It’s the activity that it incites in education systems in anticipation of the assessment. The existence of a test animates significant efforts to prepare students to take it. In this case that effort would serve the OECD’s overarching vision of AI as a transformative social and economic force requiring the participation of an AI literate population.

Indeed, the OECD is not only producing an AI literacy test, but collaborating with the European Commission on an “AI Literacy Framework” to be launched for consultation at the end of May 2025. The framework will consist of “the knowledge, skills and attitudes that will adequately prepare students in primary and secondary education”:

The initiative will also provide the foundation for the first assessment of AI Literacy in the OECD Programme for International Student Assessment (PISA) and support the EU’s goals to promote quality and inclusive digital education and skills. At its heart is the integration of AI literacy across school subjects. This includes teaching students to use AI tools as well as how to co-create with them and reflect on responsible and ethical use.

As well as offering the “foundation” for the test, this framework appears to normalize the idea of integrating AI into schooling itself. For AI literacy to be tested, it must not only be taught, but also become part of the normal routines of teaching and learning. The incorporation of AI into schooling is highly contested, with at least one key argument (among many others) being that it serves the market-making interests of technology and edtech businesses who see lucrative opportunities in introducing AI into schools.

Besides the OECD and the EC, another partner is code.org, the organization that has promoted “learning to code” internationally and was recently a major cheerleader of the US Executive Order “mandating” the use of AI, and the promotion of AI literacy, in American education. Audrey Watters argues that the kind of AI literacy promoted by the order serves governmental and commercial interests, but is likely to lack any criticality due to the US administration’s hostility to ideas about bias and discrimination.

More prosaically, the involvement of code.org as an industry-centric organization suggests the AI literacy framework may focus more on skills of AI use (“learning to prompt” as the new “learning to code”) than on any critical engagement with AI as a social and public problem.

In its most attenuated form, AI literacy can appear as a kind of technical training in efficient AI use—what I’ve elsewhere called “pedagoGPT” courses and classes. For some, educating students with AI literacy is even primarily a geopolitical and economic matter, as nations compete for talent and dominance in a new “space race.”

Luci Pangrazio has argued that similar kinds of “digital literacy” programs sponsored by technology companies now function as a form of governance in schools:

If digital literacy is defined and developed in relation to the different platforms and apps in schools, and if the platforms and apps in schools are increasingly designed to monitor and surveil (control) staff and students because that is how learning is evidenced, then digital literacy has become a powerful way of governing both teachers and students in schools.

In other words, the commercial co-optation of “digital literacy” has already turned it into a kind of technical training that habituates students and teachers to using the technology, which in turn exerts influence over their practice. Extrapolating from this case, we can see how the OECD’s AI literacy test and the associated framework might exert a kind of governing pressure on schools, educators and students to act accordingly.

It is the use of a test to incite anticipatory actions that is often referred to as “performativity.” In this context, performativity refers specifically to the ways that measurements, such as those produced by a test, compel education leaders and teachers to “teach to the test” in order to perform well in the measurement exercise.

The OECD PISA test is well known as an engine of performativity, inciting policymakers and system leaders to act to improve “performance” in order to rank highly in the results. PISA tests enable the statistical governance of education by compelling people to perform in reaction to the ranking.

As part of PISA 2029, AI literacy thus appears set to become part of international educational rankings through standardized testing. Educators and leaders may feel compelled to act on the OECD/EC/code.org framework in order to perform effectively on the test. This will harden the definition of AI literacy designated by the OECD, making AI skills and competencies into globally standardized and comparative indicators. AI literacy-as-numbers could make acting on AI a central preoccupation of schooling systems.

Infrastructuring AI literacy

While this is necessarily a little speculative, it’s important to recall the influence of the OECD on education systems worldwide. Its testing infrastructures enable the production of quantitative data at large scale, and drive educational decision-making and policy agendas. It’s worth remaining attentive to these ongoing political efforts to integrate AI literacy into schools, and the testing infrastructure the OECD is creating to enumerate students’ AI skills and competencies.

The OECD’s efforts should be understood as an attempt at infrastructuring AI literacy. Infrastructuring AI literacy means building, maintaining and enacting a testing and measurement system that will not only enumerate AI competencies, but make AI literacy into a central concern and objective of schooling systems.

It would function as an infrastructure of measurement that requires educators’ and students’ participation to construct international standardized data about AI literacy. They would have to perform to the measurement standards embedded in the infrastructure, following the prescriptions of the AI literacy framework, in order to “count.”

Tracing the development and evolution of the AI literacy assessment much more fully will help illuminate how AI is being conceived as an educational concern, how it is being worked into metrological systems, and how a measurement exercise that remains several years in the future may incite anticipatory actions in education settings. How will educators and students be enrolled into this OECD infrastructure of AI literacy measurement, and perform AI literacy in advance of the assessment?

 

This blog post has been shared by permission from the author.
Readers wishing to comment on the content are encouraged to do so via the link to the original post.

The views expressed by the blogger are not necessarily those of NEPC.

Ben Williamson

Ben Williamson is a Chancellor’s Fellow at the Centre for Research in Digital Education and the Edinburgh Futures Institute at the University of Edinburgh.