Code Acts in Education: Constructing AI in Education
Most definitions of “AI in education” start from technical categories. First there was rules-based AI, followed by data-driven predictive AI, and now generative AI for education. The picture changes, however, if we ask how AI is socially constructed as a topic of interest in education. Focusing on the social construction of AI in education brings into view the varied ways that different social groups define it according to their own concerns and interests.
In an excellent 2020 paper, Rebecca Eynon and Erin Young argued that “AI is not one thing: it is a complex sociotechnical artifact that needs to be understood as a phenomenon constructed through complex social processes.” Building on the “social construction of technology” approach in science and technology studies, they elaborated how AI has “different meanings for different people,” with significant effects on how such technologies are interpreted, used, or even rejected.
More specifically to the growing interest in AI in education at the time, they suggested that there are three key “relevant social groups” that each “construct and frame AI differently.” These three groups are academics, industry organizations, and policy groups.
Academics interested in developing AI in education, Eynon and Young suggested, construct or frame AI as a methodology for advancing insights into processes and practices of learning and teaching. Industry—which includes both edtech firms and large global tech companies with a stake in education—frames AI as the basis for market expansion and profit. Finally, policymakers, in their analysis, tend to frame AI in terms of legitimizing reformatory aims, such as upskilling economically productive knowledge workers.
“Stakeholders construct AI differently,” Eynon and Young argued, “in ways that are useful to them. This, in turn, defines future developments and these conceptualizations (and the fact they are different) will influence how we frame the future.” Elsewhere in the paper, they wrote, “AI is a complex social, cultural, and material artifact that is understood and constructed by different stakeholders in varied ways, and these differences have significant social and educational implications.”
Those three constructions certainly struck me as persuasive at the time. The paper informed a special issue on critical perspectives on AI in education that Rebecca and I edited together back in 2020, and underpinned a short piece I wrote a couple of years later, just as ChatGPT was launched, where I suggested it is important to pay attention to the “social life of AI in education.”
But how AI is constructed in the education sector has become ever more fragmented as a growing range of constituents have taken it up as an object of concern and interest. That means we are now faced with multiple competing frames for AI in education, each of which seeks to shape the direction of development and enactment in concrete settings and practices. This multiplication of framings of AI in education is important to acknowledge and articulate more fully, then, because each has implications for the future of education. So in this post I want to revisit the Eynon and Young paper, proposing we examine a much wider range of relevant social groups and their constructions of AI in education.
How relevant groups frame AI in education
Clearly the framings of AI in education by different groups that I outline next are not comprehensive, and there will be obvious exceptions within each category. They simply represent a partial, work-in-progress selection of characterizations based on observing developments in this space over many years. If anything, these characterizations might serve as the basis for other studies to investigate how AI is constructed by various actors across education systems in a range of contexts. The significance of doing so is to trace out how certain constructions of AI get prioritized, build momentum, and become embedded into practices and systems—and then to track the effects of these framings as they are enacted.
Government agencies. Government policymakers still often construct and frame AI in education as a rationale for calling for reforms targeting student skills for economic purposes. However, this is also being extended into framing AI as an experimental policy technology. Take this excerpt from a recent speech by the Secretary of State for Education in England:
So here’s my vision for the future. A system in which each and every child gets a top class education, backed by evidence based tech and nurtured by inspiring teachers. A system in which teachers are set free by AI and other technologies, less marking, less planning, less form filling. … We’re deploying AI to make that vision a reality, recognising it as the game changer that it is.
Here AI is constructed as a “game-changing” technology that will reduce administration and “set free” teachers by reducing their workload. It’s a seductive “vision” to be made into “reality” by “deploying AI”.
The crucial terms here are “evidence based” and “deploying.” The Department for Education has engaged in extensive experimentation in the development of AI in education through a programme of prototyping and pilot testing in schools. The UK government has also signed a memorandum of understanding with Google DeepMind, part of which is to give Google access to curriculum materials to develop a version of its Gemini chatbot fine-tuned to the national curriculum, and to continue research into AI in UK schools. Deployment here precedes evidence of efficacy.
In another national context, the Estonian Ministry of Education and Research has commenced training thousands of students and teachers to use AI, as part of a public-private partnership programme that also involves funding for access to ChatGPT and Gemini for teachers, “continuously developing special learning apps”, and longitudinal measurement following a series of metrics and benchmarks.
As these examples indicate, government groups have now positioned AI as a site of experimentation, in two senses. The first is that it constitutes new public-private experiments in public education delivery through private-sector technology involvement.
Second, these live developments depart from a long history of “evidence-based” practices in education to focus instead on live trials of prototypes and continuous testing of applications in classroom contexts. This is experimental “evidence-for-policy” production shaped by the seeming necessity of accelerated technological deployment, with evaluation and evidence of efficacy to follow only once the tech is embedded and in widespread use. In this sense, it constitutes a form of “testing-in-the-wild” that frames AI as an experimental policy technology and configures classrooms as live AI test sites.
Education leaders. The ways that education leaders have responded to AI can in many examples be characterized as a fear of missing out (FOMO). Perhaps the best example of this is the rapid signing of agreements by universities around the world with AI companies to provide chatbot applications to students and staff. The California State University system was one of the first institutions to sign a multimillion-dollar deal with OpenAI for access to ChatGPT-Edu, followed by many others.
The discursive framing of these agreements routinely invokes the language of innovation and progress, figuring the institutions involved as first-movers, sector leaders and bold exemplars for other institutions to follow and emulate. This constructs FOMO as a catalyst for educational leadership decisions, with institutional managers across the education sector apparently convinced of the necessity of joining the rush to integrate AI applications into teaching, learning and research processes and practices.
Even the US teachers’ union, the American Federation of Teachers, has an agreement with several tech companies to develop AI training programmes, which is intended to prevent educators from missing out on the opportunities of AI. As such, the sectoral leadership of teacher unions also appears to frame AI in terms of FOMO.
Educational advocates. Within educational institutions, whether at HE or school level, are many individuals and collectives who help construct AI in terms of benefits to learning and teaching. Educational AI advocacy takes several different forms and reflects the extent to which it is useful for its proponents. First, there are the academics who work in AIEd research or related fields like learning analytics, for whom AI is framed as a technology for advancing research into learning, and often simultaneously as a promising way to boost learning performances. This research varies considerably in quality and standard, with some deploying well-established methodologies for the measurement of learning gain, while others have been shown to lack rigorous quality control standards or even to violate scientific integrity.
Others frame AI in ways that support their ideas and theories about learning as a process that involves humans and technologies of many kinds. Theoretical ideas and concepts such as “cognitive assemblages” are here (selectively) mobilized in ways that appear to legitimize the deployment of products such as pedagogic chatbots. (Which is not to say that theorizing human-technological learning assemblages is a bad idea—in good hands this makes interesting contributions—but to think about what purposes these ideas are advanced to support, with what effects, and what they obscure or elide.)
There is a certain amount of intellectual entrepreneurship to such efforts. Pedagogical theories and philosophical concepts are put forward as ideal ways to think about and advocate for AI in education. As a journal editor, I can report we receive a lot of submissions like this, most of which claim to be offering a novel theory of human-AI synthesis and learning. Much worse variants of it can be found on social media platforms like LinkedIn. This is intellectual entrepreneurship in the sense of advancing one’s claims to expertise and leadership over how AI is framed and understood, and attempting to shape debates around those claims, often while dismissing “critical” voices as out of touch technically, pedagogically and intellectually.
Educational pragmatists. The language of “responsible” use of AI has become commonplace in discussions of AI in education. When OpenAI invokes it, one could be forgiven for some scepticism. However, there are many educators who are trying to carve out a pragmatic response to AI that treats it as an existing technology already in use by many students, and for which educational institutions therefore need coherent responses.
Here AI is framed as a problem—such as how it encourages cheating, “cognitive offloading” and so on—but also an opportunity if students and staff can use it responsibly and beneficially, with sensible safeguards, governance mechanisms, ethical frameworks and policies in place to regulate its use. We might characterize it as constructing AI as an object of practical governance. Even the Department for Education in England has moved towards such a framing, by releasing new “product safety standards” for AI in schools to protect students from associated harms, while continuing to promote its benefits for both teaching and learning.
Such approaches clearly construct and frame AI in a distinctive way, in terms of both educational problems and promise at the same time. The strongest versions of this pragmatic approach, for instance, frame AI as a significant threat to education, and particularly deleterious to students’ practices of studying, thinking and writing. They then propose various kinds of AI capacities or literacies that will enable educators and students to make uses of AI that support their thinking, extend their understanding, or help them advance and finesse arguments in written or multimedia form. Some groups of educators are working more creatively to build applications based on AI models to serve pedagogic purposes.
Others emphasize the importance of good governance of AI, reflected in institutional policies and guidance for staff and students that are intended to guard against misuse and encourage responsibility within defined ethical boundaries. Indeed, one unanticipated consequence of AI in education over the last few years has been how much time many of us now need to invest in contributing to policies, guidance and new forms of paperwork and compliance to ensure responsibility over AI use. Nonetheless, questions remain about how such institutional AI policies and guidelines might “normalize the institutional adoption of AI, bypassing serious critical scrutiny.”
Big AI corporations. The global technology companies that own and control AI applications have been relentless in promoting their technologies for education. These efforts go beyond making AI applications available to educational institutions. Some aim to become part of the “core infrastructure” of education (as the vice president of education at OpenAI put it), fully integrated into educational systems and workflows.
This framing of AI can be understood as infrastructure capture. AI is constructed here as an underlying “operating system” of technologies (interfaces, cloud computing, data, software services) that can underpin and power a multitude of educational operations and practices—and which tech companies rather than institutions themselves control. In the US, the White House has actively sought the involvement of big tech companies to deliver AI in education, as this infrastructure capture by tech firms has become embraced as a political objective.
A significant part of this infrastructure capture by “Big AI” firms is the creation of new dependencies of institutions on technology suppliers. This framing of AI sets the scene for these firms’ aims to secure long-term enterprise subscription contracts with schools, colleges and universities. These contracts and agreements are helped by the FOMO of education leaders, who have proven highly amenable to signing up for expensive deals that ultimately enable Big AI firms to become infrastructural to entire institutions and to the education sector as a whole.
While in Eynon and Young’s analysis it was clear how technology companies sought to profit from framing AI as an educationally-relevant technology, here the aim is not only profit from product sales but full integration into everyday institutional practices. This is at least partly a programme of habituating and accustoming educators as “friendly users” who will embed AI into their pedagogies, such that AI becomes as familiar a part of everyday educational infrastructure as libraries and learning management systems.
Edtech industry. Educational technology companies have sought to piggyback on the creation of AI by big tech firms by rapidly integrating AI functionality into their own products and platforms. This has become an almost obligatory move in the edtech industry, as investors have flocked to fund AI applications following a slump in edtech markets after Covid restrictions were lifted. Here, then, the edtech industry is framing and constructing AI as a post-pandemic market recovery technology.
Edtech companies have repositioned themselves as AI companies by integrating underlying models produced by AI companies into their products, shaping their marketing claims around AI, and promising to help customers become AI-powered institutions. This is animated, at least in part, by desires to regain the market momentum that carried some edtech companies to huge valuations during lockdowns, with market forecasts indicating promising potential from AI integrations. Companies that have been unable to exploit AI for post-pandemic market recovery—most notably Chegg—have suffered catastrophic collapses in value.
AIEd evangelizers. Both edtech and Big AI firms are supported by a new industry of AIEd evangelizers and proselytizers who have staked their careers on promoting the educational benefits of AI. These evangelists construct AI as a source for developing their reputational capital and career-building. They are prominently found on LinkedIn, publish Substack newsletters, produce podcasts, post YouTube explainers, and write columns as independent contributors for venues such as Forbes (not the main magazine, but the blog site).
AI is constructed by evangelizers in terms of simplified and reductionist models of learning such as “personalisation” or “mastery,” and this framing often draws very selectively on research such as Benjamin Bloom’s 1984 “2 sigma” paper on the benefits of one-to-one tutoring. The AIEd proselytizer also constructs AI affectively, mobilizing an enthusiastic discourse of disruption and transformation that seems designed to induce a sense of awe and inevitability in its intended audience.
Claims to expertise in AI in education are made in ways that are often intended to attract clients and customers, such as schools and universities, that are looking for consultancy, guidance and professional training to deploy AI in their institutions. In this way, evangelizers seek to build reputational capital and career prospects from their upbeat framing of AI. This further gives them the potential to receive invitations to become guest keynote speakers at industry events, as well as getting tapped by journalists for soundbites, being name-checked by other educational “thought leaders” with high LinkedIn follower counts, writing practitioner guidebooks, or picking up awards and plaudits for their consultancy work.
Education investors. Finally, educational technology investors frame AI in terms of the financial returns it is calculated to generate in the future. For investors, AI must be constructed as a kind of asset from which financial value can be generated. If AI is framed as an asset, with calculable future value, then investors can confidently fund products and services in the expectation of a big payday. This explains why educational technology investors are so keen to fund AI products and have made AI a central part of their investment theses and portfolios. They are assetizing AIEd as a source of continuous income.
Major philanthropic funders are heavily involved in investing in AI in education too. The Gates Foundation, for example, has committed lavish funding to schemes in the Global South in collaboration with ADQ, the sovereign investment fund of Abu Dhabi, as part of a $40m programme for seeding an investment fund:
The EdTech and AI Fund, a new multi-investor vehicle set to launch next year, will scale proven EdTech and AI solutions across sub-Saharan Africa. Jointly anchored by ADQ and the Gates Foundation, it will be the first fund dedicated to national-level expansion of interventions shown to improve foundational learning.
What matters to investors is that their assets keep generating value over the long term. As such, investors are particularly interested in “scalable” products and services that can reach very large numbers of customers and command ongoing fees for use. This is why the platform subscription model has proved so attractive to edtech investors, and why AI is framed and constructed as a new kind of asset that promises high return on investment as it is scaled to untapped markets, such as those in the Global South.
Moreover, these kinds of investments help to materialize particular ideas and images of what education is and should be in actual educational practices and products; they ultimately set the direction for AI in education through funding mechanisms, and then sediment it into the particular forms preferred by investors for their earnings potential. Framing AI as an asset is therefore highly consequential for the future of education itself.
Educational abolitionists. In response to many of the above ways AI is being constructed in education, some educators and commentators take an activist and campaigning stance against AI. Broadly, these can be understood as framing AI as an artefact that should be subject to concerted critique and resistance.
Those who take this position are sometimes characterized—positively—as representing a return of the Luddite movement. Historically, the Luddites sought to destroy technologies that threatened their labour and livelihoods; their efforts were politically motivated rather than animated by animosity to technology or innovation itself. A similar political motivation underpins some contemporary critical AI activists, who frame AI in similar ways to how the Luddites framed automated manufacturing machines, and argue educators should “teach like a Luddite.”
Not all educators and academics who write critically about AI subscribe to a Luddite or abolitionist position. Nonetheless, there is a fast-growing body of writing that argues AI is antithetical to education for various reasons: undermining students’ intellectual development and academic independence, imperilling the whole institution of education, and even endangering the knowledge practices and infrastructures that underpin democratic societies. For many, the problem of AI in education is not just the technology, but that it is owned and controlled by a very narrow class of “tech oligarchs” and companies that foreground their own reductionist models of learning and education as the basis for profit-making—or that are even politically hostile to the public institution of education altogether and would wish to privatize it for commercial gain and market share.
There is increasing recognition of these critical arguments in the media and press, which frame AI in education in terms of the harms, dangers, and risks it poses. Academic educators have likewise put AI into the context of long-running trends to privatize, surveil, control, monetize and exploit public education. They collectively call for a resistant stance on the part of educators to protect the sector from AI.
Reconstructing AI in education
These multiple constructions of AI in education matter because how different constituents imagine and enact AI will lead to material effects in educational settings. At the end of their original paper on how AI is constructed by different educational stakeholders, Rebecca Eynon and Erin Young stated that “the different frames of AI across academia, government, and industry lead to a range of social and educational implications that may not be entirely positive.” They suggested future work could extend the focus to other social groups “to explore how the constructions of AI lead to certain ways of working and operating over time.”
My notes here point to the other relevant social groups, and their shifting framings, that should now be the focus for further studies if we want to better understand the directions in which AI in education is being taken, often with educational implications that are not “entirely positive.”
But if AI in education is affected by how it is socially framed and constructed by relevant groups, then that opens it up to re-framing and re-construction. Personally, I would probably locate myself among the more critical, resistant educators characterized above, with an increasing—but cautious—appreciation of the pragmatic pedagogic and governance work that needs to be done now AI is already in our systems and institutions.
Both of these approaches re-frame and re-construct AI as what I previously described as a “public problem” rather than an “entirely positive” phenomenon. It has been encouraging to see European educator unions promoting the idea of teachers as “active shapers of policy and practice” around AI, which indicates how re-framing AI in education has become a collective union matter of concern. It still seems to me to be important to continue the work of re-framing AI in education before the dominant constructions outlined above become so settled and integrated into our institutions and practices that they prove impossible to deconstruct.