Code Act in Education: Automated Austerity Schooling
The Department for Education (DfE) in England has announced a £4 million plan to create a “content store” for education companies to train generative AI models. The project is intended to reduce teacher workload by enabling edtech companies to build applications for marking, planning, preparing materials, and routine admin. It is highly illustrative of how AI is now being channelled into schools through government-industry partnerships as a solution to education problems, and strongly indicates how AI will be promoted in English education under the new Labour government.
Most of the investment, allocated by the Department for Science, Innovation and Technology (DSIT) as part of its wider remit to deploy AI in the public sector, will be used to create the content store.
The £3m project will, according to the education department’s press release, “pool government documents including curriculum guidance, lesson plans and anonymised pupil assessments which will then be used by AI companies to train their tools so they generate accurate, high-quality content, like tailored, creative lesson plans and workbooks, that can be reliably used in schools.”
A further £1m “catalyst fund” will be awarded to education companies to use the store to build “an AI tool to help teachers specifically with feedback and marking.”
As Schools Week pointed out, none of the money is going to schools. Instead, the early education minister Stephen Morgan claims it will “allow us to safely harness the power of tech to make it work for our hard-working teachers, easing the pressures and workload burdens we know are facing the profession and freeing up time, allowing them to focus on face-to-face teaching.”
Whether £4m is enough to achieve those aims is debatable, although it appears to signify that the government would rather spend a few million on tech development than on other ways of addressing teacher workload.
Edtech solutionism
Putting aside for a moment the question of the reliability of language models for marking, feedback, planning and preparation — or the appropriateness of offloading core pedagogic tasks from professional judgment to language-processing technologies and edtech firms, or all the many other problems and hazards — the project exemplifies what the technology and politics critic Evgeny Morozov has termed “technological solutionism.”
Technological solutionism is the idea that technology can solve society’s most complex problems with maximum efficiency. This idea, Morozov argues, empowers tech companies to turn public problems into private ones — to produce “micro-solutions to macro-problems” — from which they often stand to gain financially.
One consequence of this is that many decisionmakers in positions of political authority may reach for technological solutions out of expedience — and the desire to be seen to be doing something — rather than addressing the causes of the problem they are trying to solve.
The DfE/DSIT project can be seen as edtech solutionism in this sense. Rather than addressing the long-running political problem of teacher workload — and its many causes, from sector underfunding to political undermining of the teaching profession — the government is proposing teachers use AI to achieve maximum efficiency in the pedagogic tasks of planning and preparation, marking and feedback. A similar approach was prototyped previously, when the DfE under the previous government funded Oak National Academy to produce an AI lesson planner.
The trend represented by these projects is towards automated austerity schooling.
Schools in the UK have experienced the effects of austerity politics and economics for almost 15 years. The consequences have been severe. According to UK Parliament records, the overall number of teachers in state schools has failed to keep pace with student numbers, resulting in an increase in the student to teacher ratio and some of the highest working hours for teachers in the world, compounded by continued failure to meet teacher recruitment targets.
The government is investing in a low-cost technological solution to that problem, but in a way that will also reproduce it. Automated grading will not solve school austerity; it will sustain it by obviating the need for political investment in the state schooling system.
Algorithmic Thatcherism
The UK’s finances are pretty parlous, and the government is warning the country of further economic pain to come, so it may seem naïve to imply we should simply allocate public funds to schools instead of investing them in AI. But failing to address the underlying problems of the state schooling sector is likely to lead to more technological solutions being layered onto pedagogic and administrative processes and practices with few regulatory safeguards, and to the continued automation of austerity schooling with unknown long-term effects.
The critic Dan McQuillan argued last year that efforts under the Conservative government to deploy AI across public services represented a form of “algorithmic Thatcherism.” “Real AI isn’t sci-fi but the precaritisation of jobs, the continued privatisation of everything and the erasure of actual social relations,” McQuillan argued. “AI is Thatcherism in computational form.”
Algorithmic Thatcherism prioritizes technical fixes for social structures and institutions that have themselves been eroded by privatization and austerity, privileging the transfer of control over public services to private tech firms. “The goal isn’t to ‘support’ teachers and healthcare workers,” concluded McQuillan, “but to plug the gaps with AI instead of with the desperately needed staff and resources.”
Automated Blairism
Despite a change of government, automated austerity schooling looks like being supercharged. The new Labour government is strongly influenced by the Tony Blair Institute, the former Prime Minister’s organization that has been termed a “McKinsey’s for world leaders.”
The TBI has heavily promoted the idea of AI in UK government, including education. Blair’s New Labour administration in the late 1990s and early 2000s was characterized by investment in public services, often through public-private partnerships and increasing private sector influence; a push for data-driven accountability measures in education; increased choice and competition; and significant boosting for technology in schools.
The TBI recently announced a vision for a “reimagined state” that would use “the opportunities technology presents – and AI in particular – to transform society, giving government the tools it needs to do more with less, make better public services a reality and free up capital for other priorities. … More, better, cheaper, faster.”
This is a vision the Labour party is now acting on hastily. According to a report in Politico, the “plans have tech firms — some of whom have partnerships with Blair’s institute — swarming, lured by the tantalizing prospect of millions of pounds of public contracts.”
Doing “more, better, cheaper, faster” with AI in government services represents the triumph of automated Blairism. It combines political technology solutionism with privatized influence over the public sector, all under the guidance of policy and technology consultants bankrolled by tech billionaires.
The TBI’s latest manifesto for AI governance in the UK was co-produced with Faculty AI, the private tech firm that previously worked with the Conservative government on, among other things, plans for AI in education. Ideals of putting AI into our governance institutions and processes are not intrusions from the commercial realm; they are already embedded in contemporary forms of computational political thinking in the UK.
Real-time environments
In keeping with this Blairist imaginary of technological transformation of the UK state, the TBI’s visionary prospectus for “tech-enabled” UK education invokes the promise of “personalized learning” — formerly a favoured education slogan under Blair’s New Labour administration in the early 2000s — and AI “to revolutionise the experience of pupils and teachers.”
Among five key priorities for technology in education, the TBI set the stage for the current DfE project on AI and teaching with its claim that “Technology and AI can provide new ways of organising the classroom and working days, and supporting marking, lesson planning and coordination.”
But the TBI vision goes beyond the immediate aims and constraints of the £4m DfE fund. It imagines “expanding the use of edtech” and giving all school children a “digital learner ID.” The digital ID would contain all their educational records and data, enabling real-time automated “analysis of students’ strengths and weaknesses,” which it argues would “simplify coordination between teachers and allow them to use AI tools to plan lessons that are engaging and challenging for all pupils.”
“Most importantly,” the TBI insists, “it would allow teachers to shift repetitive marking tasks to adaptive-learning systems. A move to a real-time data environment would mean AI could mark a class’s work in less than a second and provide personalised feedback. This wouldn’t replace teachers but instead free them up to do what they do best: teach.”
In keeping with the Blairist approach of old, the TBI envisages a greater role for the private sector in building the digital ID system; sharing student data with the edtech industry for innovation and development; and a wholly data-driven approach to school accountability by using the digital ID data for school performance measurement and enhancing parent choice.
The current £4m DfE project to use AI to help teachers, then, looks like just the first step in a possible longer program of technosolutionist policy — focused on turning schools into real-time data analysis and adaptive environments — that will sustain and reinforce automated austerity schooling as the new normal.
Whether the other TBI proposals on AI and “real-time” data analysis and adaptive technologies in education come to fruition is a matter for speculation just now (they’re not new ideas, and haven’t come to much yet despite many industry and investor efforts). But with the international clout of Blair, and the influence of the TBI in the fresh Labour government, the vision will certainly have considerable visibility and circulation within the government departments responsible for public services.
The edtech industry will certainly be queuing up for potential future contracts to participate in the proposed transformation of English state schooling. [Update: in the 9th September 2024 call for the competition, it was stated that: “You must demonstrate a credible and practical route to market, so your application must include a plan to commercialise your results” and “Only the successful applicants from this competition will be invited to potential future funding rounds.”]
AI fails
Going all-in on AI in education is a risky move. The recent case of the Los Angeles school district is cautionary. After $6m of public investment in an edtech company to build a chatbot for the entire district, the contractor folded only months later, potentially imperilling masses of student data that it had failed to adequately protect.
Observers suggested the technological vision was far too ambitious to be achievable, and that the company’s senior management had little idea of the realities of the schooling system. The CEO is said to have claimed that they were solving “all the problems” facing large schooling systems just before the company collapsed.
Public opinion may not support AI in schools either. When the new education minister Stephen Morgan announced the DfE project, he did so from South Korea, at a major event on AI in education, favourably comparing the two countries’ technological prowess and approach to educational innovation. But only a few days before, the Financial Times reported a significant public backlash against the South Korean government’s own plans for AI in education. It was only a few years ago that school students in London chanted “fuck the algorithm” to protest their grades being adjusted by an automated system. Failure to consider public opinion can be disastrous for big tech projects in the public sector.
More broadly, investment organizations and businesses are beginning to sense that the “AI bubble” of the last two years may be about to burst, with companies reporting lack of productivity benefits and investors spooked by weak financial returns. The prospects for AI-based edtech are unclear, but could equally be affected by investor flight and customer disinterest. Building public services on AI infrastructure despite industry volatility and public concern may not be the wisest allocation of public funds.
It is understandable that teachers may be using AI to prepare materials and to automate away administrative tasks under current conditions. But the risks of automated austerity schooling — eroding pedagogic autonomy, garbling information, threatening privacy and data protection, enhancing classroom surveillance, and far more — remain significant and underaddressed. Letting AI in unobstructed now will likely lead to layering further automation onto pedagogic and administrative practices, and to locking schools into technological processes that will be hard and costly to undo.
Rather than seeing AI as a public problem that requires deliberation and democratic oversight, it is now being pushed as a magical public-private partnership solution, while both old problems with school structures and the many new problems AI raises in public service provision remain neglected. The DfE’s AI content store project is a first concrete sign of the solutionism that looks set to characterize automated austerity schooling in England under the new government.
This blog post has been shared by permission from the author.
Readers wishing to comment on the content are encouraged to do so via the link to the original post.
The views expressed by the blogger are not necessarily those of NEPC.