
Code Acts in Education: Oblongification of Education

According to Microsoft and Google, artificial intelligence is going to be fully integrated into teaching and learning in the very near future. In the space of just a few days, Google announced its LearnLM automated tutor running on the Gemini model, and Microsoft announced it was partnering with Khan Academy to make its Khanmigo tutorbot available for free to US schools by donating access to the Microsoft Azure OpenAI Service. But it remains very hard to know from these announcements what the integration of AI into classrooms will actually look like in practice.

The promotional videos released to support both announcements are not especially instructive. Google’s LearnLM promo video doesn’t show students interacting with the tutor at all, and the main message is about preserving the “human connection” of education.

The Microsoft promo for Khanmigo doesn’t really reveal the AI in action either, though it does feature a self-confessed “defeated” teacher watching the “miracle” bot automatically produce a lesson plan, with Khan Academy’s director of engineering suggesting it will remove some of the “stuff off of their plate to really actually humanize the classroom”.

You’re unlikely to see many more idealized representations of “humanized” school classrooms than these two videos, not least because you barely see any computers in them—except the odd glimpse of a laptop—and the AI stuff is practically invisible.

A better indication of what AI will look like when it hits schools is a promotional video released by Sal Khan just a week earlier, showcasing the OpenAI GPT-4o model’s capacity for math tutoring. Now, this isn’t a great representation of a typical school either – it’s Sal and his son in a softly-lit lounge with an OpenAI mug on the desk, not 30 students packed into 100 square metres of classroom.

But it is revealing of how entrepreneurs like Khan—and presumably the big tech boys at Microsoft and OpenAI who are supporting and enabling his bot—envisage AI being used in schools. Sal Khan’s son interacts with an iPad, at dad’s vocal prompting, to work out a mathematical problem, with the bot making encouraging noises and prompting Khan jr when he seems to be faltering.

Sal Khan’s video clearly illustrates how AI in classrooms means students in one-to-one dialogue with a portable device, a tablet or laptop, to work on very tightly constrained tasks. Khan himself has frequently talked up the idea of every student having a “Socratic tutor” (invoking Benjamin Bloom’s 1984 “2 sigma” finding that one-to-one tutoring raises achievement by two standard deviations, in a weird mashup of classical philosophy and debunked edu-stats).

Beyond the lofty Socratic rhetoric and cherrypicked evidence, however, it’s clearly a kind of pristine “showhome” demo rather than any indication whatsoever of how such an automated tutor could operate in the actual social context of a classroom. Marc Watkins sees it exemplifying a kind of automation of learning that is imagined by its promoters to be as “frictionless” as possible, based on a highly “transactional” view of learning.

“When you reduce education to a transactional relationship and start treating learning as a commodity”, Watkins argues, “you risk turning education into a customer-service problem for AI to solve instead of a public good for society”.

Oblong professors

AI tutors are a vision of the impending “oblongification” of education (if you can forgive yet another suffixification). In Kazuo Ishiguro’s novel Klara and the Sun, a minor feature is “screen professors” who deliver lessons via “oblongs”—instructors who appear on a child’s portable device to offer “oblong lessons” at a distance rather than in person, in a near future where home-schooling is the norm for many children.

The oblong professors of the novel are embodied educators—one is described as perspiring heavily—but I found myself thinking of Ishiguro’s depiction of oblong professors while watching the Khan/OpenAI demo. Here, AI tutors appear to students from the oblong of a tablet or laptop—they are automated oblong professors that are imagined as always-available personal pedagogues.

Characterizing them as oblongs, after Ishiguro, rightly robs them of their promotional rhetoric. Oblong tutors aren’t “magic” or a “miracle” but mathematically defined flat 2D objects that can only operate in the idealized environment of a quiet studio space where every student has an oblong to hand.     

The Khan demo also arrived at about the same time as Apple released a controversial advertisement for its new iPad. The ad, called “Crush!”, depicted all of human creativity and cultural production—musical instruments, books, art supplies, cameras—being squished by a giant industrial vice into the “thinnest” iPad that Apple has ever made. It’s a representation of the oblongification of culture itself, accurately (if inadvertently on Apple’s part) capturing the threat that many feel AI poses to any kind of cultural or knowledge production.

The ideal of the AI tutor is very similar to the Apple Crush! ad—it crushes teaching down into its flattest possible form, as a kind of transaction between the student and the tutor that can be modelled in a big computer. And enacted on an oblong.

The recent long paper released by Google DeepMind to support the LearnLM tutor similarly flattens teaching. The report aims to identify models of “good pedagogy” and use the relevant datasets for “fine-tuning” the Gemini-based tutor. Page 11 features a striking graphic, with the text caption:

Hypothetically all pedagogical behaviour can be visualised as a complex manifold lying within a high-dimensional space of all possible learning contexts (e.g. subject type, learner preferences) and pedagogical strategies and interventions.

The manifold image is a multidimensional (incomprehensible) representation of what it terms the “pedagogical value” of different “pedagogical behaviours”. In the same report the authors acknowledge that “we have not come even close to fully exploring the search space of optimal pedagogical strategies, let alone operationalising excellent pedagogy beyond the surface level into a prompt”.
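Stripped of the graphic, the claim amounts to something like the following, very loosely formalised (to be clear, this notation is my own gloss on the report’s framing, not anything Google DeepMind writes down):

$$V: \mathcal{C} \times \mathcal{S} \to \mathbb{R}, \qquad s^{*}(c) = \arg\max_{s \in \mathcal{S}} V(c, s)$$

where $\mathcal{C}$ is the space of “all possible learning contexts”, $\mathcal{S}$ the space of “pedagogical strategies and interventions”, $V$ the scalar “pedagogical value”, and $s^{*}(c)$ the “optimal” strategy for any given context that fine-tuning is supposed to find. Whatever else teaching is, on this framing it has to survive being crushed into the single number $V(c, s)$.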

Despite that, they then suggest using AI techniques of “fine-tuning” and “backpropagation to search the vast space of pedagogical possibilities” for “building high-quality gen AI tutors”. But this involved creating their own datasets, since little data on good pedagogy exists, so it’s not even a model based on actual teaching.

The “ultimate goal may not be the creation of a new pedagogical model”, the Google DeepMind team writes, “but to enable future versions of Gemini to excel at pedagogy under the right circumstances”.

Despite the surface complexity of the report and its manifold graphic of good pedagogy, it still represents the oblongification of teaching insofar as it seeks to crush “optimal pedagogy” into a measurable model that can then be reproduced by Gemini. This is a model built from a small set of datasets constructed by the Google DeepMind team itself, which it intends to place in schools, no doubt in competition with Khan/Microsoft/OpenAI.

But much about teaching and pedagogy remains outside of this flat model, and beyond the capacity of any tutor that can only interact with a student via the surface of an oblong device. Like Apple crushing culture into an iPad, Google has tried to crush good pedagogy into its device, except all it could find to put in the vice were some very limited datasets that it had created for itself.

Oblong students

As for the “humanizing” aspects of the AI tutorbots promoted by Microsoft and Google, it is worth considering what image of the “human” appears here. Their promo videos are full of humans, with a very purposeful emphasis on showing teachers interacting with students in physical classroom environments, unmediated by machines.

In a recent essay, Shannon Vallor has suggested that big AI companies and scientists have shifted conceptions of the “human” alongside their representations of “artificial general intelligence” (AGI). Vallor notes that OpenAI has recently redefined AGI as “highly autonomous systems that outperform humans at most economically valuable work”, which she argues “wipes anything that does not count as economically valuable work from the definition of intelligence”.

Such shifts, Vallor argues, not only narrow the definition of artificial intelligence, but reduce “the concept of human intelligence to what the markets will pay for”, treating humans as nothing more than “task machines executing computational scripts”. In the field of education, Vallor suggests, the “ideal of a humane process of moral and intellectual formation” is now overshadowed by AI imaginaries of “superhuman tutors” which position the student as “an underperforming machine”.  

Deficit assumptions of students as underperforming machines, which require prompting by AI to perform future economically valuable work, seem at odds with the rosy rhetoric of humanizing education with AI. AI tutors, as well as being oblongified teachers, also oblongify students—treating them as flattened-out, task-completing machines. Like iPads, but with fingers and eyes.

Oblong education

My metaphorical labouring of the “oblong” as a model of education is a fairly light way of trying to illuminate some of the limitations and constraints of current approaches to AI in education. Most obviously, despite the rhetoric of transformation, all these AI tutors really seem to promise is a one-to-one transactional model of learning where the student interacts with a device.

It’s an approach that might work OK in the staged setting of a promo video recording studio, but is likely to run up hard against the reality of busy classrooms.

AI tutors are also just models that, as the Google DeepMind report illuminates, are highly constrained because there’s simply not good enough data to build an “optimal pedagogy” engine. And that’s before you even start assessing how well a language model like Gemini performs.

These limitations and constraints are important to consider as Microsoft and Google—among many, many others—are now making concerted efforts to build flattened model teachers inside computers and then set them loose in classrooms at significant scale.

Ishiguro’s notion of the “oblong professor” is useful because it helps to deflate all of the magical thinking that accompanies AI in education. It’s hard to get excited about an oblong.

Currently, AI is being promoted as a solution to a huge range of complex issues in education. Sure, it might be useful for certain purposes, but a lot of the current promises could also lead to real problems that need serious consideration before autopedagogic tutors are activated in classrooms.

But AI tutors are simplified models of the very complex, situated work of pedagogy. We shouldn’t expect so much from oblongs.

 

This blog post has been shared by permission from the author.
Readers wishing to comment on the content are encouraged to do so via the link to the original post.
Find the original post here:

The views expressed by the blogger are not necessarily those of NEPC.

Ben Williamson

Ben Williamson is a Chancellor’s Fellow at the Centre for Research in Digital Education and the Edinburgh Futures Institute at the University of Edinburgh.