
Code Acts in Education: Google Magic

Google has announced the latest in a sequence of upgrades to its popular online learning platform Google Classroom, and it’s all about injecting more artificial intelligence into schools. Classroom was popular with teachers all around the world before Covid-19, but has experienced enormous growth since 2020. Google’s announcement of new AI functionalities reveals its continued plans for expansion across the education sector, and beyond, and its technical and economic power to reshape learning through automation.


The company publicized its new AI feature for Classroom, called Practice Sets, in a marketing blog with an explanatory video on the Google for Education site. The basic functionality of Practice Sets is powered by ‘adaptive learning technology’, which it said ‘will give teachers the time and tools to better support their students — from more interactive lessons to faster and more personal feedback.’

The marketing video is instructive of the particular imaginary of schooling that Google is seeking to operationalize with Practice Sets. The adaptive learning technology, it claims, will provide ‘one-to-one feedback,’ providing a ‘round the clock tutor for each student.’ It ‘identifies relevant learning skills’ while students are using the feature within Classroom assignments, then ‘finds relevant hints and resources’ which appear as ‘recommended video and content cards for each learning skill.’ Practice Sets also features ‘autograding’ functionality, giving teachers ‘visibility into their [students’] thinking,’ and ‘automated insights’ into ‘class-wide trends’ to provide a ‘quick view of class performance,’ as well as ‘actionable insights’ they can use to improve their teaching.     

In an accompanying blogpost outlining its approach to adaptive learning technology, the head of Google for Education said ‘applying recent AI advances’ to adaptive learning ‘opens up a whole new set of possibilities to transform the future of school into a personal learning experience.’ He added, ‘Adaptive learning technology saves teachers time and provides data to help them understand students’ learning processes and patterns.’

These marketing materials are presented in a highly persuasive way. They tap into contemporary problems of schooling such as providing adequate support and feedback to students. They also promote the idea that education is about ‘learning skills,’ resonant with contemporary education policy preoccupations with skills and their value.

But the Google marketing is also highly technosolutionist. It proposes that datafied forms of surveillance and automation are ideally suited to solving the problems of schooling. Google positions Practice Sets as a kind of always-on, on-demand classroom assistant, able to execute auto-pedagogical interventions depending on a constant trace of a student’s data signals. It’s likely just one step in Google’s roll-out of AI in its education platforms. As one favourable industry review suggested, ‘Now that Google is adding AI capabilities to Google Classroom, expect the search giant to add even more automation to its online learning platform going forward.’

The appearance of Practice Sets exemplifies the ways automation is becoming an increasing presence in schools. Neil Selwyn, Thomas Hillman, Annika Bergviken Rensfeldt and Carlo Perrotta argue that automation is not materializing in the spectacular guise of robot educators, but as much more mundane services and feature upgrades.

Education — as with most areas of contemporary life — is becoming steadily infused with small acts of technology-based automation. These automations are intrinsic to the software, apps, systems, platforms and digital devices that pervade contemporary education. … [W]e are now teaching, studying and working in highly-automated and digitally directed educational environments.

While Practice Sets certainly fits the mould of a mundane instance of micro-automation, its significance lies in its potential scale. Indeed, Google claimed it could achieve the promise of adaptive learning ‘at unprecedented scale.’ This is because Google Classroom has penetrated hundreds of thousands of classrooms worldwide, reaching more than 150 million students.

Practice Sets is currently available only in beta, but will be rolled out in the coming months as a feature upgrade for premium Workspace customers. A year ago, Google said it was reframing Classroom as a learning management system, not just a platform for learning from home during Covid disruptions, and set out a ‘roadmap’ for its future development. Practice Sets represents the latest step on that roadmap, one that involves Classroom becoming the global learning management infrastructure for schools and increasing the presence of auto-pedagogical assistants in the school classroom, with the potential to intervene with millions of students.

Google’s ambitions for scalability extend beyond Classroom alone, however. It is also positioning itself as a ‘learning company,’ dedicated to ‘Helping everyone in the world learn anything in the world’, whether at school, at work, or in everyday life itself. It’s not a huge jump to assume that Google’s expansive ambitions as a learning company will gradually see it extend automation well beyond the school.


What is striking about the Practice Sets announcement is the way it presents adaptive learning technology. For teachers, it will ‘supercharge teaching content,’ and ‘when students receive individualized, in-the-moment support, the results can be magical.’ Early users of the beta service are reported to have described Practice Sets as ‘Google magic.’ This emphasis on magical results and Google magic is typical of technology marketing discourse. (As an aside, Google Magic is also the name of its Gmail spam filter.)

As MC Elish and danah boyd have previously argued, ‘magic’ is frequently invoked in tech marketing materials, especially in relation to AI, while minimizing attention to the methods, human labour and resources required to produce a particular effect:

When AI proponents and businesses produce artefacts and performances to trigger cultural imaginaries in their effort to generate a market and research framework for the advancement of AI, they are justifying and enabling a cultural logic that prioritizes what is technically feasible over what is socially desirable. … [W]hat’s typically at stake in the hype surrounding Big Data and AI is not the pursuit of knowledge, but the potential to achieve a performative celebration of snake oil.

AI is of course not magic. The discourse and imaginary of magical AI obscures the complex social, economic, technical and political efforts involved in its development, hides its internal mechanisms, and disguises the wider effects of such systems.

Invoking ‘Google magic’ therefore erases the underlying business interests of Google in education and the internal dynamics of its data-driven AI developments. Google has a distinctive business model, which is imprinted on everything it does, including educational platforms like Classroom. It’s a model that is laser-focused on generating value from data and applying automation and AI across all its activities.

And it’s a controversial business model to apply to education. Dan Krutka, Ryan Smits and Troy Willhelm argue that ‘Google extracts personal data from students, skirts laws intended to protect them, targets them for profits, obfuscates the company’s intent in their Terms of Service, recommends harmful information, and distorts students’ knowledge.’

Google magic also obscures the underlying technical functionality of Practice Sets. One clue appears in the marketing materials: ‘With recent AI advances in language models and video understanding, we can now apply adaptive learning technology to almost any type of class assignment or lesson.’ At last year’s Google developer conference, CEO Sundar Pichai trailed the company’s plans to employ language models in education. These ambitions would appear to extend far beyond the functionality of Practice Sets, to include artificial conversational agents capable of conducting real-time dialogue with students.

The presentation was a glossy marketing event, but behind the scenes huge Google teams are working on language models with proposed applications in sectors like education. So-called Google magic is actually the complex social, technical and economic accomplishment of lavishly-funded R&D teams who are exploring how to do things such as ‘quantify factuality’ in order to automatically recommend students information from trusted sources for their studies.

The idea of Google magic also hides the internal political conflicts that have characterized Google’s recent efforts to build large language models. Language models are among the most contentious, ethically problematic AI projects on which Google has pinned its future business prospects. They are trained on huge datasets compiled from the web, and can therefore contain social biases and amplify discrimination; they are also extremely energy-intensive and potentially environmentally damaging. Google, however, has been mired in controversy since firing two of its prominent AI ethics researchers for highlighting these problems.

Google magic can obscure other things too. It evades discussions about whether global technology companies should possess power and authority to reshape the work of teachers and the experiences of students, at unprecedented scale. It turns political problems about schooling into seemingly simple technological solutions that tech companies are best placed to administer.

Invoking Google magic places out of sight any debate about the social or collective purposes of education, assuming that individualized tuition and efficient knowledge acquisition supported by automation is the ideal pedagogic mode. It also forecloses the possibility of other forms of information retrieval and discovery, and asserts Google as an authoritative cultural gatekeeper of knowledge.

Google magic distracts attention from the complex privacy policies and user agreements that determine how Google can extract and use the vast troves of user data generated from every click on the interface. It disguises the complex thickets of algorithms that make decisions about individual students, characterizing them simply as innocent and objective ways of delivering ‘factuality’ to students.

The idea of AI technology as magic also hides the fact that automation constitutes a profound experiment in how schools operate, with potentially serious effects on teacher labour and student learning and development that remain unknown. It may even draw a curtain across persistent questions about the legality of its commercial data mining operations in education.


Google has of course produced a lot of useful technical services, many of which may be welcome in schools. Whether AI adaptive learning will actually work in schools, or whether teachers will want to use it, remains to be seen. A sceptical perspective is best taken towards technomagical marketing claims about as-yet unproven technology interventions.

But a critical perspective is necessary too, one that draws attention to the political role of Google as a significant and growing source of influence over schooling. It has extraordinary power to affect what happens in classrooms, to monitor children through the interface, to intervene in the labour of teachers, to generate class performance data, to select content and knowledge, and to introduce AI and automation into schools through mundane feature upgrades. It markets its services as ‘magical’ but they are really technical, social, economic, and political interventions into education systems at enormous scale around the globe.

As my colleague John Potter pointed out in response to the Google marketing, magic often refers to the ‘skill of misdirection,’ a certain sleight of hand that signals to the audience: ‘Look at this magical stuff over here… (but don’t look at what’s happening over there).’ But ‘over there’ is precisely where educational attention needs to be directed, at the technical things, even the boring things like privacy policies and user agreements, that are reshaping teaching and learning in schools. Google may be opening up exciting new directions for schooling, but it may also be misdirecting education towards a future of ever-increasing automation and corporate control of the classroom.

This blog post has been shared by permission from the author.
Readers wishing to comment on the content are encouraged to do so via the link to the original post.
Find the original post here:

The views expressed by the blogger are not necessarily those of NEPC.

Ben Williamson

Ben Williamson is a Chancellor’s Fellow at the Centre for Research in Digital Education and the Edinburgh Futures Institute at the University of Edinburgh.