
Code Acts in Education: EdTech Resistance

Prepared for EdTech KnowHow conference, Stavanger, Norway, 26 September 2019

One month ago I set a Twitter mob against a team of young researchers working on a new education technology prototype at a major university in the United States.

Here’s how I did it.

One of my current research interests is in how technical advances in brain science and human genetics are leading to new ways of understanding learning and education. So I’m gathering a lot of material together from companies and from research labs to scope out the state of the art in the science of neurotechnology and bioinformatics.

[Image: The MIT Media Lab project AttentivU]

That’s how I came across this prototype MIT Media Lab project called AttentivU. It’s building a pair of wearable, ‘socially acceptable’ glasses with in-built electroencephalogram (EEG) detectors that can ‘sense’ from brainwave signals escaping the skull when a student is losing attention. The glasses then emit ‘bone-conducted sound’ to ‘nudge’ the student to pay attention.

Having written at length about the potential effects of neurotechnology in education before, I thought these glasses seemed potentially very concerning, and definitely worth posting to Twitter.

[Image: Tweet on AttentivU triggered accusations of ‘eugenics’ and ‘torture’]

‘Check out these cool brain-reading glasses from MIT Media Lab’, I tweeted. In retrospect I should have put scare quotes around ‘cool’, even though I thought they self-evidently were not–I mean, look at them!

By that evening my Twitter notifications were buzzing constantly with outrage. These glasses were not ‘cool’, but a ‘torture device’ from Stanley Kubrick’s film of A Clockwork Orange, especially for neurodiverse populations and young people labelled with attention deficit and hyperactivity disorder.

By the next morning, I was being called a ‘eugenicist’ and ‘human garbage’. Some people thought it was my project; most thought I was amplifying it. Doubtless the sense of outrage was pumped high because of the Media Lab’s association with Jeffrey Epstein.

Others recognized it was in fact the project of a team of young postdoctoral researchers. Two days after I posted the original tweet I started seeing a steady stream of tweets from them clarifying its aims and scope. Twitter outrage had found them and demanded they shut down the project.

[Image: The ‘techlash’ is reflected in critical books on the social and political consequences of technology]

The ‘techlash’

Now I still don’t like these brain goggles very much—the criticisms on Twitter reflected my own critical views about targeting students for automated ‘nudges’ to the skull based on simplified brainwave readings. I don’t much like the way Twitter turned this into a ‘torture device’ either; I think we need to read these innovations more closely and develop more careful critiques.

But it has been educational for me to be on the harsh end of what I see as a recent and emerging trend: edtech resistance and pushback. Twitter outrage is its most extreme expression, but there are also good reasons to pay attention to the broader pushback.

Edtech pushback is our sector’s symptom of a much wider public backlash against ‘big tech’—or a ‘techlash’ as some are calling it.

By now we recognize how exploitation of user data for targeted advertising, online misinformation, social media bots, ‘deepfakes’ and so on, have got us to a situation where, some argue, democracy has been hacked and modern capitalism has come to depend on surveillance and behaviour control.

The techlash is a response to these data controversies from the public, the media, the charity sector, and even some government ministers and policymakers. In some cases it is even leading to large commercial fines, government committee summonses, and calls for much greater tech regulation.

[Image: News media has begun to report critically on edtech]

Edtech resistance, or perhaps an ‘edtechlash’, is also gathering strength. Anyone developing, researching, or teaching with educational technologies should be paying attention to it–not least because news journalists are increasingly reporting on controversial edtech-related stories.

There are some common resistance themes emerging—such as edtech privacy, security, and data protection; concerns over artificial intelligence in schools; and the role of multinational commercial companies.

In this talk I want to raise some specific examples of edtech resistance and take from these a few key lessons. As a university researcher I’m simply trying to document the steady build-up of edtech pushback. For those in the edtech industry, this resistance should be informing your thinking as you look to the next decade of product development; for educators and decision-makers, these tensions should be kept in mind when thinking about the kinds of education systems and practices you want to develop for the future.

EdTech activists

First up, I think anyone involved in making or using edtech needs to be paying close attention to a growing number of ‘anti-edtech activists’—journalists, educators, parents, or simply concerned members of the public who feel moved to challenge the current direction of edtech development.

These activists are doing their own forensic research into edtech, its links to commercial interests, and the critical issues it raises regarding privacy, private sector influence over public education, and the challenges that are emerging for educators and students. The work of Audrey Watters at Hack Education is exemplary on these points.

[Image: Audrey Watters’ Hack Education site is a popular source of critical edtech commentary]

These anti-edtech activists are actively disseminating their arguments via blogging and social media, and gaining public attention. Charitable groups focused on children’s digital rights are moving to a more activist mode regarding edtech too. DefendDigitalMe and the 5Rights group in the UK are already exploring the legal, ethical and regulatory challenges of technologies that collect and process student data.

The lesson we can take here is that activists are increasingly expressing outrage over private exploitation of public education and students’ personal data. Look what happened when data privacy activists got organized against the Gates Foundation’s $100 million inBloom platform for educational data-sharing, learning apps and curricula in 2013: it collapsed within a year of launch amid growing public alarm over personal data exploitation and misuse. Monica Bulger and colleagues commented:

The beginnings of a national awareness of the volume of personal data generated by everyday use of credit cards, digital devices, and the internet were coupled with emerging fears and uncertainty. The inBloom initiative also contended with a history of school data used as punitive measures of education reform rather than constructive resources for teachers and students. InBloom therefore served as an unfortunate test case for emerging concerns about data privacy coupled with entrenched suspicion of education data and reform.

Diversity challenges

Then there’s the FemEdTech movement, mostly consisting of academics, edtech software developers, and STEM education ambassadors who, inspired by feminist theory and activism, are pushing for greater representation and involvement of women and other excluded and disadvantaged groups in both the development of and critical scholarship on educational technologies.

[Image: The FemEdTech network challenges the lack of diversity in the edtech sector]

The FemEdTech network is:

alive to the specific ways that technology and education are gendered, and to how injustices and inequalities play out in these spaces (which are also industries, corporations, and institutions). We also want to celebrate and extend the opportunities offered by education in/and/with technology – to women, and to all people who might otherwise be disadvantaged or excluded.

The lesson I take from FemEdTech is that industry needs to act on the lack of diversity in the edtech sector, and educators need to be more aware of the potentially ‘gendered’ and ‘racialized’ nature of edtech software. We already know that education serves to reproduce inequalities and disadvantages of many kinds; the risk is that edtech worsens them. It might be claimed, for example, that the model of ‘personalized learning’ favoured by the edtech sector reflects the mythology of the self-taught white male programmer. The introduction of computer science and programming in the National Curriculum in England has failed to appeal to girls or children from poorer backgrounds, with the result that England now has fewer girls than ever studying a computer-based subject, which is not a great way to build up diversity in STEM areas or in the technology workforce.

Student protests

Probably the most publicized act of edtech resistance in the last year was the series of student walkouts and parent protests at the Mark Zuckerberg-funded Summit Schools charter chain in the US. Personalized learning through adaptive technology is at the core of the Summit approach, using a platform built with engineering assistance from Facebook.

As students from New York wrote in a public letter to Zuckerberg, they were deeply concerned about the exploitation of their personal data and the possibility of it being shared with third parties, but they also rejected the model of computer-based, individualized learning which, they claimed, was boring and easy to cheat, failed to prepare them for assessments, and eliminated the ‘human interaction, teacher support, and discussion and debate with our peers that we need in order to improve our critical thinking’.

[Image: Student and parent protests about Summit Schools generated newspaper headlines]

There were controversies too about the curriculum content in the Summit Personalized Learning Platform—students in some cases were being pointed to the UK tabloid the Daily Mail, which reportedly ‘showed racy ads with bikini-clad women’. Reports surfaced of Summit curriculum developers working at such speed to create content for the platform that they barely had time to check the adequacy of the sources.

Our lesson from this is about students’ distrust of engineering solutions to schooling. Personalized learning appears as an ‘efficiency’ model of education, using opaque technologies to streamline students’ progress through school while shaving off all the interactions and space for thinking that students need to engage meaningfully with knowledge and develop lasting understanding. Zuckerberg is now providing the funding to enable the Summit platform to roll out across US schools, through the new non-profit Teachers, Learning & Partners in Education. For educators this raises important questions about whether we want technology-based models like this at the centre of our curricula and pedagogies, because this is what’s coming, and it’s being pushed hard by a tech sector with huge financial resources to help it succeed.

Investor scepticism

Edtech resistance comes not only from activists and students, but sometimes from within its own industry.

Many of you will know AltSchool, the ‘startup charter school chain’ launched by ex-Googler Max Ventilla, which quickly attracted almost $174 million in venture capital funding and then almost as quickly ‘pivoted’ to reveal that its main business model was not running schools after all, but product-testing a personalized learning platform for release to the wider schools market.

There has been strong resistance to AltSchool throughout its short lifecycle. It has been seen as a template for ‘surveillance schooling’, treating its young students as ‘guinea pigs’ in a live personalized learning experiment. It even called its key sites ‘lab schools’.

[Image: A critical tweet triggered a venture capitalist backlash]

Earlier this summer, though, resistance came from venture capital edtech investor Jason Palmer, who claimed AltSchool had always been a terrible idea, especially as, he tweeted, ‘edtech is all about partnering w/existing districts, schools and educators (not just “product”)’.

And that tweet, in turn, attracted a torrent of criticism from other technology investors who accused Palmer of ‘toxic behaviour’ and made fairly aggressive threats about his future prospects in edtech investment.

When the New York Times ran a piece on this a couple of weeks ago, it focused on the ‘Silicon Valley positivity machine’—a kind of secret code of upbeat marketing that refuses to engage publicly with failure, or even in critical debates about the social consequences of technical innovation. AltSchool has now announced it is rebranding as Altitude and will sell to schools the personalized learning product it has been engineering and testing in its experimental lab school settings for several years.

If there is any lesson to learn here, it’s not just that edtech is about partnerships rather than products. It’s that the edtech industry needs to wake up to critical debate about its ideas and products, and that educators and activists need to keep pushing back against bad ideas and drawing attention to the evidence from failed experiments; otherwise they’ll just happen again, under a different brand name. As Audrey Watters commented:

Jason Palmer was absolutely right. AltSchool was a terrible idea. It was obviously a bad investment. Its founder had no idea how to design or run a school. He had no experience in education — just connections to a powerful network of investors who similarly had no damn clue and wouldn’t have known the right questions to ask if someone printed them out in cheery, bubble-balloon lettering. It’s offensive that AltSchool raised almost $175 million.

Without this kind of critical engagement, and proper reflection on failure and bad ideas, the danger is that even more intrusive forms of surveillance and monitoring, powered by the techno-optimism and hype of the tech sector positivity machine, become normalized and rolled out across schools and colleges.

Regulation

And of course data-based surveillance has become perhaps the most critical issue in contemporary education technology. One high-profile case is the high school in Sweden that was fined under GDPR rules just last month for the unlawful introduction of facial recognition to document student attendance.

The high school board claimed that the data was consensually collected, but the Swedish Data Protection Authority found that it was still unlawful to gather and process the students’ biometric data ‘given the clear imbalance between the data subject and the controller’.

[Image: Sweden has issued a major GDPR fine for trials of facial recognition in a school]

Sweden has now moved to ban facial recognition in education outright, and the case is catalyzing efforts within the European Union to impose ‘strict limits on the use of facial recognition technology in an attempt to stamp out creeping public surveillance of European citizens … as part of an overhaul in the way Europe regulates artificial intelligence’.

This example shows us growing legal and regulatory resistance to intrusive and invasive surveillance. In fact, with its core emphasis on power imbalances regarding ‘consent’, the case could raise wider debates about students’ rights to ‘opt out’ of the very technological systems that their schools and colleges now depend on. It also raises the issue that schools themselves might bear the financial burden of GDPR fines if the technologies they buy breach its rules.

Flawed algorithms

Students’, educators’ and regulators’ critical resistance to edtech is likely to grow as we learn more about the ways it works, how it treats data, and in some cases how dysfunctional it is.

Just this summer, an investigation of automated essay-grading technology found it disproportionately discriminates against certain groups of students. This is because:

Essay-scoring engines don’t actually analyze the quality of writing. They’re trained on sets of hundreds of example essays to recognize patterns that correlate with higher or lower human-assigned grades. They then predict what score a human would assign an essay, based on those patterns.

The developers of the software in question openly acknowledged that this has been a problem throughout 20 years of product development. Each time they tweak it, different groups of students end up disadvantaged. There is systematic and irremediable bias in the essay-scoring software.

The examination of essay-scoring engines also found that these technologies give good grades to ‘well-structured gibberish’. The algorithms cannot distinguish between genuine student insight and meaningless sentences strung together in ways that resemble well-written English.
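
To make that mechanism concrete, here is a minimal sketch in Python of how a purely pattern-based scorer behaves. Everything in it (the features, the training essays, the grades) is hypothetical and invented for illustration; it is not any vendor’s actual engine. The point is simply that a model fitted to surface statistics will happily reward fluent-looking nonsense.

```python
# A minimal, illustrative sketch of pattern-based essay scoring.
# All features, essays and grades here are hypothetical; this is not any
# vendor's actual engine. The model never assesses meaning: it only fits
# surface statistics to human-assigned grades and extrapolates from them.
import numpy as np

def surface_features(essay):
    """Crude surface statistics standing in for the 'patterns' such engines learn."""
    words = essay.split()
    sentences = [s for s in essay.replace("!", ".").replace("?", ".").split(".") if s.strip()]
    return [len(words), len(sentences)]

# Hypothetical training set: essays paired with grades a human marker assigned.
training = [
    ("Short answer. Few ideas.", 2.0),
    ("A longer essay with several sentences. It develops an argument and uses "
     "varied vocabulary, connecting claims to evidence throughout.", 4.0),
    ("An extended, well structured response containing many sentences. Each "
     "paragraph elaborates the thesis. Transitions are frequent. The conclusion "
     "restates the central argument clearly and precisely.", 5.0),
]

# Fit grade ~ surface features with ordinary least squares.
X = np.array([surface_features(text) + [1.0] for text, _ in training])
y = np.array([grade for _, grade in training])
weights, *_ = np.linalg.lstsq(X, y, rcond=None)

def predict_grade(essay):
    return float(np.array(surface_features(essay) + [1.0]) @ weights)

# 'Well-structured gibberish': long and fluent-looking, but meaningless.
gibberish = ("The luminous paradigm consequently orchestrates manifold "
             "trajectories. Therefore the salient framework recapitulates an "
             "emergent synthesis. Moreover the discourse triangulates robust "
             "modalities across the epistemic continuum of praxis.")
print(predict_grade(gibberish))  # scores around 5, like a top essay, despite saying nothing
```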

Increasingly, journalists are on to edtech, and are feeding the growing sense of frustration and resistance by demonstrating that these technologies don’t even do what they claim to do fairly. These investigations teach us to be dubious of the claims of algorithmic accuracy used to promote new AI-based edtech products. We shouldn’t presume algorithms do a better job than educators, but should insist on forensic, independent and impartial studies of their intended outcomes and unintended effects. Cases like this force educators to confront new technologies with scepticism. In the name of educational innovation, or efficiency, are we ceding responsibility to algorithms that neither care nor even do their job effectively?

Political algorithms

But edtech flaws and resistance can get even more serious.

Five years ago, the UK government’s Home Office launched an investigation into claims of systematic cheating in English language tests for international students. The assessment developer, Educational Testing Service (ETS), was called in to perform a biometric voice-matching analysis of 66,500 spoken test recordings to determine whether candidates had cheated by getting someone else to take the test for them.

Its finding was that 58% had cheated by employing a proxy test-taker, and a further 39% were questionable. Over 33,000 students had their visas revoked. More than 2,500 have been forcibly deported, while another 7,000 left voluntarily after being told they faced detention and removal if they stayed. In all, it is believed that over 10,000 students left the country as a result of the test.

But a later investigation found the voice-matching algorithm may have been wrong in up to 20% of cases. Thousands of international students were wrongly accused of cheating, wrongly had their visas revoked, and were wrongly ordered to leave the country. Multiple news outlets picked up the story as evidence of problematic governmental reliance on algorithmic systems.
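
To get a sense of the scale those figures imply, here is a rough back-of-the-envelope calculation using only the numbers reported above, and assuming (purely for illustration) that the 20% error estimate applies across everyone classified as having cheated:

```python
# Back-of-the-envelope arithmetic from the reported figures. The 20% figure is
# an upper estimate of the voice-matching error rate, so this illustrates scale
# only; it is not an official count of wrongful accusations.
recordings_analysed = 66_500
flagged_as_cheating = round(0.58 * recordings_analysed)    # classed as cheating
flagged_questionable = round(0.39 * recordings_analysed)   # classed as questionable
possible_error_rate = 0.20                                 # up to 20% of matches may be wrong

potentially_wrongly_accused = round(flagged_as_cheating * possible_error_rate)
print(flagged_as_cheating, flagged_questionable, potentially_wrongly_accused)
# -> 38570 25935 7714: potentially thousands of people wrongly accused
```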

In response to the emerging scandal, the National Audit Office, the UK’s independent public spending watchdog, conducted an investigation earlier this year, and the whole fiasco has become a political scandal: thousands of court cases are now proceeding against the Home Office. 12,500 appeals have already been heard and over 3,000 have been won. According to the National Audit Office investigation:

It is difficult to estimate accurately how many innocent people may have been wrongly identified as cheating. Voice recognition technology is new, and it had not been used before with TOEIC tests. The degree of error is difficult to determine accurately because there was no piloting or control group established for TOEIC tests.

Since then a parliamentary inquiry has even been launched. One properly shocking part of this is that the Home Office has spent over £21 million dealing with the fallout, while ETS has made an estimated £11.4 million, and only £1.6 million has been reclaimed for the taxpayer. More shocking still, individual students themselves are reported to be paying many thousands of pounds to have their appeals heard, with many others unable to afford it. The inquiry reported its findings last week, heavily criticizing the Home Office for rushing ‘to penalise students without establishing whether ETS was involved in fraud or if it had reliable evidence of people cheating’.

What lessons can we draw from this? This is not just a case of resistance to educational technologies. It is a shocking example of how untested software can have huge consequences for people’s lives. It’s about how those consequences can lead to court cases with massive cost implications for individuals. It’s about the cost to the public of government outsourcing to private contractors. It’s about the outsourcing of human expertise and sensitivity to the mechanical efficiency of algorithms.

It also teaches us that technology is not neutral. The deployment of this voice matching software was loaded with politics—the voice matching algorithm reproduced UK government ‘hostile environment’ policy by efficiently optimizing the deportation process.

Body contact

So finally, what can we learn from the edtech pushback I experienced first-hand on Twitter in relation to the Media Lab’s brain glasses?

Here we can see how proposed experiments on students’ bodies and brains can generate extremely strong reactions. In the last few years, interest in brain science, wearable biometrics and even genetic testing in education has grown substantially.

Experiments are underway with wearable neural interfaces to detect brainwave signals of student attention, and studies are being conducted in behavioural genetics that could in coming years bring about the possibility of DNA testing young children for future achievement, attainment and intelligence.

The potential here, according to behavioural geneticists, is to personalize education around a student’s genetic scores and associated predictions. Maybe consumer genetics companies, like 23andMe, will move to create a bio-edtech market, just as educational neuroscience companies are already creating a new neuro-edtech market. One educational neurotechnology company, BrainCo, just announced a partnership with the edtech company Progrentis on a ‘fully neuro-optimized education platform’ combining brainwave reading with personalized, adaptive learning technologies.

[Image: BrainCo and Progrentis have partnered to create a ‘neuro-optimised education platform’]

We’re moving into a deeply controversial and ethically grey area here. No wonder Twitter exploded on me with accusations of eugenics and forcible mental manipulation when I shared MIT Media Lab’s brain glasses.

These new educational developments in brain technologies and genetics raise huge ethical challenges that must be resolved before such innovations are rolled out, if they are not simply to be stopped in their tracks, as Sweden has moved to do with facial recognition in education. Bioethicists and scientists themselves are increasingly calling for new human rights amendments to protect the human body and the brain from intrusion and extraction in all but necessary medical cases. The UK’s Royal Society has just launched a report on the need for regulation of neurotechnology as developments in neural interfaces accelerate, with unknown consequences for human life itself. Yet in education we’re not having these discussions at all, and the result is more and more projects like MIT’s brain glasses, which treat education as an experimental playground for all sorts of potentially outrageous technological innovations.

Conclusion

So, there is a rising wave of edtech resistance from a wide variety of perspectives—from activists to students, journalists to regulators, and legal experts to ethicists.

If these are signals of an emerging edtechlash, then educators, decision-makers and the edtech industry would benefit from engaging with the key issues that are now emerging, namely that:

  • private sector influence and outsourcing are perceived to be detrimental to public education
  • lack of edtech diversity may reproduce the pedagogic assumptions of engineers
  • students distrust engineering solutions and continue to place their trust in human interactions as central to education
  • there may be bad science behind positive industry and investor PR
  • new data protection regulations question how easily student ‘consent’ can be assumed when the balance of power is unequal
  • algorithmic ‘accuracy’ is being exposed as deeply flawed and full of biases
  • algorithmic flaws can lead to devastating consequences at huge costs to individuals, the public, and institutions
  • increasingly invasive surveillance proposals raise new ethical and human rights issues that are likely to be acted upon in coming years.

We should not and cannot ignore these tensions and challenges. They are early signals of resistance ahead for edtech which need to be engaged with before they turn into public outrage. By paying attention to and acting on edtech resistance, it may be possible to create education systems, curricula and practices that are fair and trustworthy. It is important not to allow edtech resistance to metamorphose into resistance to education itself.

This blog post has been shared by permission from the author.

The views expressed by the blogger are not necessarily those of NEPC.

Ben Williamson

Ben Williamson is a Chancellor’s Fellow at the Centre for Research in Digital Education and the Edinburgh Futures Institute at the University of Edinburgh.