
First Fish Chronicles: EdTech Companies are Surveillance Companies. (Guest Post by Andrew Liddell)

How persuasively designed tech entered the classroom and why it matters.

A guest post by Andrew Liddell, co-founder of The EdTech Law Center.

Note from Emily: I’ve had the privilege of knowing Andy Liddell since 2019, when our aligned concerns about screen-based technology use in schools brought us together through our volunteer work with Fairplay. (Today, Andy and I serve as co-chairs of Fairplay’s Screens in Schools Action Network.) Over the years, Andy and I have worked together on several efforts to examine the efficacy, safety, and legality of EdTech product use in education. Andy was part of the group who addressed members of Parliament with me, and the following is his speech from that event on Monday, November 24, 2025.

Andrew Liddell, Emily Cherkin, and Julie Liddell

EdTech, as we currently know it, will invariably harm children and diminish their academic performance. That is because EdTech, as we currently know it, is designed to serve the business model of the modern internet, in which the needs of the human being on the other side of the screen are subordinated to the desires of those who would pay for information about that person.

We did not get here overnight, and we did not get here by accident.

The story of EdTech is the story of the internet, and how the internet transformed since the ’90s from a distributed, publicly funded means of sharing knowledge to a closed, corporately owned means of observing and manipulating human behavior. EdTech must be considered as an ecosystem, rather than as any one device or program in isolation. But one company, more than any other, is responsible for the transformation of the internet beginning in 2001. It’s the same company responsible for the transformation of the classroom that began a decade later.

In her 2019 book The Age of Surveillance Capitalism, Harvard professor Shoshana Zuboff explains how, around 2001, Google invented a new way of doing business online. Then as now, web companies collected reams of information about their users as they browsed the internet, but at the time these so-called metadata were considered a useless byproduct, dismissed as “digital exhaust.”

But like the clever engineers who discovered how to capture natural gas at an oil well rather than just burn it off, Google’s engineers figured out a way to refine this exhaust into a valuable commodity. Eventually, industry followed suit, such that every internet company, and many companies in the real world, benefit to at least some extent from the data trade. This includes EdTech companies.

Phones, computers, and other networked devices record everything a person does online and off. That information is fed into sophisticated algorithms, allowing companies to predict what that person is going to do next, influence their behavior to make those predictions more likely to come true, and then sell those predictions to others looking to benefit from that knowledge.

Under this new model, companies were able to offer products more cheaply, or even for free, because the human beings using those products paid with their time and attention. The real customers were those who paid money for the predictions and the power to influence users.

As Google grew, the surveillance business model began to crowd out other potential ways of making money online. Investment in alternative business models dried up. Success was now measured in user metrics, not revenue. And thus every company, to at least some degree, became a surveillance company. EdTech companies are no exception.

As this model took hold, companies ceased to design their products solely for the benefit of the human user and instead began to design to maximize the user’s time with the company’s product. User time is company money: the more time a user spends with a product, the more data is generated, the better the predictions become, and the greater the opportunity to reach the user. Design briefs shifted from “How do we make this product fun, interesting, and useful to the user, so that they choose to come back?” to “How do we coerce the user to spend as much time as possible with this product?”

Persuasive design was the answer.

Sitting at the intersection of behavioral psychology and computer science, persuasive design is the discipline of using a computer to influence a user’s behavior for the benefit of someone else. Its techniques are now familiar to us: the endless scroll, autoplay, algorithmic feeds that push content you don’t choose, daily streaks, and virtual rewards, all intentionally deployed to keep us fully immersed, scrolling and clicking, while we lose track of time and forget the thing we set out to do. It’s what Big Tech euphemistically calls “engagement,” and it is the only measure of success in the data economy. It’s “engagement,” Vegas-style. Across a large enough population, persuasive design techniques can influence the behavior of a meaningful number of users in favor of tech companies and their paying customers.

Collectively, tech companies employ thousands of experts in persuasive design. Their dream? To keep users on their platforms forever, pointing their attention toward whatever is most profitable.

The father of persuasive design, a Stanford professor named B.J. Fogg, observed in the late 1990s that a computer could be three things: a tool, a medium, or a persuasive agent. A tool: something necessary that I use to accomplish a task. A medium: a way to exchange information between me and other people. A persuasive agent: a way to manipulate my behavior for the benefit of someone else.

In 1998, this was radical stuff: computers were huge, the internet was slow, and hardly anyone was online. It was hard to imagine that a computer could be used to change human behavior, but Fogg’s experiments showed it was possible. Over the next decade, as computers shrank to the size of a smartphone, the internet sped up, and everyone logged on, persuasive design features became embedded in every internet platform that monetized user attention and information.

The mix of tool, medium, and persuasive agent changed as well.

In the beginning, computers were pure tools, room-sized behemoths for making massive-scale calculations for military and business planning. When I entered school in 1989, personal computers were a mix of tool and medium—think word processors and electronic encyclopedias—which nevertheless centered the needs of the user. But today, even though they are marketed only as tools and physically resemble the laptops of decades past, school computers are mostly persuasive agent, some medium, and very little tool.

That’s because all EdTech exists in the larger environment of attention-mining through the internet. A Google Chromebook is just an internet browser wrapped in a hardware package. When we give kids EdTech, we’re really just putting them online, insisting that they stay focused and make good choices in an environment intentionally designed to redirect their focus and make choices for them.

What students experience as distraction, tech companies experience as profit.

The EdTech revolution promised that if every student were given a Chromebook or an iPad, they would thrive. They’d become better digital citizens, develop “21st century skills,” and receive instruction specially tailored just for them. According to those who sold this vision, anything else would be inequitable: the future was digital, the world was going online, and we risked leaving out marginalized children unless they had the same access to the internet as everyone else did. Schools, they said, could level the playing field, ushering in a new age of knowledge and equality.

These gauzy promises are impossible to quantify. But by the traditional metrics used to evaluate educational success, the revolution has failed. Beginning in 2012, when so-called 1:1 programs began to go mainstream in the US, the steady gains that students had made in math, science, and reading over the previous four decades began to rapidly reverse. By 2025, scores had fallen to levels not seen since the ’70s, erasing more than fifty years of progress. Youth mental health is worse now than it has been in generations. And perhaps surprisingly, given that kids now spend most of their day on computers, they are even less computer literate today than they were just ten years ago.

When companies can make money by pointing user attention where it is most profitable for them and nothing is stopping them, they will—child welfare be damned. Sustained attention is a necessary condition for learning. When our kids are steeped in a digital environment that fractures and redirects their attention for the benefit of others, falling performance is not only unsurprising, it’s entirely expected.

Excerpt of Andy delivering this speech to Parliament, Nov. 24, 2025.


If you have a concern related to your child’s use of EdTech products in school, please contact Andy and Julie at EdTech Law Center.

To view my speech to Parliament, visit this link.

 

This blog post has been shared by permission from the author.
Readers wishing to comment on the content are encouraged to do so via the link to the original post.
Find the original post here:

The views expressed by the blogger are not necessarily those of NEPC.

Emily Cherkin

Emily Cherkin, M.Ed., is a nationally recognized consultant who takes a tech-intentional approach to addressing screentime challenges. A former middle school teac...
,

Andrew Liddell

Andrew Liddell is a career federal courts litigator and technology attorney, and the co-founder of the EdTech Law Center, a consumer protection law firm that help...