
Larry Cuban on School Reform and Classroom Practice: Will Chatbots Teach Your Children? (Guest Post by Natasha Singer)

Isn’t that a catchy headline?

The New York Times recently published an article with that title, one that may well have caught the attention of readers whose job is to teach children and youth. In a world where Artificial Intelligence seemingly reigns, the idea that chatbots (introduced in November 2022) might end up tutoring children at home, with public school and college teachers losing their jobs, depends on such eye-catching headlines.

Moreover, the article suggests that a dream educational policymakers and researchers have pursued for well over a century, customizing teaching to each student's way of learning, is on the cusp of being fulfilled. Philosophers, educational leaders, and many teachers have long shared this dream of individualized instruction. To many technophiles, chatbots are the tools that will finally make it happen. Or at least, that is what Silicon Valley entrepreneur Sal Khan, CEO of Khan Academy, predicted recently.

Maybe. But I think not.

No device, be it a chatbot or a laptop computer, will “revolutionize” schooling or ease teachers out of U.S. classrooms, for one simple reason: public schools seek to achieve many goals beyond students acquiring knowledge and skills (see here and here). And no Silicon Valley entrepreneur’s hype—for that is what it is—will alter that.

Sal Khan, the chief executive of Khan Academy, gave a rousing TED Talk last spring in which he predicted that A.I. chatbots would soon revolutionize education.

“We’re at the cusp of using A.I. for probably the biggest positive transformation that education has ever seen,” Mr. Khan, whose nonprofit education group has provided online lessons for millions of students, declared. “And the way we’re going to do that is by giving every student on the planet an artificially intelligent but amazing personal tutor.”

Videos of Mr. Khan’s tutoring bot talk amassed millions of views. Soon, prominent tech executives, including Sundar Pichai, Google’s chief executive, began issuing similar education predictions.

“I think over time we can give every child in the world and every person in the world — regardless of where they are and where they come from — access to the most powerful A.I. tutor,” Mr. Pichai said on a Harvard Business Review podcast a few weeks after Mr. Khan’s talk. (Google introduced an A.I. chatbot called Bard last year. It has also donated more than $10 million to Khan Academy.)

 

Mr. Khan’s vision of tutoring bots tapped into a decades-old Silicon Valley dream: automated teaching platforms that instantly customize lessons for each student. Proponents argue that developing such systems would help close achievement gaps in schools by delivering relevant, individualized instruction to children faster and more efficiently than human teachers ever could.

In pursuit of such ideals, tech companies and philanthropists over the years have urged schools to purchase a laptop for each child, championed video tutorial platforms and financed learning apps that customize students’ lessons. Some online math and literacy interventions have reported positive effects. But many education technology efforts have not proved to significantly close academic achievement gaps or improve student results like high school graduation rates.

Now the spread of generative A.I. tools like ChatGPT, which can give answers to biology questions and manufacture human-sounding book reports, is renewing enthusiasm for automated instruction — even as critics warn that there is not yet evidence to support the notion that tutoring bots will transform education for the better.

Online learning platforms like Khan Academy and Duolingo have introduced A.I. chatbot tutors based on GPT-4, a large language model developed by OpenAI that is trained on huge databases of text and generates answers in response to user prompts.
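To make the mechanics a bit more concrete, here is a minimal sketch of how a chatbot tutor built on such a model works in practice. It assumes the OpenAI Python SDK and an API key; the tutoring “system prompt” is a hypothetical illustration, not Khan Academy’s or Duolingo’s actual implementation.

```python
# Minimal sketch of a chatbot "tutor" built on a large language model.
# Assumes the OpenAI Python SDK (pip install openai) and an API key in
# the OPENAI_API_KEY environment variable. The system prompt below is a
# hypothetical illustration, not any vendor's actual prompt.
from openai import OpenAI

client = OpenAI()

# A system prompt steers the model toward tutoring behavior: guide the
# student through the problem rather than handing over the answer.
TUTOR_PROMPT = (
    "You are a patient math tutor. Ask guiding questions and give hints, "
    "but do not state the final answer outright."
)

def tutor_reply(student_message: str) -> str:
    """Send the student's question to the model and return its response."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": TUTOR_PROMPT},
            {"role": "user", "content": student_message},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(tutor_reply("How do I solve 3x + 5 = 20?"))
```

Everything pedagogical in such a sketch lives in the prompt; the underlying model is still just predicting text, which is why, as critics note below, it can also confidently make things up.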


The White House seems sold. In a recent executive order on artificial intelligence, President Biden directed the government to “shape A.I.’s potential to transform education by creating resources to support educators deploying A.I.-enabled educational tools, such as personalized tutoring in schools,” according to a White House fact sheet.

Even so, some education researchers say schools should be wary of the hype around A.I.-assisted instruction.

For one thing, they point out, A.I. chatbots liberally make stuff up and could feed students false information. Making the A.I. tools a mainstay of education could elevate unreliable sources as classroom authorities. Critics also say A.I. systems can be biased and are often opaque, preventing teachers and students from understanding exactly how chatbots devise their answers.

In fact, generative A.I. tools may turn out to have harmful or “degenerative” effects on student learning, said Ben Williamson, a chancellor’s fellow at the Centre for Research in Digital Education at the University of Edinburgh.

“There’s a rush to proclaim the authority and the usefulness of these kinds of chatbot interfaces and the underlying language models that power them,” Dr. Williamson said. “But the evidence that A.I. chatbots can deliver those effects does not yet exist.”

Another concern: The hype over unproven A.I. chatbot tutors could detract from more traditional, human-centered interventions — like universal access to preschool — that have proved to increase student graduation rates and college attendance.

There are also issues of privacy and intellectual property. Many large language models are trained on vast databases of texts that have been scraped from the internet, without compensating creators. That could be a problem for unionized teachers concerned about fair labor compensation. (The New York Times recently sued OpenAI and Microsoft over this issue.)

There are also concerns that some A.I. companies may use the materials that educators input, or the comments that students make, for their own business purposes, such as improving their chatbots.

Randi Weingarten, president of the American Federation of Teachers, which has more than 1.7 million members, said her union was working with Congress on regulation to help ensure that A.I. tools were fair and safe.

“Educators use education technology every day, and they want more say over how the tech is deployed in classrooms,” Ms. Weingarten said. “The goal here is to promote the potential of A.I. and guard against the serious risks.”

This is hardly the first time that education reformers have championed automated teaching tools. In the 1960s, proponents predicted that mechanical and electronic devices called “teaching machines” — which were programmed to ask students questions on topics like spelling or math — would revolutionize education.

 

Popular Mechanics captured the zeitgeist in an article in October 1961 headlined: “Will Robots Teach Your Children?” It described “a rash of experimental machine teaching” sweeping schools across the United States in which students worked independently, inputting answers into the devices at their own pace.

The article also warned that the newfangled machines raised some “profound” questions for educators and children. Would the teacher, the article asked, become “simply a glorified babysitter”? And: “What does machine teaching do to critical thinking on the part of the students?”

Cumbersome and didactic, the teaching machines turned out to be a short-term classroom sensation, both overhyped and over-feared. The rollout of new A.I. teaching bots has followed a similar narrative of potential education transformation and harm.

Text with these photos in 1962 described teaching machines as a “mushrooming field.” Among the examples: classroom devices that used tape recorders for language training. Credit: The New York Times.

 

Unlike the old 20th-century teaching machines, however, A.I. chatbots seem improvisational. They generate instant responses to individual students in conversational language. That means they can be fun, compelling and engaging.

Some enthusiasts envision A.I. tutoring bots becoming study buddies that students could quietly consult without embarrassment. If schools broadly adopted such tools, they could deeply alter how children learn.

That has inspired some former Big Tech executives to move into education. Jerome Pesenti, a former vice president of A.I. at Meta, recently founded a tutoring service called Sizzle A.I. The app’s A.I. chatbot uses a multiple-choice format to help students solve math and science questions.

And Jared Grusd, a former chief strategy officer at social media company Snap, co-founded a writing start-up called Ethiqly. The app’s A.I. chatbot can help students organize and structure essays as well as give them feedback on their writing.

Mr. Khan is one of the most visible proponents of tutoring bots. Khan Academy introduced an A.I. chatbot named Khanmigo last year specifically for school use. It is designed to help students think through problems in math and other subjects — not do their schoolwork for them.

The system also stores conversations that students have with Khanmigo so that teachers may review them. And the site clearly warns users: “Khanmigo makes mistakes sometimes.” Schools in Indiana, New Jersey and other states are now pilot-testing the chatbot tutor.

Mr. Khan’s vision for tutoring bots can be traced back in part to popular science fiction books like “The Diamond Age,” a cyberpunk novel by Neal Stephenson. In that novel, an imaginary tablet-like device is able to teach a young orphan exactly what she needs to know at exactly the right moment — in part because it can instantly analyze her voice, facial expression and surroundings.

Mr. Khan predicted that within five years or so, tutoring bots like Khanmigo would be able to do something similar, with privacy and safety guardrails in place.

“The A.I. is just going to be able to look at the student’s facial expression and say: ‘Hey, I think you’re a little distracted right now. Let’s get focused on this,’” Mr. Khan said.

 

This blog post has been shared by permission from the author.
Readers wishing to comment on the content are encouraged to do so via the link to the original post.
Find the original post here:

The views expressed by the blogger are not necessarily those of NEPC.

Natasha Singer

Natasha Singer is a reporter at The New York Times where she covers the intersection of business, technology and society with a particular focus on education and ...

Larry Cuban

Larry Cuban is a former high school social studies teacher (14 years), district superintendent (7 years) and university professor (20 years). He has published op-...