Larry Cuban on School Reform and Classroom Practice: Chatbots Are Forcing Teachers To Confront the Promise and Peril of AI (Molly Roberts)

The degree to which the artificial intelligence embodied in chatbots like ChatGPT will change teaching and learning is still unknown, although its impact on university professors and K-12 classroom teachers has started to emerge. Keep in mind that ChatGPT was introduced just over a year ago.

The following article discusses ChatGPT’s impact on University of Mississippi professors who teach writing and their students.

“Remember what I told you last week? Forget it.”

This is how Marc Watkins starts many a faculty meeting in the University of Mississippi’s Department of Writing and Rhetoric. The self-fashioned AI guru has been tracking the capabilities of the large language models, such as ChatGPT, that are already transforming how his students write and read — in some cases, by doing both for them.

There is no better place to see the promise and the peril of generative artificial intelligence playing out than in academia. And there’s no better place to see how academia is handling the explosion in ChatGPT and its ilk than at Ole Miss.

The University of Mississippi is dotted with small motorized robots called “Starships” that deliver food and other conveniences across the university’s campus. (Houston Cofield for The Washington Post)

In the spring, after students came back to campus eager to enlist robots in their essay-writing, Watkins and his colleagues created the Mississippi AI Institute (not to be confused with the Mississippi AI School, a Mississippi State University venture focused on the artificial insemination of cattle).

The hope is that the institute’s work can eventually be used by campuses across the country. For now, a two-day program in early June at Ole Miss may be the only one of its kind to pay teachers a stipend to educate themselves on AI: how students are probably using it today, how they could be using it better, and what all of that means for their brains.

The only way to describe what these tools have done to the teaching of writing is to borrow a phrase any professor would mark down as a cliché. They have changed everything.

AI is forcing educators to rethink plagiarism guidelines, grading and even lesson plans. But above all, it is demanding that they decide what education is really about — that teachers ask, in short, “What are we here for, anyway?”

Marc Watkins, an academic innovation fellow and lecturer at the University of Mississippi, has been vigorously researching AI language models like ChatGPT to find ways for his students to use them productively for learning.

ChatGPT has become to generative AI what Kleenex is to tissues. This most mentioned of tools, however, might be the least of teachers’ worries. Boutique services geared toward composing college essays, the very task Watkins and his colleagues are trying to teach, abound.

Some of their names jangle with techno-jargon, while others strive for the poetic, or at least the academic: Wordtune, Elicit, Fermat.

“Help me write,” read the words atop a Google Doc equipped with its AI assistant tool, presumably in the voice of whoever is staring at a blank document waiting for words to come. Watkins finds this disturbing in its vagueness. Help me how?

Other technologies are more explicit about what they’re providing. Wordtune offers the opportunity to select a “spice” to add to your paper.

The “rewrite” option can polish a sloppy sentence; the “explain” option can elaborate on a vague one. There’s also “make a joke” (groan-inducing at best) and “statistical fact” (somewhat more useful, if you’re not worried about AI’s documented propensity to hallucinate). “Counterargument” can — well, you get the picture.

Do you write ad copy? White papers? Plain old emails — or dissertations? Lex, another tool, wants to know. Answer that you write op-eds, and it informs you that, “with that type of writing, it can be hard to maintain objectivity while presenting a poignant argument, amidst the pressure of constant deadlines.” (Tell me about it.)

Or you can plug in what you’ve got so far and tell the tool to critique it. Dominic Tovar, an Ole Miss freshman pursuing an engineering degree, likes feeding text into the tool and having it tell him what needs fixing: This sentence is incoherent. This paragraph is too wordy. When things get really rough, he can always type “+++,” a command that prompts Lex to generate the next paragraph — but he thinks students should consider that degree of assistance a last resort.

Other services aim narrower.

Perplexity AI “unlocks the power of knowledge with information discovery and sharing.” This, it turns out, means “does research.” Type something into it, and it spits out a comprehensive answer, always sourced and sometimes bulleted. You might say this is just Google on steroids — but really, it is Google with a bibliography.

Caleb Jackson, a 22-year-old junior at Ole Miss studying part time, is a fan. This way, he doesn’t have to spend hours between night shifts and online classes trawling the internet for sources. Perplexity can find them, and he can get to writing that much sooner.

Speaking of bibliographies, many students have found themselves filled with despair upon realizing they aren’t actually finished with a paper until they have compiled several pages of APA-style citations complete with annotations. No more! Now, a service called Sutori will handle the pesky copy-pasting and formatting for you.

ChatGPT is sort of in a class of its own, because it can be almost anything its users want it to be so long as they possess one essential skill: prompt engineering. This means, basically, manipulating the machine not only into giving you an answer but also into giving you the kind of answer you’re looking for.

“Write a five-paragraph essay on Virginia Woolf’s ‘To the Lighthouse.’” Too generic? Well, how about “Write a five-paragraph essay on the theme of loss in ‘To the Lighthouse’”? Too high-schoolish? “Add some bigger words, please.” The product might not be ready to turn in the moment it is born, fully formed, from ChatGPT’s head. But with enough tweaking — either by the student or by the machine at the student’s demand — chances are the output can muster at least a passing grade.
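For the curious, that kind of tweaking can even be scripted. Below is a minimal sketch of iterative prompting, assuming the OpenAI Python client and an API key in the environment; the model name and the prompts are illustrative stand-ins, not anything the students or professors in the article describe using.

```python
# Minimal sketch of iterative prompt refinement (illustrative only).
# Assumes: `pip install openai` and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    """Send one prompt to the model and return its text reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice; any chat model works
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Each prompt narrows the request, the way a student might refine an essay ask.
draft = ask("Write a five-paragraph essay on Virginia Woolf's 'To the Lighthouse'.")
draft = ask("Write a five-paragraph essay on the theme of loss in 'To the Lighthouse'.")
draft = ask("Revise this essay to use more sophisticated vocabulary:\n\n" + draft)
print(draft)
```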

Larry Wilson, an Air Force veteran back in school at 43, says ChatGPT and image generators like Dall-E even aid him in creative pursuits. He crafts comic strips and graphic novels. Sometimes, it’s “difficult getting things in your head out.” But with generative AI, he can explain his vision to a system, and it turns that vision into a tangible image or video. If he sketches out a character to the AI, it returns what he calls an “abundance” of actions, utterances, and more that he can insert into the opus of the hour.

Which of these uses are okay? Which aren’t? The harnessing of an AI tool to create an annotated bibliography likely doesn’t rankle even librarians the way relying on that same tool to draft a reflection on Virginia Woolf offends the professor of the modern novel. Why? Because that kind of contemplation goes closer to the heart of what education is really about.

Here’s the bottom line: It’s likely impossible to catch kids using AI to cheat. The detection tools lauded at first as universities’ last bulwark against a horde of scribbling machines have fallen out of favor. They do a poor job identifying cheaters where they do exist — and yet somehow often seem to identify them where they don’t.

See, most notoriously, a professor at Texas A&M University at Commerce who threatened to fail his entire class after using ChatGPT to detect whether it had written their essays. Turns out, it didn’t work.

Or look at Vanderbilt University. The college, in announcing its disabling of one such tool, points out that detectors are more likely to flag material written by nonnative English speakers. Its bulletin notes that other companies that pounced on the demand for detectors in the spring have given up. Anyway, none of them was ever able to explain how they could distinguish man from machine — perhaps because, when it came down to it, they couldn’t.

At the Ole Miss summer institute, faculty members see for themselves. “My mother is a fish,” one professor plugs into a service called Turnitin. This is the famous five-word chapter of “As I Lay Dying” by William Faulkner, son of Oxford, Miss. — an ingenious shift into the consciousness of a young boy. The result? 93 percent AI generated. (Probably because the sentence is suspiciously simple, but it goes to show that these detection tools don’t yet appreciate modernism.)

Of course, even if the machines can’t detect other machines, that doesn’t mean humans can’t try. Unsurprisingly, there’s a bit of a “know it when you see it” phenomenon with AI-written work in classes taught by teachers who’ve seen hundreds if not thousands of papers by human students. The trouble for these teachers is figuring out how to react when they do believe they see it.

Sarah Campbell, presenting at the summer institute, described a student essay that appeared, as she put it, “written by an alien.” Or written in the year 1950. Or perhaps written in 1950 by an alien. She responded by asking the student to coffee, where she told the student that she had obviously let them down: “You didn’t know how desperate I am to hear your voice.”

This practically trademarkable Good Teaching Moment cuts to the core of the question colleges now face. They can’t really stop students from using AI in class. They might not be able to notice students have done so at all, and when they do think they’ve noticed they’ll be acting only on suspicion. But maybe teachers can control the ways in which students use AI in class.

Figuring out exactly what ways those ought to be requires educators to determine what they care about in essays — what they are desperate to hear. The purpose of these papers is for students to demonstrate what they’ve learned, from hard facts to compositional know-how, and for teachers to assess how their pupils are progressing. The answer to what teachers want to get from students in their written work depends on what they want to give to students.

“AI is not meant to avoid opportunities to learn through structured assignments and activities.”

This line comes from the AI policy for Tom Brady’s Ole Miss education class. His students discussed the strengths and weaknesses of the tools (“strong in summarizing, editing and helping to brainstorm ideas”; “poor at creating long segments of text that are both topical and personal”), put those in the context of academic honesty and devised the rules themselves.

That one line sums up the point: AI is not meant to avoid opportunities to learn.

What’s most important to Ole Miss faculty members is that students use these tools with integrity. If the university doesn’t have a campuswide AI honor code, and so far it doesn’t, individual classes should. And no matter whether professors permit all applications of AI, as some teachers have tried, or only the narrowest, students should have to disclose just how much help they had from robots.

The next concern is that students should use AI in a manner that improves not only their writing but also their thinking — in short, in a manner that enhances learning rather than bypasses the need to learn at all.

This simple principle makes for complicated practice. Certainly, no one is going to learn anything by letting AI write an essay in its entirety. What about letting AI brainstorm an idea, on the other hand, or write an outline, or gin up a counter-argument? Lyndsey Cook, a senior at Ole Miss planning a career in nursing, finds the brainstorming especially helpful: She’ll ask ChatGPT or another tool to identify the themes in a piece of literature, and then she’ll go back and look for them herself.

These shortcuts, on the one hand, might interfere with students’ learning to brainstorm, outline or see the other side of things on their own. But — here comes a human-generated counterargument — they may also aid students in surmounting obstacles in their composition that otherwise would have stopped them short. That’s particularly true of kids whose high schools didn’t send them to college already equipped with these capabilities.

Allow AI to boost you over these early hurdles, and suddenly the opportunity for deeper learning — the opportunity to really write — will open up. That’s how Caleb Jackson, the part-time student for whom Perplexity has been such a boon, sees it: His professor, he says, wanted them to “get away from the high-school paper and go further, to write something larger like a thesis.”

Perplexity, Lex and other AI tools showed him what he was doing wrong, so that he could do it right next time. And the tools themselves told him he was improving. One system gave critical feedback on his first paper; on the second, “The AI literally said, ‘That was a great paper to read.’”

Maybe. Or maybe, as one young Ole Miss faculty member put it to me, this risks “losing the value of the struggle.” That, she says, is what she is scared will go away.

All this invites the most important question there is: What is learning for?

The answers are myriad. (ChatGPT, asked, counts exactly 11.) But they break down something like this: Learning, in college, can be instrumental. According to this view, the aim of teaching is to prepare students to live in the real world, so all that really matters is whether they have the chops to field jobs that feed themselves and their families. Perhaps knowing how to use AI to do any given task for you, then, is one of the most valuable skills out there — the same way it pays to be quick with a calculator.

If you accept this line of argument, however, there are still drawbacks to robotic crutches. Some level of critical thinking is necessary to function as an adult, and if AI stymies its development even the instrumental aim of education is thwarted. The same goes for that “value of the struggle.” The real world is full of adversity, much of which the largest language model can’t tell you how to overcome.

But more compelling is the idea, probably shared by most college professors, that learning isn’t only instrumental after all — that it has intrinsic value and is an end in itself rather than merely a means to one. The more steps along the way we skip, the shorter the journey becomes and the less we take in as we travel.

This glummest of outlooks suggests that AI will stunt personal growth even if it doesn’t harm professional prospects. While that doesn’t mean it’s wise to prohibit every little application of the technology in class, it probably does mean discouraging those most closely related to critical thinking.

One approach is to alter standards for grading, so that the things the machines are worst at are also the things that earn the best marks: originality, say, or depth of feeling, or so-called metacognition — the process of thinking about one’s own thinking or one’s own learning.

Hopefully, these things are also the most valuable because they are what make us human.

Stephen Monroe, chair of the Ole Miss writing and rhetoric department, has a theory. It involves player pianos, those mechanical instruments that send musical notes floating through fancy hotel lobbies without a musician.

The player piano plays perfectly — yet the result is, as he puts it, “hollow and gimmicky.” You’d hardly buy a concert hall ticket to watch one of these machines perform even the most gorgeous or most technically demanding of sonatas. But you’d pay up, don a gown and sit, rapt, “to hear a human being play that very same sonata on that very same piano.”

The beautiful may seem less beautiful when we know that it comes from lines of code or vast arrays of transistors rather than from flesh, blood, heart and soul. Every triumph may seem that much less triumphant.

If you ask the Ole Miss educators, their students know this. If you ask the students, some of them, at least, know it too.

Caleb Jackson only wants AI to help him write his papers — not to write them for him. “If ChatGPT will get you an A, and you yourself might get a C, it’s like, ‘Well, I earned that C.’” He pauses. “That might sound crazy.”

Dominic Tovar agrees. Let AI take charge of everything, and, “They’re not so much tools at that point. They’re just replacing you.”

Lyndsey Cook, too, believes that even if these systems could reliably find the answers to the most vexing research problems, “it would take away from research itself” — because scientific inquiry is valuable for its own sake. “To have AI say, ‘Hey, this is the answer …’” she trails off, sounding dispirited.

The kids are even more reluctant to cede the most personal aspects of their writing to AI, even when allowed. Guy Krueger, who teaches Writing 101, put it simply to his class: If you’ve gone on a date, would you ask ChatGPT to describe the date for you? The response was a resounding no. (Well, one kid did say yes.)

This lingering fondness for humanity among humans is reassuring — for now. Whether it will fade over time, however, is far from certain.

Claire Mischker, lecturer of composition and director of the Ole Miss graduate writing center, asked her students at the end of last semester to turn in short reflections on their experience in her class. She received submissions that she was near certain were produced by ChatGPT — “that,” she says as sarcastically as she does mournfully, “felt really good.”

The central theme of the course was empathy.

 

This blog post has been shared by permission from the author.
Readers wishing to comment on the content are encouraged to do so via the link to the original post.
Find the original post here:

The views expressed by the blogger are not necessarily those of NEPC.

Larry Cuban

Larry Cuban is a former high school social studies teacher (14 years), district superintendent (7 years) and university professor (20 years). He has published op-...
Molly Roberts

Molly Roberts is an editorial writer covering technology and society for The Washington Post. ...