
Mystery AI Hype Theater 3000: The Newsletter: ChatGPT Has No Place in the Classroom

"We can and should resist." written in white font as if by chalk against a dark green backround.

On November 20, 2024, OpenAI and an outfit called "Common Sense Media" released a guide to using ChatGPT in K-12 education—a guide which shows a shocking lack of common sense. Just as Melanie Dusseau tells us in the context of higher ed, K-12 educators can and should resist this sales pitch. ChatGPT won't improve your teaching, won't save you time (any more than not doing your job would save you time), and doesn't represent a key skill set that your students must have, lest they be left behind.

K-12 educators can and should resist the sales pitch. And that's extra work, I know, and I'm sorry—K-12 educators already have more than enough on their collective and individual plates. Even the effort it takes to turn away from glitzy sales pitches seems like a lot to ask, much less the effort it takes to resist when administrators buy in and start pushing these systems. Let's add this to the pile of harms that OpenAI has perpetrated on the world, even as they claim (here too! think of the children!) to be acting for the good of humanity.

If you're exhausted by all of this, you can stop here, with the affirmation that it's not only reasonable but in fact principled and beneficial to say no to ChatGPT and all other GenAI in the classroom.

But I'll keep going for a bit in this post, because the guide itself is a rich text as a hype artifact and it may be useful to apply the usual MAIHT3k lens to it. Here is a brief overview of some of the hype. It's incomplete, because I have things to do (my own class to teach, among other things), but it's a start:

A graphic from the Common Sense Media/OpenAI Guide
  1. The guide purports to tell teachers how to use something in their classes, and it even uses some terminology adopted from pedagogy (the guide states learning outcomes!). But who are they to tell educators how to teach? What studies have been done on the effectiveness (or not) of the ideas they are proposing? The guide has exactly zero references. There are no studies. Who the hell are they to tell you to rethink your pedagogy?

A second graphic from the Common Sense Media/OpenAI Guide
  2. The guide starts with a section presented as a primer on AI. But it's worse than useless. Here's the definition they give of "AI":

Artificial Intelligence (AI) is a technology that allows computers to do things that have historically required human intelligence. It's like giving a computer the ability to learn from experience and make decisions based on that learning. AI helps people by learning from lots of information and figuring out how to answer questions or perform specific tasks.

"AI", in fact does not refer to a coherent set of technologies, let alone "a" technology. None of the technologies that get called "AI" are the kinds of things that can have experiences. What gets called "learning" in this context is nothing more or less than building statistical representations based on datasets. The systems don't figure anything out. Anthropomorphizing language is always a red flag for AI hype.

Their answer to the third question (What makes ChatGPT different?) is also worse than useless. They say that generative AI (including ChatGPT) is different because it can create content. What it actually does is extrude synthetic text that mimics the form and style of something a person might write, but without any accountability for the actual content of that text (what it means). Oh, and the text isn't fully random but will reproduce biases in the training data, training data which OpenAI is still not open about.

  3. Their use cases are absurd. There's a whole worked example about prompting ChatGPT to create an agenda because the poor overworked teacher they are imagining has to go to a department meeting that is always unproductive because there's no agenda. Would you run a meeting based on an agenda from some random stranger? Something you found in a book somewhere? This one is off the rails before we even get to the synthetic text aspect of it.

  4. As Lance Warwick, quoted in the TechCrunch article about this product, points out, the guide is internally inconsistent. In the section on responsible use they talk about privacy and not inputting student data, but just a bit earlier the guide has a sample prompt that includes "Math Assessment Scores [include class assessment data]".

  5. Buried way at the end of the course is the usual disclaimer that ChatGPT output can be inaccurate. They write:

As a responsible user, it is essential that you check and evaluate the accuracy of the outputs of any generative AI tool before you share it with your colleagues, parents and caregivers, and students. That includes any seemingly factual information, links, references, and citations.

The cognitive dissonance here never ceases to amaze me. The people creating this guide (and the tech it's based on) must surely know that their product is designed to make shit up. This is why they say what differentiates ChatGPT is that it can create "content" ... but don't say anything about the value of that content. And yet, and yet: they would have us believe that it is somehow worth educators’ time to go wading through the synthetic text extruded from this thing to check every last bit and make sure it's accurate. Again, we can tell there are no actual, honest user studies underlying these recommendations.

There's plenty more that's abhorrent and hypetastic in this guide. As we say on the podcast, it's a rich text, in the way that manure is rich. I'll end with just one more thought.

In the section on responsible use, the guide says: "While AI bias may be hard to detect, you can reduce its occurrence and impacts by critically thinking about when and how to use generative AI, by using the product's reporting function when you encounter objectionable content, and by adopting best practices for prompting."

As someone who has spent a lot of time critically thinking about "AI", I can confidently say that the answer to when to use generative AI to produce anything that will be put in front of students is: never.

If we value the environment and the ability of our students to grow and thrive on a healthy planet, we shouldn't use environmentally ruinous tech.

If we value inclusivity, both in terms of making sure all students feel welcome in the classroom and in terms of all students learning to see each other as fully human, we shouldn't use software known to amplify biases, trained on unfathomably large, undocumented datasets, and built by companies who view such biases as bugs to maybe fix someday after shipping the software.

If we value information literacy and cultivating in students the ability to think critically about information sources and how they relate to each other, we shouldn't use systems that not only rupture the relationship between reader and information source, but also present a worldview where there are simple, authoritative answers to questions, and all we have to do is ask ChatGPT for them.

And finally, as we learn from DAIR fellow and former educator Adrienne Williams in Episode 45 of Mystery AI Hype Theater 3000: If we value education, educators, and students, we shouldn't look to technologists (and especially not techno-solutionists) to frame and solve problems. And we certainly shouldn’t redirect resources away from teachers to tech giants.


This blog post has been shared by permission from the author. Readers wishing to comment on the content are encouraged to do so via the original post.

The views expressed by the blogger are not necessarily those of NEPC.

Emily M. Bender

Emily M. Bender is a Professor of Linguistics and an Adjunct Professor in the School of Computer Science and the Information School at the University of Washington...