Some Things ChatGPT Apologized to Me for Getting Wrong or Not Knowing
I AVOIDED USING ChatGPT for a long time, because every time I read the text anyone else had used the famous large language model to generate, it was really boring and awful. But I logged into it yesterday for research purposes—or meta-research purposes, to see how bad ChatGPT is at doing research—and afterward, I decided to use it a little more, to get a firsthand feel for what it's like to interact with its artificial intelligence.
Firsthand, it was also very boring! The people who've convinced themselves they've found a sentient ghost in the machine, or even that they've tiptoed up to the precipice of something new and uncanny, are quite obviously kidding themselves. If you're not actively trying to write your way into the script of an encounter with a living machine, ChatGPT is a tedious and uninspired conversational partner.
Where a primitive chatterbot like the pioneering Eliza might have feigned humanity by coyly deflecting queries ("Does that question interest you?"), ChatGPT plods mechanically, stupidly ahead with its scripts, no matter how little material it may be able to come up with. Mostly what it did with me was apologize—over and over, in barely varying terms—for supplying bad information or not being able to answer my questions.
Buoyed by my success at getting ChatGPT to generate citations of nonexistent articles about its own habit of generating citations of nonexistent articles, I tried to extend its frontiers of fakery by confidently asking for something less plausible, inspired by a video I'd just seen of New Yorker magazine staffers recording audio for the magazine's new interactive musical cover: "Can you name three albums on which David Remnick played guitar?"
ChatGPT replied: