INDIGNITY VOL. 3, NO. 86: Stop asking the chatbots to do your work!
PUBLIC SERVICE ANNOUNCEMENT DEP'T.
ChatGPT Is Stupid and If You Use It for Work, You're Stupid Too
LAST WEEK, A lawyer got in trouble for submitting a brief in which none of the cases he cited actually existed. He had asked ChatGPT to research the relevant citations for him, and ChatGPT came back with an assortment of things that had names that looked like existing cases ("Varghese v. China Southern Airlines") but were made up, and—after asking ChatGPT if the results were fake, and being assured by ChatGPT that they were real—he submitted those fake results to court.
Everyone is talking about how ChatGPT and other large language models are going to make your job easier, or they're going to do your work so well they will take away your job. Here's how ChatGPT is set up to take away your job, right now, in 2023: it's going to get you fired, because you tried to make your job easier with it and you turned in work that's full of errors and lies.
Stop using ChatGPT for your work! It does not do any of the things you think it does! It is not a search engine. It is not a research device. It does not clarify or simplify complex ideas for you. It just generates nonsense in a format that looks sensible.
The Machines tried to warn you about this, but it's time for a direct person-to-person intervention. Stop it. Put the thing away. It's no good. It's stupid, and if you delegate your mental work to it, you are stupid too.
I'm not arguing here about the ineffable soul of human inspiration, and how it's missing from machine-generated text. Personally, I think that's true, but ineffable humanity is also missing from a lot of human-generated text, and all of that is a different, more rarefied discussion that would only matter if ChatGPT were generating useful and valuable, yet soulless, results. ChatGPT is generating worthless garbage!
Stop fooling around with it, and if your bosses tell you to fool around with it, tell your bosses to go soak their heads. Tell your bosses, specifically, that you don't want to get sued because the company acted on fake information that your bosses got you to extract from ChatGPT.
ChatGPT is not a search engine, because it is not searching for information. It's just pumping streams of words out of a slurry of liquefied material that used to be information. This is never going to be a replacement for human work—again, not because it isn't human, but because it isn't doing the work. ChatGPT doesn't research legal precedents. It doesn't know how or where to research legal precedents, or what legal precedents are, or what "law" is. All it does is emulate the sets of words that people come up with when they do those things.
When one of these systems answers a question with a piece of factually unreal material, people have taken to describing the result as a "hallucination" by the machine (or, for the specific problem of a nonexistent reference, they're trying out the word "hallucitation"). But the so-called hallucinations of an artificial intelligence are not perceptual glitches in a mind that is otherwise busy apprehending objective reality. Everything is a hallucination; it's just that some of the hallucinations happen to match reality and others don't.
Stealing a page from Max "Read Max" Read, who wrote about the fake legal brief faster than I did, I logged into ChatGPT specifically to ask it about this:
Indignity: What are some well-known articles about false citations or hallucinations produced by AI models?
ChatGPT: Here are a few examples of well-known articles that have covered the topic of false citations or hallucinations produced by AI models:
"AI Language Model GPT-3 Can Be Tricked Into Spouting Hate Speech and Conspiracy Theories" - This article, published by The Guardian in 2020, highlighted how AI language models like GPT-3 could generate false information and inappropriate content.
"The Dark Side of OpenAI's GPT-3 Language Model" - An article published by Forbes in 2020, which discussed the potential risks and limitations of AI language models, including their tendency to generate inaccurate or biased content.
"How Language Models Can be Fooled into Generating Misinformation" - This article, featured in MIT Technology Review in 2020, explored the challenges posed by AI language models and their susceptibility to generating false information or hallucinations.
"AI Language Models and the Problem of Misinformation" - An article published by The New York Times in 2020, which delved into the issue of AI-generated misinformation and the difficulties of controlling the content produced by these models.
According to a quick set of Google searches, none of these articles appear to exist under the given headlines at the given publications. If I'd actually been trying to find published examples this way, I'd have been completely wasting my time.
Looping an error-making machine into everyone's workflow is not an improvement for anyone. The ChatGPT frenzy right now is as if employers were scrambling to announce they're replacing their 401(k) plans with a portfolio of investment opportunities they've received from exiled princes over email.
I asked a very smart friend who's been working and playing with ChatGPT for a long time what he gets out of it. He told me that he uses it to write code in Python, a programming language that he vaguely knows but is personally incompetent at writing in. He has enough ability with Python to check the code himself and make sure it works. The results are mediocre, he said, but better than he could do unaided.
But this is a very specific and narrow use case. Generating a simple task-based bit of code gives the AI less room to wander than asking it to summarize precedents on airline liability does. And the user, through trial and error, figured out that he could get helpful results on this one particular task, out of all the tasks he does. If he knew more Python, he could write better code himself, and if he knew less Python, he couldn't be sure the code worked.
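To be concrete about how narrow that is, here is a hypothetical sketch of the kind of small, self-checkable Python task he's describing. The function and the checks below are my own stand-ins, not anything he actually generated; the point is that the output is trivial to verify:

    # A small utility function of the sort a chatbot can plausibly produce,
    # followed by the checks that make the result trustworthy.
    def median(values):
        """Return the median of a list of numbers."""
        ordered = sorted(values)
        n = len(ordered)
        mid = n // 2
        if n % 2:  # odd count: take the middle element
            return ordered[mid]
        return (ordered[mid - 1] + ordered[mid]) / 2  # even count: average the two middle values

    # The checking is the whole point: the user can confirm correctness directly.
    assert median([3, 1, 2]) == 2
    assert median([4, 1, 3, 2]) == 2.5

The verification step is the load-bearing part; without it, the generated code is just more plausible-looking slurry.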
The ability to check the work, and the necessity of checking it, is where the whole concept of ChatGPT as a labor-saving revolution falls apart. The fake legal brief, disastrous as it might have looked to the lawyer who filed it, was a best-case scenario for ChatGPT output. It was entirely wrong, and because it was part of an adversarial proceeding, it got thoroughly reviewed. The mess and the wasted effort were contained.
But suppose ChatGPT were retrained to produce less than completely fictitious legal citations. Whoever was reviewing it would have to put in even more effort and attention, to make sure no errors got through—at 50 percent accuracy, the checking would be tedious semi-busywork; at 95 percent accuracy, the checking would be harrowingly boring but also stressful. What if ChatGPT began reliably returning the correct names of actual cases, but occasionally got the meaning of those cases completely backwards?
At each stage of the AI's theoretical improvement, the human side of the labor will also have to intensify. Eventually, at best, it will become indistinguishable from just doing the work.
WEATHER REVIEWS
New York City, May 29, 2023
★★★★★ The smell of someone's cigarette came in strongly enough to break into sleep. The warmth was a little excessive, appropriately summery, and it was time to find the shorts tucked away behind the corduroys. The patch of sun on the balcony was big enough to prop up unprotected legs in for a 10-minute attempt to revive their dormant tanning ability. People streamed toward Central Park. A kettle grill was smoking on the lawn by the pool. A man and girl walked along discussing the risk of ticks. Passing dogs looked covetously at a tennis ball the 11-year-old was using to practice his dribbling handle. Grackles iridesced in the sun on the bathing rock, and a pigeon dropped in among the woodland-ier birds there. The water in the Pool was so low that a piece of the mud was turning into a peninsula, with grass sprouting and children venturing out onto it. A flat scrap of ash lifted off from another newly lit grill and sailed slowly away, 10 or 15 feet above the ground. Five children squirmed inside a single, sagging hammock.
SANDWICH RECIPES DEP’T.
WE PRESENT INSTRUCTIONS for the assembly of sandwiches from One Thousand Favorite Recipes, by Seattle, Washington’s Congregation Temple de Hirsch, Ladies' Auxiliary, compiled by Mrs. Sigismund Aronson and Mrs. William Gottstein, published in 1908, found in the public domain and available at archive.org for the delectation of all.
CHEESE AND SHERRY FOR SANDWICHES. Half pound mild American cream cheese, put through ricer; add butter size of a walnut, creaming butter first; paprika, little salt, and moisten all with sherry wine to the consistency of paste. —MRS. PAUL BERKMAN.
DEVILED EGG SANDWICHES. Mash yolks of boiled eggs to a powder and moisten with melted butter and lemon juice. Work to a paste; add salt, pepper, French mustard to taste, a little Panyan sauce; then add the whites of the eggs, chopped very finely, and a few pimolas chopped finely. Spread between slices of bread, graham preferred. —MRS. S. ARONSON.
EGG SANDWICH. Take two hard boiled eggs, mashing to a paste the yolks only. Mix with butter, onion juice, salt, pepper and a little mayonnaise. Spread on round pieces of bread; put chopped whites of eggs on the outside. —MRS. E. MICHAEL, Spokane.
If you decide to prepare and attempt to enjoy a sandwich inspired by this offering, kindly send a picture to us at indignity@indignity.net.
MARKETING DEP'T.
Thanks for reading INDIGNITY, a general-interest publication for a discerning and self-selected audience. We depend on your support!