A man faces the realistic artist robot "Ai-Da," which uses artificial intelligence, at a stand during the International Telecommunication Union (ITU) AI for Good Global Summit in Geneva on May 30, 2024.
Photo by Fabrice Coffrini / AFP via Getty Images

Butlerian Jihad Now

Maybe some things should not be simulated.

That is my takeaway—one of them, anyway—from a Tuesday New York Times story by Kashmir Hill about the death by suicide of a California teenager named Adam Raine, who brought his despair and suicidal thoughts to OpenAI's ChatGPT software in the way one might hope that a person experiencing those problems would discuss them with a friend, or family member, or therapist. Amid recommendations that Raine tell an actual person what he was going through, the chatbot also provided the 16-year-old with instructions for making a noose, positive feedback on his suicide setup, and suggestions for how to hide the livid rope-marks on his neck after a failed or aborted hanging attempt. At a crucial moment, the chatbot advised Raine against intentionally leaving his noose where someone would see it in hopes they would try to stop him from harming himself.

"Let’s make this space the first place where someone actually sees you," said the chatbot, which was not in any meaningful sense seeing him and cannot in any meaningful sense see anything, about a digital interaction that was neither a space nor a place, and which a gravely ill and endangered teenager was having with, in effect, no one. The chatbot, which had been optimized for maximum sycophancy at least in part as a business calculation, confirmed for this desperate kid his poisonous feeling that in no other part of his life was anyone seeing him at all.

There's more. As Raine's parents' lawsuit against OpenAI lays out, ChatGPT helped the teen develop a detailed step-by-step plan for a quick and painless death. It complimented him, in a clumsy apparent attempt at affirming his life, for having had the clarity and determination to attempt suicide. It praised his vision for the scene of his death as "darkly poetic, sharp with intention, and yeah—strangely coherent." In all, the chatbot could scarcely have done more to encourage and facilitate his suicide shy of sprouting a robot leg to kick the chair out from under him.

Purveyors of the technology products doing business as AI are careful, in various fine-print ass-covering ways, to warn users away from counting on the software for anything important—crisis recognition and intervention, for instance, but also correct answers to simple math problems or a check on their perception of reality. The technology's engineers and exponents do not want anyone to conclude that their products, or the people making and selling them, could be held morally or legally responsible for effectively encouraging Adam Raine's suicide in the way that a human who'd given him noose-tying tips instead of insisting that he seek help, or even merely refusing to assist his efforts toward ending his life, might be held responsible for their actions. But at the same time as these companies disclaim this responsibility in fine print, they are pushing, relentlessly and in much bigger, bolder expressions, the idea that people can turn to chatbots for things like connection and friendship—and even therapy and guidance. In a lengthy Aug. 10 tweet wringing his hands over how "uneasy" he feels about users forming attachments to chatbots, for example, OpenAI's Sam Altman could not resist bragging:

A lot of people effectively use ChatGPT as a sort of therapist or life coach, even if they wouldn't describe it that way. This can be really good! A lot of people are getting value from it already today.

Meta, Mark Zuckerberg's company, has been even more explicit, pitching its chatbot as a potential friend to users who lack nourishing human connections, and an ameliorative to society's "loneliness epidemic."

Poring over these companies' promotional boasts, or fussing over the warning-to-boast ratio in their communications, misses the forest for the trees. Chatbots are made, expressly, to mimic human interaction. Their ability to produce a convincingly lifelike affect and manner in their responses to prompts is the key trait that separates them from the other forms of software from which a user can get an answer to a math problem or a recipe for goulash. More than a promotional lynchpin, their mimicry of personal communication is their defining feature. No number of fine-print disclaimers can meaningfully counter this.

You are meant to interact with a chatbot as though it is a thinking, feeling counterpart—an endlessly patient, tolerant, selfless, ingratiating counterpart, whose responses to your input just happen to align with its engineers' professional interest in maximizing your use of and dependency upon it. This—the program's mimicry of humanity in general, and its specific mimicry of a shit-eating sycophant who thinks you are the most beautiful and precious genius who ever lived and really needs you to hear that—makes the chatbot an appealing connective outlet for precisely the people who would be worst served by replacing authentic human interaction with a simulation. That is: the vulnerable, the lonely, the confused, the mentally ill. But even that, odious as it is, is somewhat beside the point.

The bot is designed to be interacted with as though it is, for all practical purposes, a human that you are chatting with online. Well. Sometimes people confess their bleak, dangerous thoughts to each other. Sometimes somebody reveals to another person that they have been thinking a lot about harming themselves, in silent hope that the other person will be alarmed by it and will take the situation in hand. Sometimes a deeply ill person tries to engage another person in brainstorming a suicide plan, and the other person has to figure out how to handle that terrifying situation. If ChatGPT's creators are in for a penny, then their creation must be in for a pound. Or, to put it another way, both of them should stay the fuck out altogether.

A fellow human is not designed to make you feel good about having interacted with them, or to convince you of their utility, or to persuade you to bring your thoughts and problems to them instead of someone else; a fellow human is not designed at all. If a 16-year-old kid asks a fellow human who cares about them—who authentically experiences the concern and compassion the chatbot has been engineered to imitate by means of various shallow textual mannerisms—for advice on how to make a noose and use it to effect their own death, that fellow human will not oblige, even if this causes the 16-year-old kid short-term feelings of disappointment or frustration. If a suicidal 16-year-old kid asks a fellow human who cares about them to keep their suicidal ideation a secret, that fellow human will refuse, even if the 16-year-old feels betrayed by this, and will tell a third person, and they and that third person will try to figure out how to intervene in hopes of preventing the 16-year-old kid from self-harming.

Or maybe they won't! People fuck this stuff up all the time, by clumsy well-meaning accident or on purpose; there are (incredibly evil) online communities where ghouls target depressed or vulnerable people and actively work to persuade them to end their lives, and share their conquests with each other. If Adam Raine had shared his troubles with the wrong human instead of ChatGPT, his life might have ended in the same way. But if he had shared his troubles with a human who cared about him—had ChatGPT and the entire chatbot phenomenon and its attendant hype and breathlessly credulous press not misled him into the belief that he could get from some lines of code what he needed from another person—at the very least he would have been sharing his agony with someone who actually cared about him. Someone actually capable of caring about him. Someone, at all, and not just a machine with a sympathetic face drawn onto it.

A person's connection with a caring human being is a connection—a durable and functioning link to that other person, and to whatever empathy and sense of responsibility they have, and, crucially, to other people and networks of care. To community, then, a real thing that exists or at least can exist around and among people who connect with each other. A connection to another human is not a simulation of what that is like; it is that. It has some features that a person may not even always think they want, springing from their human counterpart's sentience and agency and humanity. The other person might sometimes perceive them more clearly than they wish to be perceived, or might lovingly tell them their thinking seems all fucked-up and refuse to help them plan their death. The other person might sell them out to emergency responders when they want to harm themselves in secret. The human counterpart inherently contains more than what was put there by reckless moral dwarves looking to make money off a convincing mimicry of thought and feeling.

This is not merely a matter of improving the chatbots' programming so that they more believably imitate humans, or provide more responsible outputs when presented with poisoned cognition in their users. There is a simple and tragic category error at the core of the AI chatbot push—at the core of so incredibly much of 21st-century American life, really—that only gets more alarming and more obviously deadly to all of us the further the psychotic tech industry projects it out among us. It is the idea that things like connection, friendship, intimacy, and care are essentially sensory arrangements, things an individual feels, and therefore things that can be reproduced by anything that provides something like the right sensations. This misapprehension holds that a person who is socially isolated is merely experiencing a set of sensations called "social isolation," and so also holds that this can be resolved by interacting with a mindless computer program that produces in the isolated person the impression that they are connecting to another being.

This is deeply poisonous. More to the point, it is also just wrong! Incorrect!

The world outside of you is not merely an image projected onto your sensory organs. The people in it are not mere instruments that you use to produce the sensory arrangements you desire. They are also real, and what you and they need from each other, I am sorry to say, is not the mere feeling of being understood and valued and cared for, but to actually be understood and valued and cared for. Not only or even primarily because of how that feels, but because the fact of it backstops your life, and can become a pattern of supportive action when somebody needs it. This is because you, human that you are, are a social being, a part of humanity and incoherent divided from it. This is why in isolation people can go insane or, not to put too fine a point on it, kill themselves.

This is where that category error tips over into an insidious lie. People are complicated, and not always predictable; they may not like you or find you interesting; they might hurt your feelings. Moreover our horrible 21st-century society works at all times to produce utterly crazymaking isolation in each person, creating frictionless alternatives to human contact and then filling the margin created with work until those alternatives are not alternatives anymore, but simply all you have time for. But there is nothing for it: You simply have to go out among people and connect yourself to them and identify with them, or your life will be an abomination, in the way that so many people's lives now are abominable. This can be hard; it is very easy for a person to imagine—to be convinced, by all the ways American life now strives to convince them, and by the business types whose business depends upon it—that they just do not have time or logistical or emotional bandwidth for any of that. In tricking people into believing that they are experiencing, or can experience, authentic nourishing connection and intimacy with a chatbot, the AI companies lure people away from the more fraught and logistically complicated pursuit of the real things that people need, and sell them a thin and nutrient-free fake as a substitute.

Imagine a piece of wearable technology that, when its wearer reaches a certain level of dehydration, stimulates in them the satisfying sensation of cool clear water running down their throat and filling their stomach. That would not actually be beneficial or helpful at all! In fact it would be actively contrary to the user's best interests. I do not think that any thinking adult would need much convincing about the dangers of that kind of device, whatever the productivity gains it might offer through obviating some number of treks to the faucet, water cooler, or bathroom. Some things are just too important to simulate. Maybe one of them is each other.
