On Monday afternoon, OpenAI co-founder and CEO Sam Altman took to Twitter to announce...well, something, anyway, about the direction of the ChatGPT product:
We made ChatGPT pretty restrictive to make sure we were being careful with mental health issues. We realize this made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right.
Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.
In a few weeks, we plan to put out a new version of ChatGPT that allows people to have a personality that behaves more like what people liked about 4o (we hope it will be better!). If you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it (but only if you want it, not because we are usage-maxxing).
In December, as we roll out age-gating more fully and as part of our "treat adult users like adults" principle, we will allow even more, like erotica for verified adults.
"We have been able to mitigate the serious mental health issues" is a pretty remarkable claim to wrap in such confident, this-has-already-happened type phrasing, I think! Lest anyone forget, not so very long ago ChatGPT ushered a teenager to his suicide; reported stories of AI chatbot users spiraling into delusion, psychosis, and self-harm still form a regular and alarming drumbeat; there is an entire utterly fucking deranged—but extremely active!—subreddit whose users believe (or claim to believe) that they are in romantic relationships with AI "boyfriends." On what basis has OpenAI "been able to mitigate the serious mental health issues" that, it now seems fair to say, are endemic to its product? How long would the company's product have to go without killing any more children before Altman could credibly stand up that claim?
This makes for a handy illustration of the contorted standards that AI proponents want applied to their products. OpenAI wants users to engage with ChatGPT as though it is an intelligence, one you can trust for help with studying or supportive conversation or drafting a cover letter. But no human who had eagerly and repeatedly supplied a suicidal teenager with noose-tying tips and complimented their "clarity of purpose" could credibly offer Well I've mitigated that issue during their pitch for a school guidance counselor job six months later. No human writer could offer Well there is no other way for me to work as justification for stealing the collected literary works of humankind.
The new ChatGPT, Altman says, will be able to "act like a friend" if you want it to. I'm not sure Altman understands this—which might be the single most damning thing in a considerable list of damning things you can say about the man!—but seeking friendship from a predictive text generator is, absolutely by definition, a serious mental health issue. This is like an auto company saying Now that we have mitigated our car's tendency to explode and kill its occupants, we are introducing Unpredictable Firebomb Mode. It is like a gambling app company saying Now that we have mitigated our app's addictiveness, we're rolling out a new version that allows you to make 200 bets per minute.
The punchline of Altman's post comes, appropriately, at the end. In December, he boasts, ChatGPT will begin offering "erotica for verified adults." In search of profitability—or really any actual use case beyond whatever low thrill you might get upon receiving unctuous self-help affirmations from the Uncanny Valley—a product that its proponents and hype-men continually pose as a civilizational leap forward arrives instead at the internet's dull moneymaking bedrock: porn. You take your chances with Robo-Literotica, man! God only knows which of its characters' various orifices and sex organs might have teeth.