The Machines

Vindicated At Last In My Years-Long Loathing Of Grammarly

Logo for the AI text editing company Grammarly at the Grammarly AI Hub in Austin, Texas, March 11, 2023. (Photo by Smith Collection/Gado/Getty Images)

I first learned about the AI writing assistant Grammarly nearly a decade ago, when their YouTube ads suddenly sprang into ubiquity, clinging to my precious videos like a swarm of spotted lanternflies. At first this seemed innocuous, the high-pitched whine of a buzzy new startup that would soon fizzle into obscurity. Mostly I was confused by their gargantuan ad budget. I was not alone. But the ads never relented, and as I was served unskippable Grammarly ads again and again, the script seared into my brain: "Writing's not that easy, but Grammarly can help." The ads irritated me so much that, on principle, I tried to coat my brain in Teflon and slough off any and all information about Grammarly's whole deal, which meant that I barely registered what, exactly, the company did. But even back then, before I had any real reason to, I knew that I hated Grammarly.

Grammarly, which was founded in 2009 and rebranded as Superhuman last fall, used tools like machine learning to proofread people's writing. It checked grammar and spelling, similar to Microsoft's Office Assistant, albeit with none of Clippy's signature panache. If Grammarly's ads were to be trusted, it was perfect for people like Tyler, who needed Grammarly's help to write an email to his boss Anita. In that commercial's logic, Tyler wants Anita to like him, but he doesn't want to sound unsure of himself. So Grammarly helps him swap phrases like "really helpful" for "beneficial" and "educational" for "informative," words, we are told, that will connect better with Anita. Tyler's successful email means Anita emails him back in just a few minutes, and they can now ride the elevator together standing close to each other. Each time I was forced to watch this ad, I remember wondering, are Tyler and Anita going to smash?

That product appeared somewhat benign considering the wretched contemporary landscape of overtly malevolent tech companies with even more discomfiting ads. But not to be outdone, in 2023 Grammarly introduced generative AI assistance, which, among other things, offered to do the writing for you. In the years following, the company expanded its suite of generative AI fripperies, with features such as "AI Instagram Caption Generators" or a feature called "Improve It" that offered to "make" a piece of writing any of the following adjectives: diplomatic, exciting, inspirational, friendly, empathetic, assertive, confident, or persuasive. This all sounded like stupid, run-of-the-mill gen-AI bluster.

Then in August, Grammarly launched a particularly obscene AI tool called "Expert Review," which offered subscribers a slew of "insights from subject-matter experts and trusted publications," according to a Grammarly FAQ that has since been taken down. These "subject-matter experts" include people living and dead, such as Stephen King and Carl Sagan, who Grammarly promises will critique your text and offer tips for revision. But to be clear, these experts are not Stephen King or Carl Sagan. They are not people at all, but rather AI-generated sock puppets that likely have nothing to do with the people whose names they bear. (The writer Ingrid Burrington dubbed them "sloppelgangers.") None of these writers consented to having their names or likenesses impersonated by the company. And it goes without saying that the advice was often bad or utterly nonsensical.

The people piloting Grammarly, having clearly had an inkling of the potential illegality of this feature, published a disclaimer reading: "References to experts in Expert Review are for informational purposes only and do not indicate any affiliation with Grammarly or endorsement by those individuals or entities," reported Casey Newton, himself one of the many journalists and writers impersonated by Grammarly Expert Review.

After a surge of reporting scrutinizing the tool, Grammarly kindly offered any authors whose identities were stolen by Expert Review the chance to opt out of the process by emailing expertoptout@superhuman.com, an email address that was definitely not made up on the spot. This placed the onus on living writers to even find out if they were being impersonated in the first place. There was no list of Grammarly's "experts" and no way to search for them. You simply had to sign up and use the tool and see who Grammarly would suggest, and maybe eventually it would suggest you. Of course this opt-out process offered nothing to the countless dead authors whose identities the company appropriated.

On Wednesday, the journalist Julia Angwin filed a class-action lawsuit against Superhuman for using her and others' names and identities without consent. "I have worked for decades honing my skills as a writer and editor, and I am distressed to discover that a tech company is selling an imposter version of my hard-earned expertise," Angwin said in a statement. The same day, Superhuman CEO Shishir Mehrotra apologized and announced on LinkedIn that the company would "disable Expert Review while we reimagine the feature to make it more useful for users, while giving experts real control over how they want to be represented — or not represented at all."

The idea of an AI company ventriloquizing the living and the dead to sell a product that is largely indistinguishable from ChatGPT is perhaps most sickening for its sheer predictability. Grammarly, like its kin Sora and Anthropic, is inherently built on theft and utter disregard for consent. LLMs are trained on real writing, books and articles toiled over by real people across centuries who had ideas and perspectives and craft to share. "Most of my published work appears already to be inside these models, shaping their outputs in ways I never agreed to and will never fully understand," Newton wrote. "Grammarly just had the bad manners to put my name on it."

Grammarly has never been interested in writing, or even improving it. Grammarly's only interest is in making money; in May 2025, the company announced the closing of $1 billion in financing that it will presumably use to make more ads. The company's business model is designed to make people feel insecure about their own writing so that they believe they need an AI assistant to do something as simple as send an email. You do not need bad advice from a computer wearing the skin of Stephen King or bell hooks to write a better email. It makes sense now that a company that fundamentally misunderstands what makes good writing would name itself in the form of an adverb, a part of speech notorious for diluting or obfuscating meaning. As the real Stephen King wrote, "I believe the road to hell is paved with adverbs, and I will shout it from the rooftops."

It is unclear what repercussions Grammarly will face after this retraction and meaningless apology. Wired reports that the class-action lawsuit calls for at least $5 million in damages. Of course, no amount of money paid out in a class-action lawsuit is ever enough. In an ideal world, the plaintiffs would sue Grammarly into nonexistence and the fiery depths of Hell, setting a precedent for other institutions built entirely on theft. Until then, I wish a very merry bankruptcy to a company that was giving off terrible vibes long before it descended into AI psychosis.

If you liked this blog, please share it! Your referrals help Defector reach new readers, and those new readers always get a few free blogs before encountering our paywall.
