Grammarly’s parent company, Superhuman, has pulled the plug on its Expert Review feature after writers discovered that the AI was generating suggestions supposedly “inspired by” their published work, without their knowledge or consent.
The feature, which launched back in August, used third-party LLMs to surface writing suggestions styled after influential writers and experts. The problem? Those experts had no idea their work was being used this way.
Did Grammarly actually ask anyone?
No, Grammarly did not obtain permission from the people whose likenesses were used as expert references. While other AI companies also scrape data from online libraries and websites without explicitly asking permission, none have traded on individuals' names this blatantly. That is where Grammarly went off the rails.

The backlash began after The Verge’s editor-in-chief and several staff members discovered that their names were being used as style references within the tool. As expected, they were not happy. Superhuman’s initial response was to launch an opt-out email inbox for affected writers, but even that was not enough to calm things down.
Now, the company has disabled the feature entirely. “Based on the feedback we’ve received, we clearly missed the mark. We are sorry and will do things differently going forward,” said Ailian Gan, Superhuman’s director of product management.
What comes next?
Superhuman CEO Shishir Mehrotra took to LinkedIn to apologize and outline an opt-in vision for the future: one where experts can choose to participate and even build a business model around it.
He said, “For experts, this is a chance to build that same ubiquitous bond with users, much like Grammarly has. But in this world, experts choose to participate, shape how their knowledge is represented, and control their business model.”
It’s an interesting idea, but the damage is already done. Asking for forgiveness instead of permission is rarely a good look, especially when the people you are impersonating are journalists.