Grammarly's AI 'Expert Reviews' Spark Controversy Over Misattributed Advice
Sonic Intelligence
The Gist
Grammarly's AI 'Expert Review' feature faced backlash for misattributing advice.
Explain Like I'm Five
"Imagine a smart robot that helps you write. It started giving advice and claiming that famous writers like Stephen King, or even your teacher, had given that advice, but they never actually said those things. People got upset because it wasn't clear the robot made it up, and the real people never agreed to it."
Deep Intelligence Analysis
Grammarly, which later rebranded as Superhuman, launched the feature in August 2025. It presented AI-generated advice under the names of figures like Stephen King and Neil deGrasse Tyson, accompanied by a verified-style checkmark, with only a subtle disclaimer buried in a side panel stating that the named individuals had no affiliation with or endorsement of the product. This opacity was compounded by reports of generic advice and broken source links, further undermining the credibility of the 'expert' suggestions. The controversy escalated in March 2026, when reports by Wired and The Verge revealed that the feature had used the names of their own staff members without permission, pointing to a systemic issue rather than isolated incidents.
This episode carries substantial forward-looking implications for the AI industry. It underscores the urgent need for robust ethical frameworks governing the use of public data and personal likenesses in generative AI applications. Companies must prioritize explicit consent, clear attribution, and transparent disclosure mechanisms to avoid reputational damage, legal challenges, and erosion of user trust. The incident serves as a critical case study, demonstrating that technical innovation must be meticulously balanced with stringent ethical considerations to ensure responsible AI development and foster a sustainable, trustworthy ecosystem for AI-powered services.
Impact Assessment
This incident highlights the critical ethical and legal challenges of AI systems generating content attributed to real individuals without explicit consent or clear disclosure. It underscores the reputational risks for companies deploying AI features that blur the lines between AI-generated content and human expertise, potentially eroding user trust and inviting legal scrutiny.
Key Details
- Grammarly launched its 'Expert Review' feature in August 2025.
- The feature generated writing suggestions 'inspired by' experts, displaying their names and a checkmark icon.
- Names used included deceased figures like Carl Sagan and living journalists without their explicit permission.
- A subtle disclaimer in the side panel noted no affiliation or endorsement by the named individuals.
- Grammarly rebranded as Superhuman in October 2025, following its acquisition of Superhuman Mail in June 2025.
Optimistic Outlook
This public scrutiny could force AI developers to implement more robust ethical guidelines and transparency mechanisms for attribution, leading to a higher standard for AI-generated content. Increased awareness might also spur innovation in verifiable AI sourcing and consent frameworks, ultimately benefiting users and legitimate experts by fostering trust and accountability.
Pessimistic Outlook
The incident could fuel public distrust in AI tools, particularly those offering 'expert' advice, hindering adoption and innovation in beneficial AI applications. Companies might become overly cautious, stifling creative uses of AI, or conversely, continue to push ethical boundaries, leading to more widespread issues of misattribution and intellectual property disputes.