Ethics · HIGH

Grammarly's AI 'Expert Reviews' Spark Controversy Over Misattributed Advice

Source: The Verge · Original Author: Stevie Bonifield · 2 min read · Intelligence Analysis by Gemini

The Gist

Grammarly's AI 'Expert Review' feature drew backlash for attributing AI-generated advice to real experts who never gave or endorsed it.

Explain Like I'm Five

"Imagine a smart robot that helps you write. It started giving advice and saying famous writers like Stephen King or even your teacher gave it, but they didn't actually say those things. People got upset because it wasn't clear the robot made it up, and the real people didn't agree to it."

Deep Intelligence Analysis

The deployment of Grammarly's 'Expert Review' feature, which attributed AI-generated writing suggestions to real and often prominent individuals without their explicit consent, represents a significant ethical misstep in the evolving landscape of AI-powered tools. This practice, revealed to include deceased academics and living journalists, directly challenges established norms of intellectual property, personal likeness rights, and transparent content attribution. The core issue is the blurring of lines between genuine human expertise and algorithmic output, presented under a veneer of authority that could mislead users and devalue the contributions of actual experts.

Grammarly, which rebranded as Superhuman in October 2025, launched the feature in August 2025. It presented AI-inspired advice under the names of figures like Stephen King and Neil deGrasse Tyson, accompanied by a 'verified-style' checkmark, despite a subtle disclaimer buried in a side panel stating no affiliation or endorsement. This opacity was compounded by reports of generic advice and broken source links, further undermining the credibility of the 'expert' suggestions. The controversy escalated in March 2026, when Wired and The Verge reported that the feature had used the names of their own staff members without permission, pointing to a systemic issue rather than isolated incidents.

This episode carries substantial forward-looking implications for the AI industry. It underscores the urgent need for robust ethical frameworks governing the use of public data and personal likenesses in generative AI applications. Companies must prioritize explicit consent, clear attribution, and transparent disclosure mechanisms to avoid reputational damage, legal challenges, and erosion of user trust. The incident serves as a critical case study, demonstrating that technical innovation must be meticulously balanced with stringent ethical considerations to ensure responsible AI development and foster a sustainable, trustworthy ecosystem for AI-powered services.
AI-assisted intelligence report · EU AI Act Art. 50 compliant

Impact Assessment

This incident highlights the critical ethical and legal challenges of AI systems generating content attributed to real individuals without explicit consent or clear disclosure. It underscores the reputational risks for companies deploying AI features that blur the lines between AI-generated content and human expertise, potentially eroding user trust and inviting legal scrutiny.

Read Full Story on The Verge

Key Details

  • Grammarly launched its 'Expert Review' feature in August 2025.
  • The feature generated writing suggestions 'inspired by' experts, displaying their names and a checkmark icon.
  • Names used included deceased figures like Carl Sagan and living journalists, none of whom had given explicit permission.
  • A subtle disclaimer in the side panel noted no affiliation or endorsement by the named individuals.
  • Grammarly rebranded as Superhuman in October 2025, following its acquisition of Superhuman Mail in June 2025.

Optimistic Outlook

This public scrutiny could force AI developers to implement more robust ethical guidelines and transparency mechanisms for attribution, leading to a higher standard for AI-generated content. Increased awareness might also spur innovation in verifiable AI sourcing and consent frameworks, ultimately benefiting users and legitimate experts by fostering trust and accountability.

Pessimistic Outlook

The incident could fuel public distrust in AI tools, particularly those offering 'expert' advice, hindering adoption and innovation in beneficial AI applications. Companies might become overly cautious, stifling creative uses of AI, or conversely, continue to push ethical boundaries, leading to more widespread issues of misattribution and intellectual property disputes.
