Grammarly's 'Expert Review' Feature Accused of Unauthorized Identity Use and Flawed Sourcing
Ethics

Source: The Verge · Original author: Stevie Bonifield · 2 min read · Intelligence analysis by Gemini

Signal Summary

Grammarly's AI 'expert review' feature uses public figures' identities without their permission and attributes its advice to them through questionable sourcing.

Explain Like I'm Five

"Imagine a writing helper app that says, 'Here's advice inspired by your favorite author!' But that author never said the app could use their name, and sometimes the advice isn't even from them, or the links to their work don't go to the right place. It's like someone using your name to give advice without asking you first."

Original Reporting
The Verge

Read the original article for full context.


Deep Intelligence Analysis

Grammarly's 'expert review' feature, launched in August, is facing significant scrutiny for allegedly using the identities of numerous public figures without their permission, coupled with issues of inaccurate sourcing. The feature purports to offer AI-generated writing advice 'inspired by' subject matter experts, including prominent journalists, authors, and even deceased professors.

Reports indicate that several staff members from The Verge, including its editor-in-chief, were named as 'experts' providing AI-generated feedback, despite never having granted Grammarly permission. This unauthorized use extends to a wide array of tech journalists and other public figures, raising serious concerns about intellectual property rights and personal consent in the age of AI. Grammarly's parent company, Superhuman, defended the practice by stating that the experts appear because their published works are publicly available and widely cited. However, this defense sidesteps the ethical implications of leveraging personal identities to lend credibility to an AI feature without explicit consent.

Further compounding the issue are significant problems with the feature's sourcing. Users attempting to 'explore more deeply' into the experts' work often encountered crashes, links to spammy copies of legitimate websites, archived pages rather than the original source, or completely unrelated links. This not only undermines the credibility of the AI-generated suggestions but also suggests a lack of rigor in how the AI model attributes its 'inspiration.' In some instances, suggestions attributed to one named expert were found to be based on another person's work, indicating a fundamental flaw in the attribution mechanism.

This situation highlights a critical ethical dilemma for AI developers: how to balance the utilization of vast public data for training and feature development with the imperative to respect individual identities, intellectual property, and the need for transparent, accurate sourcing. The incident underscores the potential for AI tools to inadvertently (or intentionally) create reputational risks, spread misinformation, and erode user trust if ethical guardrails and robust verification processes are not rigorously implemented.
AI-assisted intelligence report · EU AI Act Art. 50 compliant

Impact Assessment

This incident raises significant ethical and legal questions regarding intellectual property, consent, and the responsible use of public data by AI tools. It highlights the potential for reputational harm, misinformation, and a lack of transparency in how AI models attribute and source their 'inspiration,' eroding trust in AI-powered services.

Key Details

  • Grammarly's 'expert review' feature, launched in August, offers AI-generated writing advice 'inspired by' subject matter experts.
  • The feature named numerous living journalists and deceased professors, including staff from The Verge, without their consent.
  • Grammarly's parent company, Superhuman, stated experts appear because their published works are publicly available and widely cited.
  • The feature's 'sources' often linked to spammy, archived, or unrelated websites, and expert descriptions contained inaccuracies.
  • Some AI suggestions attributed to one expert appeared to be based on another person's work, as revealed by the source links.

Optimistic Outlook

This controversy could prompt AI developers to establish clearer guidelines for obtaining consent and attributing sources when leveraging public identities and works. It may lead to improved transparency features in AI tools, allowing users to verify information and ensuring greater respect for intellectual property rights and personal branding.

Pessimistic Outlook

The unauthorized use of identities and flawed sourcing could set a dangerous precedent for AI tools, normalizing the appropriation of public personas without consent or accurate attribution. This could lead to widespread distrust in AI-generated content, potential legal challenges, and a chilling effect on individuals' willingness to share their expertise publicly.
