Study Reveals Widespread Undisclosed AI Use in Scientific Publications
Sonic Intelligence
A PNAS study found widespread undisclosed AI use in scientific papers.
Explain Like I'm Five
"Imagine if you got help from a super-smart robot to write your school report, but you didn't tell your teacher. A new study found that many grown-up scientists are doing something similar when they write their papers, even though the rules say they should tell everyone. This makes it hard to know who really did all the work."
Deep Intelligence Analysis
The findings indicate that approximately 70% of the journals examined have implemented official policies on generative AI tools, which the study categorizes as outright bans, mandatory disclosure requirements, open policies, or no mention at all. Despite this widespread policy adoption, compliance is strikingly low: since 2023, only 0.1% of published papers (76 of 75,172) have explicitly acknowledged AI assistance. This gap points to a critical lack of transparency within the scientific community.
The analysis also found that growth in AI-assisted content is highest in the physical sciences and in non-English-speaking countries, notably China and Brazil. Several factors may explain the non-disclosure trend. Authors may worry that admitting AI use would cast doubt on the originality of their work and damage their professional reputations. In addition, ambiguous guidelines often leave researchers uncertain about when disclosure is actually required, for instance whether grammar checking triggers the same obligation as substantive content generation.
The study's authors call for an urgent reevaluation of ethical frameworks, concluding that "current policies have largely failed to promote transparency or restrain AI adoption" and advocating revised guidelines to foster responsible AI integration in scientific research. The report is a call to action for academic institutions, publishers, and researchers to confront the ethical implications of AI in scholarly work and preserve the integrity and trustworthiness of scientific communication.
Impact Assessment
The significant gap between AI policy prevalence and actual disclosure rates highlights a growing transparency crisis in scientific publishing. This lack of acknowledgment undermines research integrity, raises questions about authorship, and could erode public trust in scientific findings.
Key Details
- A study analyzed over 5.2 million papers from 5,114 journals published between 2021 and 2025.
- Approximately 70% of examined journals now possess official AI usage policies.
- Only 0.1% (76 out of 75,172) of papers published since 2023 disclosed AI tool usage.
- AI-assisted writing growth is highest in physical sciences and non-English-speaking nations like China and Brazil.
- The report was published in PNAS on March 6, 2026, by Yongyuan He and Yi Bu.
Optimistic Outlook
Increased awareness from this study could prompt journals and institutions to develop clearer, more enforceable AI disclosure policies. This could lead to a more transparent and ethically sound integration of AI in research, ultimately enhancing the quality and credibility of scientific output by establishing new best practices.
Pessimistic Outlook
If the trend of non-disclosure continues, it could foster an environment of distrust within the scientific community and among the public. Ambiguous policies and author concerns about reputation might hinder the development of robust ethical guidelines, potentially leading to a decline in the perceived originality and reliability of academic work.