Poland Demands EU Action Against TikTok Over AI-Generated 'Polexit' Disinformation
Sonic Intelligence
The Polish government has urged the European Union to investigate TikTok over AI-generated videos advocating 'Polexit' and attacking Poland's pro-EU government, asserting that the content is Russian disinformation. The move invokes the EU's Digital Services Act (DSA) and calls TikTok's moderation mechanisms for AI-generated content into question.
Explain Like I'm Five
"Imagine someone used a clever computer program (AI) to make fake videos of young women talking about Poland leaving the big European club, all on TikTok. The Polish government thinks this is sneaky bad talk from another country (Russia) trying to trick people. So, they told the big EU bosses that TikTok needs to do a much better job stopping these fake videos, like a referee stopping cheating in a game."
Deep Intelligence Analysis
Deputy digital affairs minister Dariusz Standerski highlighted the alarming scale of this activity, suggesting it constitutes an organized disinformation campaign. Government spokesman Adam Szłapka unequivocally labeled it 'Russian disinformation,' pointing to Russian syntax within the video scripts as evidence. This attribution places the incident within a broader geopolitical context of foreign interference in European democracies.
In response, Standerski has sent a letter to Henna Virkkunen, the European Commissioner for tech sovereignty, security, and democracy. The letter explicitly requests the initiation of proceedings against TikTok under the EU's Digital Services Act (DSA). The DSA, a landmark EU regulation enacted in 2022, is designed to enhance the accountability, moderation, and transparency of digital services. Standerski argues that the AI-generated 'Polexit' videos pose a significant threat to public order, information security, and the integrity of democratic processes in both Poland and the wider EU.
A critical aspect of Poland's complaint is the assertion that TikTok has failed to implement adequate mechanisms for moderating AI-generated content and has not ensured effective transparency about the origin of such materials. This alleged failure, according to the minister, directly undermines the DSA's objectives on disinformation prevention and user protection. A precedent for DSA enforcement was recently set when the social media platform X was fined €120 million for non-compliance, illustrating the EU's commitment to upholding the regulation. The targeted TikTok channel, which had existed since May 2023 under various names, has reportedly been removed following numerous user complaints. The incident underscores the urgent challenge posed by sophisticated AI-powered disinformation and tests whether new digital regulations can safeguard democratic integrity.
Impact Assessment
This incident underscores the growing threat of AI-generated disinformation on social media, particularly when wielded by state actors to influence democratic processes. It tests the enforcement power of the EU's Digital Services Act (DSA) and highlights the urgent need for platforms to implement robust moderation against sophisticated, rapidly spreading deceptive content.
Key Details
- DSA went into force in 2022
- X fined €120 million for DSA non-compliance
- TikTok account existed since May 2023
- Deputy digital affairs minister Dariusz Standerski sent a letter on December 30, 2025
- A relevant tweet was posted on December 28, 2025
Optimistic Outlook
The swift action by the Polish government and the invocation of the DSA demonstrate a proactive stance by EU member states and regulatory bodies against digital disinformation. Successful enforcement could set a precedent for holding platforms accountable, fostering a safer information environment and strengthening democratic resilience against foreign interference.
Pessimistic Outlook
The ease with which AI-generated disinformation can be produced and spread, coupled with the potential for platforms to lag in moderation, indicates a challenging future for information security. Persistent campaigns could erode public trust, polarize societies, and undermine democratic integrity, while enforcement actions may prove insufficient to stem the tide of sophisticated AI-powered influence operations.