AI-Fueled 'Slopaganda' Deepens US-Iran Conflict, Threatening Informed Public Discourse
The rise of AI-generated propaganda amplifies existing tensions between the US and Iran, exploiting social media algorithms to spread disinformation and undermine informed public discourse.

The escalating information war between the United States and Iran has entered a concerning new phase with the proliferation of AI-generated propaganda, or "slopaganda." This manipulative tactic, leveraging artificial intelligence to create deceptive content, raises serious questions about the future of informed public discourse and the potential for social division.
Following US-Israeli strikes on Iran in early March 2026, the White House disseminated a video blending real attack footage with clips from popular entertainment, potentially glorifying violence and desensitizing the public. In response, Iran amplified its own propaganda efforts, including the spread of outdated war footage and AI-generated content depicting attacks on Tel Aviv and US bases. These actions further escalate tensions and contribute to a climate of mistrust and animosity.
One particularly troubling example is the viral Lego figurine videos, reportedly crafted by Iranians, portraying figures like Donald Trump, Benjamin Netanyahu, and Satan in a cartoonish and provocative manner. This "slopaganda," a term coined in a Filosofiska Notiser paper, is designed to manipulate emotions, attention, and memory to achieve political objectives. Such tactics exploit vulnerabilities in our media landscape, prioritizing emotional responses over factual accuracy.
The weaponization of AI-generated propaganda extends well beyond the US-Iran conflict. In October 2025, Trump shared an AI-generated video depicting himself as a fighter jet pilot dropping feces on protesters, normalizing violence against dissenting voices. Similarly, his promotion of a gaudy presidential library reinforced an image of elitism and detachment from the concerns of ordinary citizens. These examples demonstrate the potential for AI to be weaponized against vulnerable populations and harnessed to a divisive political agenda.
The insidious nature of "slopaganda" lies in its ability to bypass our critical thinking faculties. By flooding social media with attention-grabbing and emotionally charged content, it can erode our capacity for rational analysis and reinforce pre-existing biases. This dilution of the "epistemic environment" with falsehoods and half-truths makes it increasingly difficult for individuals to access reliable information and form informed opinions.
As philosophers have argued, AI tools like ChatGPT can be misused to generate content indifferent to truth, effectively functioning as "machines for bullshit." "Slopaganda" represents a particularly dangerous manifestation of this phenomenon, leveraging AI to forge symbolic associations that manipulate emotions and deepen divisions. The Lego videos, for example, aim to associate the U.S. with evil, feeding a global narrative of mistrust and animosity.
