AI fakes about Iran-U.S. war swirl on X despite policy crackdown

japantimes.co.jp · Anuj Chopra, Sumit Dubey, Maria Clara Pestre
Moderate — some persuasion patterns present

This article aims to convince you that AI-generated disinformation, especially in conflicts like the one in the Middle East, is a serious and growing problem. It uses alarming examples, such as fake videos on social media, and relies on statements from researchers and X's new policy to make its case more credible. However, it does not address other kinds of misinformation, which might leave the impression that AI fakes are the only or the biggest issue.

FATE Analysis

Four dimensions of psychological manipulation: how content captures Focus, exploits Authority, triggers Tribal identity, and engineers Emotion.

Focus: 5/10
Authority: 4/10
Tribe: 2/10
Emotion: 3/10

Focus signals

unprecedented framing
"a surge of lifelike deepfakes despite a policy crackdown to curb wartime disinformation.The Middle East war has unleashed an avalanche of AI-generated visuals, dwarfing anything seen in previous conflicts and often leaving social media users unable to distinguish fabrication from reality, researchers say."

The article uses phrases like 'surge of lifelike deepfakes' and 'dwarfing anything seen in previous conflicts' to frame the current situation as unique and unprecedented, grabbing attention by presenting it as a novel and significant development in warfare and disinformation.

Authority signals

expert appeal
"often leaving social media users unable to distinguish fabrication from reality, researchers say."

The article references unnamed 'researchers' to lend credibility to the claim that AI-generated visuals are indistinguishable from reality, using their implied expertise to bolster the assertion without specifying who these researchers are or what their credentials might be.

institutional authority
"In a bid to protect "authentic information" during conflicts, X announced last week that it would suspend creators from its revenue sharing program for 90 days if they post AI-generated war videos without disclosing they were artificially made."

The article cites X (formerly Twitter) as an institutional authority taking action against disinformation, using its platform policy as evidence of the seriousness of the issue and implicitly aligning with its stance on 'authentic information'.

Emotion signals

fear engineering
"AI-created videos circulating on Elon Musk's X depict American soldiers captured by Iran, an Israeli city in ruins, and U.S. embassies ablaze"

This imagery describes alarming scenarios (captured soldiers, ruined city, ablaze embassies) which are designed to evoke fear and concern about the potential impact of deepfakes on geopolitical stability and national security, even if the article aims to report on the deepfakes themselves.

Narrative Analysis (PCP)

How the article reshapes thinking: Perception (what beliefs are targeted), Context (what information is shifted or omitted), and Permission (what behavior is being encouraged).

What it wants you to believe

The article aims to instill the belief that AI-generated disinformation is a significant and escalating threat, particularly in conflict zones like the Middle East. It targets the reader's belief in the reliability of online information.

Context being shifted

The article shifts the context from traditional forms of misinformation or propaganda to focus specifically on the novel and advanced threat of AI-generated content. This frames the current conflict as uniquely susceptible to technologically sophisticated deception, making the problem seem more urgent and difficult to combat for the average user.

What it omits

The article omits context regarding the prevalence and impact of non-AI generated misinformation and propaganda in conflicts, both historically and currently. By focusing exclusively on AI, it implicitly suggests that this is the primary or most dangerous form of disinformation currently affecting the conflict, potentially downplaying other sources of false or misleading information.

Desired behavior

The reader is nudged towards increased skepticism and caution regarding visual content encountered on social media, especially concerning conflicts. It also subtly prepares the reader to accept platforms' efforts to regulate such content, as evidenced by X's policy.

SMRP Pattern

Four manipulation maintenance tactics: Socializing the idea as normal, Minimizing concerns, Rationalizing with logic, and Projecting blame.

- Socializing: —
- Minimizing: —
- Rationalizing: —
- Projecting: —

Red Flags

High-severity indicators: silencing dissent, coordinated messaging, or weaponizing identity to shut down debate.

- Silencing indicator: —
- Controlled release (spokesperson test): —
- Identity weaponization: —

Techniques Found (1)

Specific propaganda techniques identified using the SemEval-2023 academic taxonomy of 23 techniques across 6 categories.

Loaded Language (Manipulative Wording)
"an avalanche of AI-generated visuals"

The term 'avalanche' is emotionally charged, creating a sense of overwhelming volume and potential destruction that is disproportionate to simply reporting an increase in AI-generated visuals.
