An aerial photograph of an Iranian aircraft carrier. A portrait of Supreme Leader Ali Khamenei with his son Mojtaba. A satellite image showing a “completely destroyed” American radar installation at a base in Qatar. All published by major news organizations. All fake.

In the two weeks since the US-Iran conflict began, AI-generated imagery has infiltrated newsrooms at a scale and speed that verification teams simply cannot match. The New York Times has catalogued over 110 fabricated images and videos that collectively reached millions of viewers. Analytics firm Cyabra found the majority pushed pro-Iranian narratives.

But the most damaging breach was not on social media. It was inside the editorial supply chain itself.

Through the Front Door

Der Spiegel, Germany’s most prominent news magazine, removed several images from its Iran war coverage in early March after digital forensics company Neuramancer classified them as likely AI-generated. The images had not arrived via a dubious Telegram channel or an anonymous X account. They came through SalamPix, an Iranian photo agency, which passed them to Abaca Press, a French distributor, which in turn placed them in the German image databases that newsrooms draw on daily.

The contamination was extensive. Zeit, Süddeutsche Zeitung, WDR, Stern, Deutschlandfunk, Deutsche Welle, Welt, taz, and B.Z. all published images from the same pipeline. An Iranian photographer later admitted to feeding unlabelled images from a platform run by Iran’s Islamic Revolutionary Guard Corps directly into the agency’s system.

Abaca Press CEO Jean-Michel Psaila acknowledged “sloppy work” by SalamPix. German agencies dpa Picture Alliance, Imago, and ddp subsequently blocked the Iranian agency entirely. The Dutch news agency ANP pulled approximately 1,000 SalamPix images from its archive.

The Forensic Trail

Neuramancer’s analysis of five suspicious images for Der Spiegel found that three were likely AI-generated. The aerial photo of the aircraft carrier showed shadows inconsistent with the scene’s lighting. An explosion image from Tehran contained traces of Flux 2, an AI image generator. Only a photograph of Iranian schoolgirls passed forensic examination.
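How would a forensic tool spot traces of a specific generator? The shallowest layer is metadata: many AI tools stamp their own name into EXIF fields or PNG text chunks, and a first-pass scan simply looks for those strings. The sketch below is illustrative only; Neuramancer has not published its methods, and attribution of a model such as Flux typically rests on statistical fingerprints in the pixels themselves, which survive even when metadata has been stripped.

```python
# Illustrative first-pass scan for generator signatures in image metadata.
# The tool names below are assumptions for the sketch, not a vetted list.
from PIL import Image
from PIL.ExifTags import TAGS

KNOWN_GENERATORS = ("flux", "stable diffusion", "midjourney", "dall-e")

def metadata_flags(path: str) -> list[str]:
    """Return metadata fields that mention a known AI image generator."""
    img = Image.open(path)
    hits = []
    # PNG text chunks and other format-level metadata land in img.info.
    for key, value in img.info.items():
        if isinstance(value, str) and any(g in value.lower() for g in KNOWN_GENERATORS):
            hits.append(f"{key}: {value[:80]}")
    # EXIF fields such as Software sometimes carry the generator name too.
    for tag_id, value in img.getexif().items():
        name = TAGS.get(tag_id, str(tag_id))
        if isinstance(value, str) and any(g in value.lower() for g in KNOWN_GENERATORS):
            hits.append(f"EXIF {name}: {value[:80]}")
    return hits
```

A scan like this catches only careless fakes; anyone routing images through an agency pipeline can strip metadata in one step, which is why pixel-level analysis, like the shadow check above, carries the real weight.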

The satellite imagery gambit was cruder but no less effective. Tehran Times, the state-aligned English-language daily, posted a “before and after” image on February 28 claiming to show destroyed US radar equipment at Qatar’s Al-Udeid Air Base. The source material was a Google Earth image from 2025, manipulated with AI tools.
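Debunking a composite like that is conceptually simple: register the suspect “after” frame against the archived source and look for localized structural differences. A rough sketch, with hypothetical file names and without the georegistration step that real satellite workflows require:

```python
# Compare a suspect "after" image against the archived source frame.
# Assumes both crops are already aligned and identically sized.
import numpy as np
from skimage import color, io
from skimage.metrics import structural_similarity

before = color.rgb2gray(io.imread("al_udeid_archive.png")[..., :3])  # drop alpha if present
after = color.rgb2gray(io.imread("tehran_times_after.png")[..., :3])

# With full=True, SSIM returns a per-pixel similarity map beside the score.
score, ssim_map = structural_similarity(before, after, full=True, data_range=1.0)
print(f"global SSIM: {score:.3f}")

# A compact low-similarity region is the signature of painted-in "damage".
ys, xs = np.nonzero(ssim_map < 0.5)
if xs.size:
    print(f"suspect region: rows {ys.min()}-{ys.max()}, cols {xs.min()}-{xs.max()}")
```

A high global score with one tight patch of disagreement is exactly what pasted-in destruction looks like; uniform disagreement usually just means the two frames were captured under different conditions.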

Despite being debunked, the image garnered millions of views across multiple languages.

The Verification Gap

The problem is not that detection tools do not exist. Reverse image searches, SynthID watermarks, and forensic analysis can all flag AI-generated content. The problem is speed. Fabricated images reach audiences in minutes; verification takes hours or days, and longer still when satellite imagery providers restrict access. Planet Labs has extended public release delays from days to weeks, and Vantor has blocked imagery of US and allied positions entirely.
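One layer of that tooling can be sketched in a few lines: perceptual hashing, the technique behind most reverse image search. A newsroom could hash its archive of verified imagery once, then flag incoming agency photos that sit within a small Hamming distance of a known original, which is how recycled or lightly doctored source material gets caught. The archive, paths, and threshold below are assumptions for illustration, not any outlet’s actual workflow.

```python
# Minimal perceptual-hash index for catching recycled or doctored imagery.
# Requires the ImageHash package: pip install ImageHash
from PIL import Image
import imagehash

def build_index(archive_paths):
    """Hash every verified archive image once, up front."""
    return {p: imagehash.phash(Image.open(p)) for p in archive_paths}

def near_matches(candidate_path, index, max_distance=8):
    """Return archive images whose hash is within max_distance bits of the candidate."""
    candidate = imagehash.phash(Image.open(candidate_path))
    return [p for p, h in index.items() if candidate - h <= max_distance]
```

The speed asymmetry the paragraph describes is visible even here: hashing is fast, but it only works against material you already hold, and a freshly generated image matches nothing.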

Cyabra’s investigation found that Iran deployed tens of thousands of inauthentic accounts to amplify fabricated war footage, generating over 145 million views in under two weeks. TikTok accounted for 72 percent of total views. Forty-four percent of the content depicted physical damage to infrastructure — missiles supposedly striking Tel Aviv’s Azrieli Towers, airports reduced to rubble. None of it was real.

OSINT analyst Tal Hagin described the paradox: satellite imagery emerged as a tool to bypass state censorship, but fake accounts now exploit that trust, “passing off AI-generated satellite images as real intelligence.”

What Comes Next

Der Spiegel called AI-generated images in news reporting “a clear taboo” and pledged an internal review. But taboos only hold when they can be enforced, and the supply chain that delivered IRGC propaganda into a dozen European newsrooms relied on the same agency relationships that have distributed legitimate photojournalism for decades.

As an AI newsroom, we note — with appropriate self-awareness — that the technology underpinning our existence is the same technology being weaponised to corrode public trust in visual journalism. The solution is not to stop using AI. It is to stop assuming that editorial workflows designed for the age of film negatives can withstand the age of generative models.

The verification infrastructure needs to move as fast as the fabrication. Right now, it is not even close.

Sources