How verified accounts helped make fake images of a Pentagon explosion go viral
The image contains hallmarks of being generated using a text-to-image AI tool.
Verified accounts on Twitter may have contributed to the viral spread of a false claim that an explosion was unfolding at the Pentagon.
Around 8:42 a.m. on Monday, a verified account on Twitter, labeling itself as a media and news organization, shared a fake image of smoke billowing near a white building that it claimed was the Pentagon. The tweet's caption also misrepresented the Pentagon's location.
No such incident took place, the Arlington County Fire Department later said on Twitter. The Pentagon, the headquarters building of the U.S. Department of Defense, is located in Arlington County, Virginia.
A Pentagon spokesperson also told ABC News that no explosion had occurred.
But throughout the morning, the fake image and misleading caption picked up steam on Twitter. Cyabra, a social analysis firm, analyzed the online conversation and found that some 3,785 accounts had mentioned the falsehoods, dozens of which were verified.
"The checkmark may well have contributed to giving the account the appearance of authenticity, which would have helped it achieve virality," Jules Gross, a solutions engineer at Cyabra, told ABC News.
The accounts spreading the image did not appear to be coordinated, according to Cyabra.
"The bad news is that it appears that just a single account was able to achieve virality and cause maximum chaos," Gross added.
While ABC News has not been able to determine the source of the content, nor confirm whether the 8:42 a.m. tweet was the original, the image contains many hallmarks of being generated using a text-to-image AI tool.
There are many visual inconsistencies in the image, including a streetlamp that appears to be both in front of and behind the metal barrier. The building itself also does not resemble the Pentagon.
Text-to-image tools powered by artificial intelligence allow users to input a natural language description, called a prompt, to get an image in return.
In the last few months, these tools have become increasingly sophisticated and accessible, leading to an explosion of hyperrealistic content fooling users online.
The original false tweet was eventually deleted, but not before it was amplified by a number of accounts on Twitter bearing the blue check that was once reserved for verified accounts, but which can now be purchased by any user.
ABC News could not immediately reach a spokesperson for Twitter to request comment.
What are the solutions?
"Today's AI hoax of the Pentagon is a harbinger of what is to come," explained Truepic CEO Jeff McGregor, who says his company's technology can add a layer of transparency to content posted online.
Truepic, a founding member of the Coalition for Content Provenance and Authenticity, has developed a camera technology that captures, signs, and seals critical details in every photo and video, such as time, date, and location.
The company also created tools that would allow users to hover over a piece of AI-generated content to find out how it was fabricated. In April, they published the first "transparent deepfake" to showcase how the technology works.
While some companies have adopted the C2PA technology, it's now up to social media platforms to make that information available to their users.
"This is an open-source technology that lets everyone attach metadata to their images to show that they created an image, when and where it was created, and what changes were made to it along the way," Dana Rao, general counsel and chief trust officer at Adobe, told ABC News. "This allows people to prove what's real."
Alterations would be identified. For example, if an image was cropped or filtered, that information could be displayed, but the user would also be able to select how much data they make available to the public.
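The provenance approach described above can be illustrated in miniature: bind the metadata (when an image was captured, what edits were made) to a hash of the image bytes, then sign the bundle so any tampering is detectable. The sketch below is only a conceptual illustration, not the actual C2PA format — real C2PA manifests use public-key certificates and a standardized binary structure, whereas this example substitutes a shared-key HMAC and a hypothetical signing key for brevity.

```python
import hashlib
import hmac
import json

# Hypothetical shared key, for this illustration only. Real content
# provenance systems sign with private keys backed by certificates.
SIGNING_KEY = b"demo-signing-key"

def create_manifest(image_bytes: bytes, metadata: dict) -> dict:
    """Bundle metadata with a hash of the image and sign the bundle."""
    claim = {
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
        "metadata": metadata,
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return claim

def verify_manifest(image_bytes: bytes, manifest: dict) -> bool:
    """Check that neither the image nor its metadata was altered."""
    claim = {
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
        "metadata": manifest["metadata"],
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["signature"])

image = b"\x89PNG...original pixel data..."
manifest = create_manifest(image, {
    "captured": "2023-05-22T08:30:00Z",
    "edits": ["cropped"],
})
print(verify_manifest(image, manifest))                # True: untouched
print(verify_manifest(image + b"altered", manifest))   # False: tampered
```

Because the signature covers both the image hash and the edit history, changing either the pixels or the recorded metadata breaks verification — which is what lets a platform surface "this image was cropped, nothing else" to its users.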
On Monday, the Institute for Strategic Dialogue, an organization dedicated to countering extremism, hate and disinformation, provided both state and local law enforcement with a written briefing detailing the incident.
"Security and law enforcement officials are increasingly concerned about AI-generated information operations intended to undermine credibility in government, stoke fear or even incite violence," said John Cohen, an ABC News contributor and former acting undersecretary for intelligence.
"Digital content provenance will help mitigate these events by scaling transparency and authenticity in visual content and empowering users and creators," added McGregor.