'A real worry': How AI is making it harder to spot fake images

Fake images of Donald Trump and Pope Francis have emerged recently.

April 7, 2023, 6:08 AM

As cameras captured former President Trump entering Manhattan criminal court earlier this week to face a 34-count indictment, a number of images of him began circulating on social media.

Some of the images, which were fabricated, appeared to be mug shots of the former president, even though his lawyers told reporters that no booking photo was taken during police processing on Tuesday.

Trump’s 2024 presidential campaign nonetheless capitalized on the trend, sending out an email advertising a T-shirt with a fake mug shot of Trump that could be purchased online. It's a tactic the Trump team has turned to before, fundraising off both of his impeachments and the FBI raid of Mar-a-Lago in a search for classified documents.

While that image was a more obvious fabrication, others were markedly more subtle and may have been generated by artificial intelligence. They are the latest in a series of hyper-realistic fake images that have deceived many online and raised concerns over the sophistication and accessibility of AI-powered tools.

PHOTO: A deepfake mug shot of former President Donald Trump. (Midjourney/Twitter)

Here's what to know about these AI-generated images:

What other images went viral?

A fabricated image of Pope Francis wearing a floor-length, white puffer jacket racked up over 30 million views last week across several posts. It's just the latest in a series of recent images that have flooded social platforms.

"These cases where it feels like the stakes are low are worrying because it shows that when our guard is not up, the general public is more susceptible to fakes," Henry Ajder, an AI researcher who hosts a podcast on the technology on BBC Radio, told ABC News.

The fictional image of Pope Francis was first posted by a user in a subreddit dedicated to works created with the AI image generator Midjourney, and it was possibly made with that tool.

PHOTO: A fabricated image of Pope Francis wearing a floor-length white puffer jacket, the latest in a series of hyper-realistic fakes created using an AI image generator. (Reddit/Midjourney)

Last week, Midjourney announced that because of "extraordinary demand and trial abuse" it had paused users' ability to generate images for free. The service is now available only through a variety of subscription plans.

The tool, which ABC News reviewed, is one of several text-to-image tools powered by artificial intelligence that allow users to input a natural language description, called a prompt, and receive an image in return.

Some tools, like OpenAI's DALL-E 2, don't allow users to create images of public figures. OpenAI's content policy also states that users should not upload images of people without their consent.
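For readers curious about what "prompting" looks like in practice, here is a minimal, hypothetical sketch of a text-to-image request using OpenAI's Python client for its image API. The prompt text is made up for illustration, and the exact call syntax varies by library version.

```python
# Minimal sketch of a prompt-driven text-to-image request.
# Assumes the "openai" Python package (pre-1.0 interface); exact syntax
# varies by library version, and the prompt below is purely illustrative.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder credential

response = openai.Image.create(
    prompt="A golden retriever wearing sunglasses, studio portrait",  # natural-language description
    n=1,               # number of images to generate
    size="1024x1024",  # output resolution
)

# The service returns a URL pointing to the generated image.
print(response["data"][0]["url"])
```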

Why are experts alarmed?

It's the hyper-realism of the images that worries synthetic media experts like Ajder.

"When you are scrolling through social media, these images are subconsciously flying past," he said. "You don't need to critically examine an image for it to impact the way you see a person or see the world."

On the day of Trump's court hearing, ABC News found that thousands of fabricated images of Trump had been generated on Midjourney. Only about a dozen spread to social media platforms and circulated more widely, but it wasn't the first time the public had seen an AI fake connected to him.

PHOTO: A series of AI-generated images of former President Donald Trump, viewed by millions online, circulated on March 20, 2023. (Eliot Higgins/Twitter)

When asked about the viral images created using the company's tool, Midjourney founder David Holz told ABC News that the company is working on more "nuanced moderation policies based on community feedback."

"There are always risks that are hard to predict and the goal has to be to find them, adapt to them, and move forward," Holz added in an email.

On March 20, as news of former President Donald Trump's possible indictment made headlines, a series of fake photos imagining his supposed arrest circulated on Twitter.

The former president had not yet been arrested, but just days earlier he had predicted (incorrectly) that his arrest was imminent.

The Trump photos, which depicted events that never happened, were created by Eliot Higgins, the founder of the Netherlands-based investigative news outlet Bellingcat.

"Making pictures of Trump getting arrested while waiting for Trump's arrest," Higgins tweeted on March 20 along with the images. Higgins told ABC News he created the series of images for fun.

Higgins told ABC News he was surprised the fake images of Trump received so much attention, but said it was good to see that they encouraged discussion around AI image creation.

What is causing this wave of hyper-realistic fakes?

Experts like Sam Gregory, executive director of the global human rights network WITNESS, say it's a combination of factors: the ease of use and accessibility of these tools, improved photo-realism and the ability to churn out images at volume.

"This is a real worry," said Gregory, who has spent the last five years leading an initiative to prepare journalists and educate the public on the potential harms of AI-generated media.

Gregory added that a commercial arms race between AI companies is contributing to the rapid development of these tools and the lack of safeguards.

"We're also in the middle of a headlong commercial rush that is completely about the needs of Silicon Valley, and ignoring the needs of most people across the U.S. and frankly, most people across the world who might say wait a second, where are the safeguards here? How are you making sure these are not misused," Gregory said.

What are the solutions?

Ajder said the onus should be on the companies creating the AI technology to limit access by "creating friction for bad actors."

Steps like providing bank details or verifying users' identities through other accounts might make it more challenging for some to misuse the tools, he told ABC News.

Gregory stressed the importance of not putting all the pressure of identifying AI-generated media on the public, but instead focusing on making detection tools widely available.

"We're going to be living with tremendously creative power that's more distributed, more available, more fun in many ways, but we have to really understand how we put these guardrails around it," he said.

Hundreds of top AI researchers, along with some of the biggest names in tech, signed a letter this week urging labs to immediately pause training of powerful new AI systems for six months, in order to ensure their "effects will be positive and their risks manageable."