Social media users report a rise in bizarre AI-generated images. What does this tell us about the direction of viral content?


The AI-generated viral “Shrimp Jesus” image.

From 24 to 31 October, the world marks Global Media and Information Literacy Week, an annual event first launched by UNESCO in 2011 as a way for organizations around the world to share ideas and explore innovative ways to promote Media and Information Literacy for all.

For this year’s theme — “The New Digital Frontiers of Information: Media and Information Literacy for Public Interest Information” — News Decoder presents a series of articles and a Decoder Dialogue webinar on different aspects of media literacy.

We launched this series 24 October with a look at an effort in Finland to make media literacy a core component of primary education. On 25 October we explored the concept of media framing and ways news can shape thought. On 28 October, we presented a compilation of articles on media literacy. We will end the series 30 October with a Decoder Replay of an article that looks at the ramifications of using labels to identify groups of people.

Today we look at the role of artificial intelligence in spam content and disinformation. In addition to the article, today at 18:00 CET we’re hosting a Decoder Dialogue, “From Newsrooms to Classrooms: Real Talk About Artificial Intelligence,” an online roundtable that will bring together experts and students to talk about fears and hopes for the new technology.

Earlier in October I noticed an image on my Facebook timeline that looked unlike anything I’d ever seen before. It showed a room with a large, white couch around a wicker table and a plunge pool. Beside it was a dining area with modern furniture and glass doors. The room was light, spacious and, perhaps most enticingly, in a cave overlooking a sea with turquoise water.

What made the image even more unusual were the details of the furniture I noticed when I zoomed in closely. Some of the chairs and tables looked like they were floating in the air, while other chairs appeared to be disconnected from their legs — not impossible, but not probable for a luxury cave hotel by the Mediterranean.

If I hadn’t already been convinced that this image was AI-generated, a look at the Facebook account would have given me extra hints. The account posts dozens of photos daily of beautiful rooms with weird details, over-the-top luxury real estate and misshapen exotic fruit. However, what other people report seeing on their timelines lately can get much weirder than that.

The bizarre trend of “AI slop”

One example in particular has become widely discussed: an image of a figure with the head of Jesus and the body of a shrimp. It was first posted around March 2024 and was dubbed “Shrimp Jesus” after going viral. There are other versions out there: Potato Jesus, Carrot Jesus, Jesus in scrambled eggs, Jesus floating on a grapefruit boat, Jesus riding a paprika tiger or a seahorse made of plastic Pepsi bottles.

You might also see images of a family of sad-looking cats, fake historical photos or birthday images of computer-generated elderly people asking viewers to leave a comment with warm wishes. Not to mention an Instagram account for a non-existent restaurant in Austin, Texas, with all AI-generated food images and nearly 80,000 followers.

Such AI-generated spam visuals are frequently called “AI slop.” Some of the accounts producing AI slop have hundreds of thousands of followers, but how many of them are authentic users is unclear. Stanford Internet Observatory researchers report that AI slop can sometimes reach the top 20 list of most popular content on Facebook. One post in 2023 with an AI-generated image reached 40 million viewers.

Platforms reward spam.

You don’t have to look for Shrimp Jesus. It finds you.

Meta Transparency Center data shows that in the second quarter of 2024, 30% of the content on U.S. Facebook users’ timelines came from sources they don’t follow and aren’t connected to. Instead, it was surfaced by Facebook’s AI recommendation systems, which can sometimes push AI-generated content on users.

Spammy AI images have gone viral during high-stakes news events too. When Hurricane Helene hit Florida in September, a fake image of a girl in a lifeboat holding a puppy received millions of views on X. This year, odd and clearly fake AI images and videos were spread around the globe in the context of elections. News Decoder wrote about some of them.

Since they’re relatively easy to spot as inauthentic, the fact that they’re widespread can be puzzling. But not to Victoire Rio, executive director of What to Fix, a tech policy and accountability nonprofit. In a recent report, the organization focused on how social media content gets monetized by bad actors.

“The platforms are central to the problem due to their incentive structure,” Rio said.

Owners of popular accounts can monetize not only through subscriptions, brand deals or redirecting people to sites where they sell products or services, but also by joining programs that share ad revenue with creators on the platform. What to Fix estimates that platforms redistribute around $20 billion in ad revenue to about five million social media accounts annually.

A large share of this money doesn’t go to quality content creators or even lifestyle influencers with large audiences. It goes to users who have learned to game this system.

“A vast volume of those accounts use automated processes to pump out content. It used to be primarily stolen content, now it’s AI generated,” Rio said. In other words, whatever is cheaper and quicker to post.

Engineers of virality

“We see people running 10-13,000 accounts out of software which allows them to manage fake accounts,” said Rio. “That’s how they’re able to game the algorithm and get their content recommended. They learn the right formula of how many views you need in the very early stage of engagement, just enough to trigger the algorithmic recommendation.”

Technology-focused media outlet 404 Media reported that many people around the world have made it their business not only to profit from those programs, but also to sell tools and courses to would-be spammers.

Rio argues that this is all possible because there is too little oversight of accounts joining the ad revenue sharing programs. Eligible accounts that fulfill criteria like a minimum follower count get an automatic message prompting them to join. Although there is a review process, there is a lack of clarity around platforms’ rules and their enforcement, What to Fix argues in its report. For example, one of the accounts admitted to the program posted nothing but content copied from the Russian propaganda outlet Russia Today, despite the copyright violations involved and the fact that the EU has banned its content.

Real media outlets have a hard time benefiting from these revenue-sharing funds. They can’t compete with content produced so cheaply and distributed on so many accounts.

“When you hear discussions at media conferences about why certain content isn’t performing well, they focus on clickbait titles or better thumbnails,” Rio said. “But that’s based on an erroneous diagnosis of why this content goes viral. What’s actually happening is a clever use of automation that’s deeply unethical.”

Spam news, spam mycology books and spam fiction

The problem of AI spam is not exclusive to Meta-run platforms. NewsGuard, an organization tracking the spread of misinformation, has been monitoring how more and more sites with AI-generated content mimic news organizations. It has identified over one thousand such websites so far. It has also caught a network of TikTok accounts posting AI-generated celebrity gossip and misinformation about European politics and the war in Ukraine in English and French.

“Many of these videos were exactly one minute long or one minute and one second long,” said NewsGuard Senior Analyst Natalie Huet. “That’s the threshold you need to hit to be eligible for the Creator Fund,” TikTok’s monetary reward program for its creators.

“The accounts we selected had published over 9,000 videos in just over a year and had collectively amassed 380 million views,” said Huet. “A lot of these accounts have tens of thousands of followers.”

The issue is also broader than just social media platforms. Wired reported that Google News often ranks spam pages higher than real news. The Guardian described how mushroom foragers now have to be careful not to buy AI-generated books on Amazon with false and dangerous tips, including giving mushrooms a taste test to identify them. The staff of the speculative fiction online magazine Clarkesworld has been overwhelmed sorting through a rapidly increasing number of AI-generated submissions, New York Magazine wrote. The podcast search engine Listen Notes has noticed an influx of low-quality podcasts.

Race to the bottom

The profit motive behind problematic content isn’t new; it’s just been supercharged, said David Evan Harris, a Chancellor’s Public Scholar and lecturer at the University of California, Berkeley, who teaches several courses on AI, ethics and social media. Harris is a former responsible AI researcher at Meta.

“There’s a lot of information about commercially-motivated fake political websites from the 2016 election in the United States,” Harris said. “This pre-dates generative AI and it’s a very common strategy for people all over the world to optimize for low-quality news.”

What has changed is the scale and ease of content creation. And the environmental cost of this content flood adds another troubling dimension.

“If you look at the footprint of this activity, it’s quite significant for absolutely no reason,” Rio said. “We know there’s a huge cost and you have massive abuse of the system for zero value.”

AI systems are known to be energy-intensive. The International Energy Agency reports that “a request made through ChatGPT, an AI-based virtual assistant, consumes 10 times the electricity of a Google Search.” In Ireland, which has become a global tech hub, the agency estimates that by 2026 data centers will account for 35% of the country’s electricity use.

When multiplied across millions of spam posts, the environmental impact becomes substantial.

But the solutions are expected from platforms that, as the media advocacy group Free Press put it in a report, “have deprioritized content moderation and other user trust and safety protections.”

Harris explained this through a metaphor he said is commonly used in tech companies. “Imagine a bear chasing a group of people. The bear is the regulators, and the people being chased are the tech companies,” he said. “The tech companies figured out that the bear is only hungry enough to eat one. The best thing to do if you want to maximize your profits is to be the second slowest, the second to last. You optimally maximize profit by doing as little as you can on trust and safety without being the worst in the industry.”

This race to the bottom is already visible. X has dramatically reduced its content moderation capacity by laying off about 80% of the staff in its trust and safety teams, Harris pointed out. And Meta’s founder Mark Zuckerberg has expressed admiration for X’s staff cuts and has since laid off tens of thousands of his own employees over the past few years.

A large reason is that Meta is not a front-runner in the race to develop AI technology, Harris said. “Zuckerberg needed to demonstrate that he was willing to cut staff and make his company more profitable.”

But Rio thinks the focus on moderation is the wrong place to look for solutions; the incentives are what should be addressed.

“A lot of people in the tech policy space are stuck in this conversation around content and are not getting that fundamentally, it is content that is getting generated in the first place because of the incentive structure,” Rio said. “It should be the responsibility of the platforms to address the incentives,” she explained.

The solution, Rio suggests, requires rethinking how we and the platforms value online content.

“We’ve let social media companies define the value of content on the basis of engagement,” Rio said. “As a society, we probably should take back control and reclaim how we value information.”

Three questions to consider:

  1. What is “AI slop”?
  2. What are the dangers of AI-produced content and the volume of content being produced?
  3. How has this content production affected the way in which tech companies operate?
Sabīne Bērziņa

Sabīne Bērziņa is News Decoder’s user experience manager for its Promoting Media Literacy & Youth Citizen Journalism through Mobile Stories project. She is a media literacy curriculum and tool developer, as well as a journalist with nine years of experience in the media industry. She lives and works in Latvia.
