This week, an AI-generated rap featuring Angela Rayner racked up millions of views and tens of thousands of reactions on Facebook. With its Rayner sporting a gold chain and an Adidas tracksuit while handling suspiciously blurry-looking banknotes, it is obvious to most viewers that the video is the product of generative AI. The creators, the Crewkerne Gazette, who run satire pages on Facebook and YouTube, have made a series of parody songs – all AI-generated – featuring other notable figures such as Keir Starmer, Nigel Farage and King Charles. With attention from millions of social media users, and even the national press, they are now pushing to get the song to the top spot in the UK Top 40.
You can’t blame them. Low-effort, inflammatory, part-satire, part-commentary “AI slopaganda” has been flooding social media for months now. It has proved to be an effective way to get attention, money and political influence online.
Many of these videos are not clear-cut satire. They mimic on-the-ground news reports, depicting interviews with small boat arrivals, or purport to be vlogs from the Channel crossings themselves. In these videos, AI-generated migrants say they have come to the UK so that the government will give them money and a new phone, or that they “already have a job at Deliveroo”.
Most commenters are aware of the joke, but some people are still getting duped. One commenter asks: “Is this real? Which news channel was this on please.”
Early last year, when we uncovered more than 100 deepfakes of the then prime minister, Rishi Sunak, we expected that the harm of generative AI tools would stem from their ability to deliberately mislead the public about facts. We thought deepfakes, doppelganger fake news sites or a voice-cloned “hot mic” moment would strike on election day and sway votes with a fake scandal about a politician.
It turns out we were wrong. The real lever of influence lies in the mass generation of low-quality slop content.
Last year, in a presidential debate, Donald Trump repeated the false claim that Haitian immigrants in Ohio had been eating people’s pets. Afterwards, hundreds of AI-generated images of Trump “rescuing” cats and dogs flooded social media. One of these images, posted by House Judiciary GOP, has 88m views on X.
It is an example of how easily AI content can push our emotional buttons, reaffirming or amplifying existing beliefs with generated imagery. Social media studies going back to 2012 have consistently found that the secret to virality lies in making you emotional – whether that’s angry, sad, hopeful or happy.
During the Covid-19 pandemic, when we were fighting health misinformation in 10 Downing Street, we saw this play out with harmful viral narratives. Hope drove people to believe that you could test for Covid by holding your breath. Anger caused people to believe that 5G masts were spreading the virus.
It is not just our own brains that are to blame, though. Social platforms directly incentivise the creators of AI slopaganda by promoting and rewarding it.
By directly paying creators of content that drives engagement on X, while relaxing moderation policies, Elon Musk’s platform created a huge incentive for divisive, misleading and shocking content. A similar combination of structural and psychological factors led to Macedonian teenagers making thousands of pounds posting outlandish fake news during the 2016 US presidential election – with no real skin in the game.
Effective political operators are aware not only of how quickly this type of content can go viral, but also of the economic incentives to spread it.
Earlier this year, online conspiracy theorists turned their attention to Keir Starmer and Lord Alli. Posters on X claimed that leaked CCTV showed the pair in a compromising position. No such clip existed, but to meet the demand for one, a conspiracy TikTok account with no real interest in UK politics quickly generated a fake CCTV clip. In Germany, meanwhile, AI influencers were found to have told their followers to vote for the Alternative für Deutschland (AfD).
It is no surprise that this new form of political communication is being used more by the right than the centre and the left. Inflammatory, low-effort AI videos are essentially a form of “shitposting”, an art form honed by 4chan users since the early 2000s. To get attention online, content doesn’t need to be well crafted, high-effort or even enjoyable to watch – it just needs to wind you up.
Many political communicators are understandably wary of using AI in their strategy, particularly as it has a strong association with a certain political leaning. But, as generative AI starts being baked into people’s phones and jammed into their WhatsApp messages, that will soon change.
Our feeds getting filled with weird-looking AI clips is, unfortunately, inevitable. If platforms stop rewarding anger-inducing posts, we may see a cooling off of this trend. But right now, AI slopaganda is what the algorithms want.
Like the millions of image macros and cat memes created in the mid-2000s, AI slop will soon just be part of the political language we all use. We might not like to see it, but, right now, that’s exactly why it goes viral.
• Marcus Beard is a digital, disinformation and AI specialist, and was a Downing Street official who headed No 10’s response to conspiracy theories