Meta says AI-generated election content is not happening at a “systemic level”

Meta has seen strikingly little AI-generated misinformation around the 2024 elections, despite major elections having already taken place in countries such as Indonesia, Taiwan, and Bangladesh, the company’s president of global affairs, Nick Clegg, said on Wednesday.

“The interesting thing so far — I stress, so far — is not how much, but how little AI-generated content [there is],” said Clegg during an interview at MIT Technology Review’s EmTech Digital conference in Cambridge, Mass.

“It is there, it is discernible. It’s really not happening on … a volume or a systemic level,” he said. Clegg said Meta has seen attempts at interference in, for example, the Taiwanese election, but that such interference has remained at a “manageable amount.”

With voters heading to the polls this year in more than 50 countries, experts have raised the alarm over AI-generated political disinformation and over malicious actors using generative AI and social media to interfere with elections. Meta has previously faced criticism over its content moderation policies around past elections, for example when it failed to prevent the January 6 rioters from organizing on its platforms.

Clegg defended the company’s efforts at preventing violent groups from organizing, while also stressing the difficulty of keeping up. “This is a highly adversarial space. You play Whack-a-Mole, candidly. You remove one group, they rename themselves, rebrand themselves, and so on,” he said.

Clegg argued that the company is “utterly different” when it comes to moderating election content than it was in 2016. Since then, the company has removed over 200 “networks of coordinated inauthentic behavior,” he said. The company now relies on fact checkers and AI technology to identify unwanted groups on its platforms. 

Earlier this year, Meta announced it would label AI-generated images on Facebook, Instagram, and Threads. It has started adding visible markers to such images, as well as invisible watermarks and metadata embedded in the image files. The markers are applied to images created with Meta’s own generative AI systems and to images that carry invisible industry-standard markers. The company says these measures are in line with best practices laid out by the Partnership on AI, an AI research nonprofit.

But at the same time, Clegg admitted that tools to detect AI-generated content remain imperfect and immature. Watermarking is not adopted industry-wide, is easy to tamper with, and is difficult to apply robustly to AI-generated text, audio, and video.
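
To illustrate how fragile such metadata markers can be, here is a minimal sketch using the Pillow imaging library. The "ai_generated" key is a hypothetical stand-in for the industry-standard provenance credentials described above, not Meta's actual scheme; the point is only that a plain re-save of the file silently strips the tag.

```python
# Minimal sketch of a metadata-based provenance marker (not Meta's scheme).
# The "ai_generated" key is hypothetical, used purely for illustration.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Embed a provenance tag in a PNG's text metadata.
img = Image.new("RGB", (64, 64), "white")  # stand-in for a generated image
meta = PngInfo()
meta.add_text("ai_generated", "true")      # hypothetical metadata key
img.save("tagged.png", pnginfo=meta)

# Reading the tag back works...
tagged = Image.open("tagged.png")
print(tagged.text.get("ai_generated"))     # -> "true"

# ...but re-saving without passing the metadata drops it entirely,
# which is why metadata alone is a weak signal for detection.
tagged.save("stripped.png")
print(Image.open("stripped.png").text.get("ai_generated"))  # -> None
```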

Ultimately, that should not matter, Clegg said, because Meta’s systems should be able to detect mis- and disinformation regardless of its origin.

“AI is a sword and a shield in this,” he said.

Clegg also defended the company’s decision to allow ads claiming the 2020 US election was stolen, noting that these kinds of claims are common throughout the world and that it’s “not feasible” for Meta to litigate past elections.

You can view the full interview with Nick Clegg and MIT Technology Review executive editor Amy Nordrum below.