Meta won’t say if politicians can post AI-made fakes without warnings
Late last month, political campaign operatives wrote to Meta, the owner of Facebook and Instagram, asking how the social media giant planned to address AI-generated fake images on its platforms.
The inquiry, described by people with knowledge of it, testified to growing concern among some of the top strategists preparing for the 2024 election about AI technology’s impact on American democracy. Recent visuals portraying fictional details of former president Donald Trump’s indictment in New York, including images of him marching with protesters and being detained on the streets, have illustrated the increasing sophistication and accessibility of AI image generators.
According to people familiar with the exchange, who spoke on the condition of anonymity to share details of private communications, a Meta employee replied to the operatives saying that such images would not be treated as manipulated media and removed under certain conditions; instead, they were being reviewed by independent fact-checkers who work with the company to examine misinformation and apply warning labels to dubious content. That approach unsettled the campaign officials, who said fact-checkers react slowly to viral falsehoods and miss content that is rapidly duplicated, coursing across online platforms.
The approach may also come with a significant carve-out for candidates, officeholders and political parties. Meta has exempted politicians from fact-checking, under a system that company executives have defended by saying that political speech is sacrosanct.
Meta representatives did not respond to questions from The Washington Post this week about whether the fact-checking exemption granted to politicians includes AI-generated images. Meta spokeswoman Dani Lever would only point to the company’s policies on fact-checking, which explain how the company approaches material that has been “debunked as ‘False, Altered, or Partly False’ by nonpartisan, third-party fact-checking organizations.” The guidelines say nothing about AI-generated media and who may post it.
AI-generated images introduce a new dynamic in the fraught debate over political speech that has roiled the technology giants in recent years. There are unsettled questions about who created — and who owns — such content, which has been treated differently on different social networking sites.
Twitter has a rule against “synthetic, manipulated, or out-of-context media,” though it has been enforced unevenly. The company, which is owned by Elon Musk, joins Facebook in explicitly giving special consideration to “elected and government officials” whose posts might otherwise violate the platform’s policies.
TikTok, meanwhile, has a broad ban on images and videos “that mislead users by distorting the truth of events and cause significant harm,” though short-form videos from the popular app bounce quickly onto other sites. And Google prohibits content that “deceives users through manipulated media related to politics, social issues, or matters of public concern.”
The question of politicians posting AI-generated images is no longer entirely theoretical. Last month, Trump posted an AI-generated image of him praying on his social networking site, Truth Social. His account on Facebook was restored earlier this year — following a two-year ban made necessary, in Facebook’s telling, by “his praise for people engaged in violence at the Capitol on January 6, 2021.” And recent posts disseminating campaign statements and promoting rallies to his 34 million followers on Facebook make clear that he intends to use the platform as part of his White House bid, as he did in his previous campaigns.
Rapidly advancing AI technology presents a quandary for Meta and its peer companies, said digital media strategists in both parties. Yet they disagreed about whether carve-outs for politicians should extend to synthetic media.
Julian Mulvey, who produced ads for Sen. Bernie Sanders (I-Vt.) in 2016 and for the Biden campaign and the Democratic National Committee in 2020, said the power of the technology makes AI-generated content a different kind of political speech, one still worth protecting under the First Amendment, but with protections for users and voters, too.
“A warning label would be appropriate as we venture into this new territory,” Mulvey said.
Eric Wilson, the director of the Center for Campaign Innovation, a conservative nonprofit, said the onus is on voters and campaigns, not private companies, to decide what’s appropriate. “We have work to do to make sure voters are sophisticated enough, and I think campaigns ultimately have a moral obligation to be honest and truthful to voters,” he said. “But it’s not on the platforms to enforce that.”
Trump’s move to share a deepfake of himself added to the flood of AI-generated content surrounding his indictment in New York — perhaps the first major political news event in which sophisticated, if not always entirely convincing, synthetic media flowed freely across the internet.
Fake images of Trump being arrested were viewed millions of times on Twitter. One of the former president’s adult sons, Eric Trump, joined in, sharing a fake image of his father marching with followers on the streets of New York. Further afield, doctored videos of Alvin Bragg, the Manhattan district attorney who brought the charges against Trump, spread from TikTok to Twitter, gaining tens of thousands of views in the process.
The newfound prevalence of the technology has also touched off discussion about whether it can or should be used to enhance standard-issue campaign advertising. Wilson pointed to two examples from abroad — a Swedish outreach strategy involving personalized video greetings and an Indian campaign making it appear as though a politician were speaking in different dialects.
In the United States, ad makers are still familiarizing themselves with the technology and weighing what’s possible, both ethically and legally.
Neil Goodman, a Democratic digital strategist, said he used DALL-E, which was developed by OpenAI, the creator of the AI language model ChatGPT, to make an image of a goose typing on a computer for a fundraising appeal last year on behalf of a candidate in California’s San Mateo County. What to do about intrusive gaggles of geese in Foster City, Calif., had become a point of contention on the campaign trail, said Goodman, who noted that the text of the email was written by humans.
“But a goose writing an email is not something we can easily photoshop,” Goodman said. “In this case, the email overperformed expectations.”
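The workflow Goodman describes, turning a short text prompt into a finished image, now amounts to a single API call. What follows is a minimal, hypothetical sketch using OpenAI’s Python SDK; the model name, prompt and parameters are illustrative assumptions, not details from the campaign.

```python
# Hypothetical sketch of generating an illustration from a text prompt,
# along the lines of the fundraising image Goodman describes. Assumes the
# openai Python SDK (v1+) is installed and OPENAI_API_KEY is set in the
# environment; the model, prompt and size are illustrative choices.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.images.generate(
    model="dall-e-3",  # assumption: any available DALL-E model would do
    prompt="A goose typing an email on a laptop, digital illustration",
    n=1,
    size="1024x1024",
)

print(response.data[0].url)  # URL of the generated image
```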
Goodman’s firm also worked in the recent race for the Wisconsin Supreme Court on behalf of the successful candidate backed by Democrats. In that instance, a judicial code of conduct and other rules constrained their use of AI, he said, showing how certain situations require “human oversight at every stage of the process.”
Mulvey, the Democratic media consultant, said he has experimented with Midjourney — one of the main AI image generators, along with DALL-E — and has typed prompts into ChatGPT, including ones asking for a Sanders-style script about billionaires.
“I’m particularly blown away by Midjourney in terms of the images you can create,” Mulvey said. “I typed in, ‘Construction worker wearing hard hat reviewing tablet device next to clean water field.’ There’s a potential here of using that — where you can specify a shot and a look and a style rather than having to draw on stock images.”
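Midjourney is driven through chat-style prompts rather than a public API, but the ChatGPT experiments Mulvey mentions can be reproduced programmatically. Below is a minimal sketch, again assuming OpenAI’s Python SDK and an API key in the environment; the model name and prompt wording are illustrative, not his actual inputs.

```python
# Hypothetical sketch of prompting a chat model for an ad-script draft,
# similar to the ChatGPT experiments Mulvey describes. Assumes the openai
# Python SDK (v1+) and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

completion = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any chat-capable model works here
    messages=[
        {
            "role": "user",
            "content": (
                "Write a 30-second campaign ad script about billionaires, "
                "in the style of Bernie Sanders."
            ),
        }
    ],
)

print(completion.choices[0].message.content)
```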
But campaigns will first have to decide what to disclose about the tools behind their advertising, Mulvey said, and what kind of criticism might ensue from using synthetic media. Already, campaigns hew to drastically different standards for fact-checking claims and other content in their messaging, he added.
Nick Everhart, founder and president of the Republican ad firm Content Creative Media, said the copyright concerns alone present a barrier. A class-action lawsuit filed this year accuses some of the major AI image generators of unlawfully scraping artists’ work from the Web.
“It’s a treacherous road not worth taking at the outset,” Everhart said.