Manipulated images, edited video, misleading robocalls — none of these things are new to American electoral politics. But with the advent of cheap generative AI, the 2024 presidential election is shaping up to be an unprecedented battleground between voters and their would-be manipulators. The election cycle will test the limits of these new technologies, the resilience of the public’s evolving media literacy, and the capabilities of the regulators who are struggling to keep control of the situation.
You can read all our coverage below.
Perplexity debuts an AI-powered election information hub
AI search company Perplexity is testing whether it’s a good idea to use AI to serve crucial voting information with a new Election Information Hub it announced on Friday. The hub offers AI-generated answers to voting questions and summaries of candidates, and on Election Day, November 5th, the company says it will track vote counts live using data from The Associated Press.
Perplexity says its voter information, which includes polling requirements, locations, and times, is based on data from Democracy Works (the same group that powers similar features from Google), and that its election-related answers come from “a curated set of the most trustworthy and informative sources.”
Russia reportedly paid a former Florida cop to pump out anti-Harris deepfakes and disinformation
Image: Cath Virginia / The Verge; Getty Images
A former Florida sheriff who moved to Russia amid an FBI investigation is a Kremlin-backed propagandist responsible for viral deepfake videos and misinformation targeting Kamala Harris’s campaign, according to European intelligence documents reviewed by The Washington Post.
The GRU, Russia’s military intelligence service, gave funding to John Mark Dougan, the operator of several fake news websites. According to documents reviewed by the Post, Dougan was responsible for several websites that published seemingly local fake news, including DC Weekly, Chicago Chronicle, and Atlanta Observer. The documents, which mostly focus on the period between March 2021 and August of this year, show that Dougan worked with Yury Khoroshevsky, an officer within the GRU’s Unit 29155. Two European security officials told the Post that Khoroshevsky’s unit handles sabotage, political interference operations, and cyberwarfare targeting the West.
- Look how they yassified my boy.
This obviously digitally altered photo of Sen. JD Vance (R-OH), posted the morning after the vice presidential debate, is supposed to signify... masculinity? Strength? I don’t know, honestly.
It’d be a little more convincing if we hadn’t spent two hours looking at the man’s face — and listening to his lies — last night.
- Let it be known.
Former US President Donald Trump, who posted AI-generated images of Taylor Swift implying that she had endorsed him for president, now says in a post on Truth Social that he hates her.
(Swift has endorsed Vice President Kamala Harris for the office.)
Taylor Swift endorses Kamala Harris in response to fake AI Trump endorsement
Photo by Noam Galai / TAS24 / Getty Images for TAS Rights Management
Taylor Swift said on Tuesday that she plans to vote for Vice President Kamala Harris in November’s presidential election, and that circulating AI-generated images of her were part of what pushed her to make her support public.
“Recently I was made aware that AI of ‘me’ falsely endorsing Donald Trump’s presidential run was posted to his site. It really conjured up my fears around AI, and the dangers of spreading misinformation,” Swift wrote in an Instagram post. “It brought me to the conclusion that I need to be very transparent about my actual plans for this election as a voter. The simplest way to combat misinformation is with the truth.”
- Taylor Swift endorses Kamala Harris.
In an Instagram post shared shortly after the first presidential debate between Harris and Donald Trump, the star ended the will-she-won’t-she speculation and threw her support behind the Harris/Walz ticket. Interestingly, she cited recent AI-generated fake images of herself:
Recently I was made aware that AI of ‘me’ falsely endorsing Donald Trump’s presidential run was posted to his site. It really conjured up my fears around AI, and the dangers of spreading misinformation. It brought me to the conclusion that I need to be very transparent about my actual plans for this election as a voter.
- Don’t count on Google’s AI products for election info.
The company announced today it would extend restrictions on election-related queries to more AI services including AI Overviews in Search, YouTube Live Chat summaries, and image generation in Gemini. It’s an expansion of the policy Google announced last December.
Telecom will pay $1 million over deepfake Joe Biden robocall
Photo by Brandon Bell / Getty Images
A telecom company that transmitted the deepfake robocall of President Joe Biden’s voice has agreed to pay $1 million to resolve an enforcement action from the Federal Communications Commission, the agency announced.
Lingo Telecom relayed a fake Biden message to New Hampshire voters in January, urging them not to turn out for the Democratic primary. The FCC identified political consultant Steve Kramer as the person behind the generative AI calls and previously proposed Kramer pay a separate $6 million fine.
X’s new AI image generator will make anything from Taylor Swift in lingerie to Kamala Harris with a gun
The Walt Disney Corporation is probably not a fan. Image: Tom Warren / Grok
xAI’s Grok chatbot now lets you create images from text prompts and publish them to X — and so far, the rollout seems as chaotic as everything else on Elon Musk’s social network.
Subscribers to X Premium, which grants access to Grok, have been posting everything from Barack Obama doing cocaine to Donald Trump with a pregnant woman who (vaguely) resembles Kamala Harris to Trump and Harris pointing guns. With US elections approaching and X already under scrutiny from regulators in Europe, it’s a recipe for a new fight over the risks of generative AI.
Microsoft wants Congress to outlaw AI-generated deepfake fraud
Illustration: Alex Castro / The Verge
Microsoft is calling on members of Congress to regulate the use of AI-generated deepfakes to protect against fraud, abuse, and manipulation. Microsoft vice chair and president Brad Smith says policymakers must act urgently to protect elections, guard seniors from fraud, and shield children from abuse.
“While the tech sector and non-profit groups have taken recent steps to address this problem, it has become apparent that our laws will also need to evolve to combat deepfake fraud,” says Smith in a blog post. “One of the most important things the U.S. can do is pass a comprehensive deepfake fraud statute to prevent cybercriminals from using this technology to steal from everyday Americans.”
The UK politician accused of being AI is actually a real person
Image: The Verge
Mark Matlock, a political candidate for the right-wing Reform UK party, told The Independent that he is a real person, not an AI bot, as some had suspected.
Perhaps it was the glossy, hyper-smooth skin in a campaign image or the fact that Matlock had apparently missed events like the election count — but earlier this week, a thread on X questioned whether Matlock existed at all. “We might be on the verge of a HUGE SCANDAL,” the post read.
Google will now generate disclosures for political ads that use AI
Illustration: The Verge
Google is making it easier for advertisers to disclose whether political ads contain AI-generated content. In an update spotted by Search Engine Land, Google says it will automatically generate disclosures whenever advertisers label election ads as containing “synthetic or digitally altered content.”
Last year, Google started requiring political advertisers to insert their own “clear and conspicuous” disclosures on ads containing AI content. Now Google is simplifying the process: it will automatically include an in-ad disclosure whenever advertisers select an “altered or synthetic content” checkbox in their campaign settings.
Political consultant behind the Joe Biden deepfake robocalls faces $6 million fine
Image: Laura Normand / The Verge
The Federal Communications Commission has proposed imposing multimillion-dollar fines on the political consultant responsible for the robocall campaign that used an AI-generated deepfake of President Joe Biden’s voice — and on the telecom company that facilitated the calls.
Longtime Democratic operator Steve Kramer faces a $6 million fine from the FCC, while Lingo Telecom could be fined $2 million. The FCC announced the proposed penalties on Thursday, calling the Lingo fine a “first-of-its-kind enforcement action.”
Political ads could require AI-generated content disclosures soon
Illustration by Cath Virginia / The Verge | Photos from Getty Images
The chair of the Federal Communications Commission introduced a proposal Wednesday that could require political advertisers to disclose when they use AI-generated content in radio and TV ads.
If the proposal moves forward, the FCC will seek comment on whether to require on-air and written disclosure of AI-generated content in political ads, and on which media those requirements should apply to. In a press release, the FCC notes that the disclosure requirements wouldn’t prohibit such content but would instead require political advertisers to be transparent about their use of AI.
Election officials are role-playing AI threats to protect democracy
Arizona Secretary of State Adrian Fontes led a tabletop exercise for journalists to role-play as election officials to understand the speed and scale of AI threats they face. Photo by Ash Ponders for The Verge
It’s the morning of Election Day in Arizona, and a message has just come in from the secretary of state’s office telling you that a new court order requires polling locations to stay open until 9PM. As a county election official, you find the time extension strange, but the familiar voice on the phone feels reassuring — you’ve talked to this official before.
Just hours later, you receive an email telling you that the message was fake. In fact, polls must now close immediately, even though it’s only the early afternoon. The email tells you to submit your election results as soon as possible — strange, since the law requires you to wait until an hour after polls close, or until all of the day’s results have been tabulated, before submitting.
Senate committee passes three bills to safeguard elections from AI, months before Election Day
Photo by Kevin Dietsch / Getty Images
The Senate Rules Committee passed three bills that aim to safeguard elections from deception by artificial intelligence, with just months to go before Election Day. The bills would still need to advance in the House and pass the full Senate to become law, creating a time crunch for rules around election-related deepfakes to take effect before polls open across the country in November.
The vote happened on the same day that Senate Majority Leader Chuck Schumer (D-NY) and three bipartisan colleagues released a roadmap for how Congress should consider regulating AI. The document lays out priorities and principles for lawmakers to consider but leaves the crafting of specific bills to the committees.
The DNC made a weird AI-generated parody of a Lara Trump song
Cath Virginia / The Verge | Photo by Stephen Morton, Getty Images
After Republican National Committee co-chair Lara Trump’s indie music detour to launch a heavily autotuned track called “Anything is Possible,” the Democrats have responded with “Party’s Fallin’ Down.” Published three days too early to be passed off as an awkward April Fools’ Day joke, it’s described as “a new AI-generated song about Lara Trump’s rocky start as RNC co-chair.”
The track no one asked for was posted to an otherwise anonymous SoundCloud page, promoted on TMZ, and tweeted from X accounts for DNC Chair Jaime Harrison and the Democrats’ “rapid response team.”
State legislator makes deepfake of colleague to prove deepfakes are bad
Cath Virginia / The Verge | Photos from Getty Images
Georgia state Senator Colton Moore, a vocal opponent of legislation that would ban deepfakes of politicians, recently spoke out in support of a bill that would do just that. “The overwhelming number of Georgians believe the use of my personal characteristics against my will is fraud, but our laws don’t currently reflect that,” Moore said in a recent video. Except Moore never said that. The video was a deepfake — the exact kind of thing Moore’s colleagues want to ban. The clip was created by state Representative Brad Thomas, a Republican who co-sponsored the bill, The Guardian reports.
“If AI can be used to make Colton Moore speak in favor of a popular piece of legislation,” the deepfake video continued, “it can be used to make any of you say things you’ve never said.”
How AI companies are reckoning with elections
Cath Virginia / The Verge | Photos from Getty Images
The US is heading into its first presidential election since generative AI tools went mainstream. And the companies offering these tools — like Google, OpenAI, and Microsoft — have each made announcements about how they plan to handle the months leading up to it.
This election season, we’ve already seen AI-generated images in ads and attempts to mislead voters with voice cloning. The potential harms from AI chatbots aren’t as visible in the public eye — yet, anyway. But chatbots are known to confidently provide made-up facts, including in responses to good-faith questions about basic voting information. In a high-stakes election, that could be disastrous.
- Election officials are freaking out about AI.
First there was the Joe Biden robocall, where a deepfake of the president’s voice told New Hampshire voters to stay home during the primary. Now election officials worry they, too, will be impersonated during this election cycle.
“It has the potential to do a lot of damage,” Arizona’s secretary of state, who tested out a deepfake of himself last year, told Politico.
AI deepfakes are cheap, easy, and coming for the 2024 election
Image: The Verge
Our new Thursday episodes of Decoder are all about deep dives into big topics in the news, and this week, we’re continuing our miniseries on one of the biggest topics of all: generative AI.
Last week, we took a look at the wave of copyright lawsuits that might eventually grind this whole industry to a halt. Those are basically a coin flip — and the outcomes are off in the distance, as those cases wind their way through the legal system. A bigger problem right now is that AI systems are really good at making just-believable-enough fake images and audio — and with tools like OpenAI’s new Sora, maybe video soon, too.
The FCC bans robocalls with AI-generated voices
Illustration by Amelia Holowaty Krales / The Verge
The Federal Communications Commission is making it illegal for robocalls to use AI-generated voices. The ruling, issued on Thursday, gives state attorneys general the ability to take action against callers using AI voice cloning tech.
As outlined in the ruling, AI-generated voices are now considered “an artificial or prerecorded voice” under the Telephone Consumer Protection Act (TCPA). This restricts callers from using AI-generated voices for non-emergency purposes or without prior consent. The TCPA includes bans on a variety of automated call practices, including using an “artificial or prerecorded voice” to deliver messages, but it wasn’t explicitly stated whether this included AI-powered voice cloning. The new ruling clarifies that these recordings should indeed fall under the law’s scope.
Two Texas companies were behind the AI Joe Biden robocalls
Image: Laura Normand / The Verge
Lingo Telecom and Life Corporation are linked to a robocall campaign that used an AI voice clone of President Joe Biden to persuade New Hampshire voters not to vote, New Hampshire Attorney General John Formella said during a press conference on Tuesday. Authorities have issued cease-and-desist orders as well as subpoenas to both companies. The two companies, both based in Texas, have been the subjects of illegal robocall investigations in the past, the FCC noted in the document.
In its cease-and-desist order to Lingo Telecom, the FCC accused the company of “originating illegal robocall traffic.” According to the document, the robocalls began on January 21st of this year, two days before the New Hampshire presidential primary. Voters in the state received calls that played an “apparently deepfake prerecorded message” of the president advising potential Democratic voters not to vote in the upcoming primary election. The calls were spoofed to appear to come from the spouse of former New Hampshire Democratic Party official Kathy Sullivan.
- “This clear bid to interfere in the New Hampshire primary demands a thorough investigation and a forceful response.”
Congressman Joseph Morelle (D-NY) wants the Department of Justice to investigate an allegedly AI-generated Joe Biden robocall that provided false information to voters — the latest sign that AI-generated disinformation will be an ongoing election concern. New Hampshire’s own Department of Justice is already investigating.
Microsoft offers politicians protection against deepfakes
Julia Nikhinson / For The Washington Post via Getty Images
Amid growing concern that AI can make it easier to spread misinformation, Microsoft is offering its services, including a digital watermark identifying AI content, to help crack down on deepfakes and enhance cybersecurity ahead of several worldwide elections.
In a blog post co-authored by Microsoft president Brad Smith and Teresa Hutson, Microsoft’s corporate vice president for Technology for Fundamental Rights, the company said it will offer several services to protect election integrity, including the launch of a new tool that harnesses the Content Credentials watermarking system developed by the Coalition for Content Provenance and Authenticity (C2PA). The goal of the service is to help candidates maintain control over the use of their content and likeness, and to prevent deceptive information from being shared.