Strategists Convene to Tackle Anticipated Surge of AI-Driven Fake Content in 2024 Elections

On Wednesday, a unique meeting took place over Zoom: a group of Democratic strategists gathered to plan for the anticipated surge of AI-driven fake content predicted to swamp TV channels and mailboxes in 2024. The meeting, organized by the progressive group Arena, drew more than 70 officials to discuss how generative AI could spread misinformation and disinformation at a scale never before seen in election campaigns.


Pat Dennis, president of the Democratic-led opposition research group American Bridge 21st Century, voiced his concerns about what lies ahead, emphasizing how easily unscrupulous players could use AI to spread disinformation at an unprecedented rate. The attendees, who were no technophobes, stressed the importance of equipping campaign staff to handle the technology and of educating voters on how to recognize AI-powered misinformation and disinformation.


Dennis further stated, "Sources that provide accurate information about the world will shift from being a luxury to a dire necessity. I hope we are preparing for this on multiple fronts, and I hope voters are too."


The board of directors of the American Association of Political Consultants unanimously condemned the use of deepfakes in political advertising earlier this year. Becki Donatelli, a Republican digital consultant and AAPC president, called deepfake generative AI content a severe threat to democracy.


Detecting AI-generated images or videos can be difficult for the untrained observer. The Ron DeSantis campaign, for example, recently posted a Twitter video featuring what appeared to be AI-generated images of former President Donald Trump embracing Anthony Fauci. No federal mandate currently requires a disclaimer in campaign ads when AI is used to create images, although Congress is considering such proposals. Washington state recently passed legislation requiring disclosure when AI is used in campaign ads.


During last month's CampaignTech East, a gathering of digital campaign specialists, Federal Election Commission representatives shared their views on AI regulation in campaigns. Democratic Commissioner Shana Broussard proposed adapting existing FEC political ad disclosure rules to cover AI, while Republican Commissioner Trey Trainor advocated for minimal regulation.


Despite this, the Democratic strategists at the Arena meeting remained doubtful that industry-wide regulation would be in place before the 2024 elections. Betsy Hoover, co-founder of the progressive political tech incubator Higher Ground Labs, called it a "strategic move" for Democratic campaigns to take the lead in establishing guidelines for AI use in voter outreach.


Hoover acknowledged that AI has practical applications, such as automating tedious campaign tasks and helping smaller campaigns with graphic design and data analysis. Dennis, however, warned that creating fabricated footage with AI could invite lawsuits. Even with these advances, the potential for misinformation and disinformation only underscores the importance of human staff members, Hoover said, adding that campaigns should be ready to operate in a more volatile media environment.


"Increasingly, people are going to seek out reliable sources," Hoover said. "Just like we realized the importance of influencer messaging and relational organizing in 2016 and 2020, these aspects will become even more crucial in this election cycle."
