Cybersecurity specialists are concerned that AI-created content could skew our understanding of reality, an especially pressing issue during an election-heavy year.
However, Martin Lee, the technical lead at Cisco's Talos security intelligence and research group, believes that the perceived threat from deepfakes to democracy might be exaggerated.
Lee acknowledged that deepfakes are a formidable technology, but argued they are less influential than conventional fake news. Still, he warned that new AI tools could make fraudulent content far easier to produce.
Often, AI-produced material includes discernible signs indicating it wasn't made by a human. Visual anomalies, such as extra hands or limbs blending into backgrounds, are common in AI-generated images.
Distinguishing synthetic voice audio from genuine human voice clips can be more challenging, yet the effectiveness of AI remains tied to the quality of its data, according to experts.
Lee pointed out that although AI-generated content can often be identified when analyzed objectively, that alone does little to deter perpetrators from distributing it.
Experts have previously expressed that AI-driven disinformation poses a significant threat in upcoming global elections.
Matt Calkins, CEO of Appian, a company whose software simplifies app development, described today's AI technologies as being of "limited usefulness." "Once it understands you, it transitions from astonishing to merely useful; it just can't cross that final threshold yet," he commented.
Calkins anticipates that as people grow more comfortable entrusting AI with personal data, it could become an incredibly powerful, and potentially harmful, tool for spreading disinformation. Frustrated with the slow pace of regulatory efforts in the U.S., he suggested it might take an egregiously offensive AI output to spur legislative action. "Democracies tend to respond reactively," he noted.
Despite advancements in AI, Lee from Cisco emphasized that there are reliable methods to identify misinformation, whether machine-made or human-crafted.
Lee advises the public to stay vigilant and think critically when content provokes a strong emotional reaction, questioning whether it is plausible and whether its sources are credible. "If it isn't backed by other reputable media, it's likely a scam or disinformation campaign that should be disregarded or reported," he concluded.