The upcoming 2024 United States election is drawing attention to the rise of political deepfakes, as advances in artificial intelligence (AI) tools make it increasingly difficult for voters to distinguish real information from fake. Mark Warner, chair of the Senate Intelligence Committee, has warned that the US is less prepared for election fraud in 2024 than it was in 2020, largely because of the surge in AI-generated deepfakes. According to SumSub, an identity verification service, deepfakes increased by 1,740% in North America and tenfold globally in 2023.
In January, citizens in New Hampshire reported receiving robocalls with a voice that sounded like US President Joe Biden, urging them not to vote in the primary. The incident prompted a ban on AI-generated voices in automated phone calls, which are now illegal under US telemarketing laws. However, scammers routinely find ways to circumvent regulations. As the US prepares for Super Tuesday on March 5, when the largest number of US states hold primary elections and caucuses, concerns about false, AI-generated information are on the rise.
To shed light on how voters can protect themselves against deepfakes and deepfake identity fraud, Cointelegraph interviewed Pavel Goldman-Kalaydin, head of AI and machine learning at SumSub. Kalaydin emphasized the need for vigilance when encountering video or audio content: deepfakes can be generated both by tech-savvy teams with advanced technology and hardware and by lower-level fraudsters using tools that run on ordinary consumer computers. While there are still telltale signs that help detect deepfakes, Kalaydin warned that the technology is advancing so rapidly that it may soon be impossible for the human eye to spot them without specialized detection technologies.
The real problem lies in the generation and distribution of deepfakes. The accessibility of AI technology has led to a flood of fake content, and the absence of clear regulations and policies has made it easier to spread misinformation online. Kalaydin cautioned that this leaves voters misinformed and at risk of making poorly informed decisions. He proposed mandatory checks for AI-generated or deepfake content on social media platforms, along with user verification systems in which verified users would take responsibility for the authenticity of the visual content they share, while content from non-verified users would be flagged so viewers treat it with caution.
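Kalaydin's verification proposal can be illustrated with a minimal sketch. The names below (`ContentItem`, `label_content`, and the label strings) are hypothetical, invented for illustration only; no platform API or SumSub product is implied. It simply shows the rule he describes: content from a verified user carries the author's attestation, while content from a non-verified user is marked to urge caution.

```python
from dataclasses import dataclass

# Hypothetical data model for a piece of user-posted visual content.
# Field names are assumptions made for this sketch, not a real platform schema.
@dataclass
class ContentItem:
    author: str
    author_verified: bool
    media_url: str

def label_content(item: ContentItem) -> str:
    """Return the trust label a platform might attach under the proposed scheme."""
    if item.author_verified:
        # Verified users vouch for authenticity and bear responsibility for it.
        return "verified-author"
    # Content from non-verified sources is flagged so viewers treat it cautiously.
    return "unverified-caution"

# Example: a clip posted by an anonymous, non-verified account gets the caution label.
post = ContentItem(author="anon123", author_verified=False,
                   media_url="https://example.com/clip.mp4")
print(label_content(post))  # unverified-caution
```

The point of the sketch is the asymmetry of responsibility: verification shifts accountability onto the poster, while the caution label shifts skepticism onto the viewer.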
Governments worldwide are starting to take action in response to this challenging climate. India has issued an advisory to local tech companies, requiring approval before releasing new AI tools that could be deemed “unreliable” ahead of the 2024 elections. In Europe, the European Commission has established AI misinformation guidelines for platforms operating in the region, and Meta (the parent company of Facebook and Instagram) has introduced its own strategy to combat the misuse of generative AI in content on its platforms.
Overall, the proliferation of deepfakes poses a significant threat to the integrity of elections, requiring both technological advancements and regulatory measures to address this issue and protect voters from misinformation.