Twenty technology companies involved in the development of artificial intelligence (AI) announced on Friday, February 16th, a joint commitment to prevent their AI tools from being used to deceptively influence elections, including those in the United States.
The agreement acknowledges the significant risk posed by AI products, especially in a year when approximately four billion people worldwide are expected to participate in elections. The document raises concerns about the potential for deceptive AI in election-related content, which could mislead the public and endanger the integrity of electoral processes.
Furthermore, the agreement recognizes that global lawmakers have been slow to respond to the rapid advancements in generative AI, prompting the tech industry to explore self-regulation. Brad Smith, the vice chair and president of Microsoft, expressed his support for this initiative in a statement.
The full list of signatories comprises Microsoft, Google, Adobe, Amazon, Anthropic, Arm, ElevenLabs, IBM, Inflection AI, LinkedIn, McAfee, Meta, Nota, OpenAI, Snap, Stability AI, TikTok, Trend Micro, Truepic, and X.
It is important to note, however, that the agreement is voluntary and does not ban AI-generated content in elections outright. Instead, the 1,500-word document outlines eight steps that the signatory companies commit to taking in 2024. These include developing tools to distinguish AI-generated images from authentic content and being transparent with the public about significant developments.
Despite this commitment, Free Press, an advocacy group for an open internet, has criticized the agreement as an empty promise. The group argues that tech companies have failed to fulfill previous pledges regarding election integrity after the 2020 election. They advocate for increased oversight by human reviewers.
In response to the announcement, U.S. Representative Yvette Clarke expressed her support for the tech accord and called for Congress to build upon it. Clarke has sponsored legislation aimed at regulating deepfakes and AI-generated content in political advertisements.
On January 31st, the Federal Communications Commission voted to ban robocalls that use AI-generated voices. The decision came after a fake robocall impersonating President Joe Biden caused widespread alarm ahead of the New Hampshire primary in January, an incident that highlighted concerns about the use of counterfeit voices, images, and videos in politics.