Meta, the parent company of Facebook and Instagram, has revealed its strategy for addressing the misuse of generative artificial intelligence (AI) to safeguard the integrity of the electoral process on its platforms ahead of the 2024 European Parliament elections in June.
In a blog post on February 25, Marco Pancini, Meta’s head of EU Affairs, stated that the principles behind the platform’s “Community Standards” and “Ad Standards” will be extended to AI-generated content. Pancini emphasized that AI-generated content will also be subject to review and rating by independent fact-checking partners. One of the ratings will flag content as “altered,” meaning it contains “faked, manipulated, or transformed audio, video, or photos.”
The platform’s existing policies already require photorealistic images created using Meta’s AI tools to be labeled as such. However, the recent announcement reveals that Meta is developing new features to label AI-generated content produced by other companies’ tools, including those from Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock, when users share it on any of its platforms.
Furthermore, Meta plans to introduce a feature that allows users to disclose if they have shared an AI-generated video or audio, ensuring that it is flagged and labeled accordingly. Failure to disclose this information may result in penalties.
In addition, Meta stated that advertisers running political, social, or election-related ads that have been altered or created using AI must disclose that use. The blog post mentioned that between July and December 2023, Meta removed 430,000 ads across the European Union for lacking a disclaimer.
This issue has gained significant relevance as major elections are scheduled around the world in 2024. Both Meta and Google have previously introduced rules governing AI-generated political advertising on their platforms. On December 19, 2023, Google announced that it would restrict responses to election queries on its AI chatbot Gemini, formerly known as Bard, and its generative search feature in the lead-up to the 2024 US presidential election.
OpenAI, the developer of the AI chatbot ChatGPT, has also taken steps to alleviate concerns about AI interference in global elections by establishing internal standards to monitor activity on its platforms.
On February 17, 20 companies, including Microsoft, Google, Anthropic, Meta, OpenAI, Stability AI, and X, signed a pledge to combat AI election interference, recognizing the potential dangers if left uncontrolled.
Governments worldwide have also implemented measures to address AI misuse ahead of local elections. The European Commission launched a public consultation on proposed guidelines for election security to mitigate democratic threats posed by generative AI and deepfakes.
In the United States, AI-generated voices in automated phone calls were made illegal after a deepfake of President Joe Biden’s voice circulated in scam robocalls, misleading the public.