The Indian government has released an advisory stating that tech companies developing new artificial intelligence (AI) tools must obtain approval from the government before releasing them to the public.
According to the advisory issued by the Indian IT ministry on March 1, approval must be granted before the release of AI tools that are considered “unreliable” or still in the trial phase. These tools should also be labeled to indicate that they may provide inaccurate answers to queries. The ministry further emphasized the need for platforms to ensure that their tools do not pose a threat to the integrity of the electoral process, as general elections are expected to take place this summer.
This advisory comes shortly after one of India’s top ministers criticized Google over “inaccurate” or biased responses from its AI tool, Gemini. Gemini had reportedly characterized Indian Prime Minister Narendra Modi as a “fascist.” Google apologized for the tool’s shortcomings and acknowledged that it may not always be reliable, especially on current social topics.
Rajeev Chandrasekhar, India’s deputy IT minister, highlighted the legal obligations of platforms in ensuring safety and trust. He stated that being “sorry” and acknowledging unreliability does not exempt platforms from the law.
In November, the Indian government announced its plans to introduce new regulations to combat the spread of AI-generated deepfakes ahead of the upcoming elections. This approach mirrors the actions taken by regulators in the United States.
However, India’s tech community has pushed back on the latest AI advisory, warning that overly strict regulation could cost the country its leadership position in the tech space.
In response to this criticism, Chandrasekhar addressed the “noise and confusion” in a follow-up post on X, stating that platforms enabling or directly producing unlawful content should face legal consequences. He clarified that the advisory was meant to inform those deploying lab-level or under-tested AI platforms on the public internet of their obligations, the potential consequences under Indian law, and how to protect both themselves and their users.
On Feb. 8, Microsoft partnered with Indian AI startup Sarvam to bring an Indic-voice large language model (LLM) to its Azure AI infrastructure, aiming to reach more users in the Indian subcontinent.