
The Indian government has announced a new requirement that technology companies obtain government approval before publicly releasing artificial intelligence (AI) tools that are still in development or deemed ‘unreliable.’

The move is part of India's efforts to manage the deployment of AI technologies, with the goal of improving the accuracy and reliability of tools available to citizens as the country prepares for elections.

According to a directive issued by the Ministry of Information Technology, any AI-based applications, particularly those involving generative AI, must obtain explicit government approval before being introduced into the Indian market.

Furthermore, these AI tools must carry labels warning that they may generate incorrect answers to user queries, reinforcing the government's position on the importance of transparency about AI capabilities.

The regulation follows a global trend of nations establishing guidelines for the responsible use of AI. India's increased oversight of AI and digital platforms fits its broader regulatory strategy of protecting user interests in a rapidly evolving digital age.

Government oversight can also help instill confidence among citizens and stakeholders that AI technologies are being developed and deployed responsibly, with due consideration given to fairness, transparency, accountability, and data protection.

The government's advisory also raises concerns about the impact of AI tools on the integrity of the electoral process. With the upcoming general elections, where the ruling party is expected to retain its majority, there is a greater emphasis on ensuring that AI technologies do not jeopardize electoral fairness.

The move follows recent criticism of Google's Gemini AI tool, whose responses drew objections concerning Indian Prime Minister Narendra Modi.