Google announced on Tuesday that it would restrict its AI chatbot Gemini from providing responses about upcoming global elections, aiming to mitigate the risk of misinformation.
The decision follows growing concerns about the potential misuse of generative AI technology, particularly in the dissemination of fake news and misinformation surrounding elections.
Gemini, an AI chatbot developed by Alphabet's Google, will now refrain from answering queries related to elections, including prominent contests such as the upcoming U.S. presidential race between Joe Biden and Donald Trump. Instead, users will be directed to Google Search for such inquiries.
The restrictions, initially announced for the United States in December, have now been extended globally as countries prepare for a wave of elections in 2024. These include significant votes in countries such as South Africa and India, the latter being the world's largest democracy.
In India, the government has mandated that tech companies seek approval before releasing AI tools, especially those that may provide unreliable or unverified information.
Google’s decision to limit Gemini’s responses comes after recent scrutiny over inaccuracies in the chatbot’s image-generation feature, which led to that feature’s temporary suspension.
Google CEO Sundar Pichai acknowledged the issues, describing some of Gemini’s responses as biased and unacceptable. The company says it is actively working to address these concerns and improve the reliability of its AI products.
The move aligns with broader efforts within the tech industry to combat disinformation and abuse of AI technologies, particularly ahead of significant political events like elections.
Meta Platforms, the parent company of Facebook, announced plans to establish a dedicated team to address disinformation and misuse of generative AI tools in preparation for the upcoming European Parliament elections in June.