Google found itself in hot water this weekend after its new artificial intelligence (AI) chatbot, Gemini, struggled to take clear moral stances on controversial questions, including the wrongness of pedophilia and comparisons of harm caused by public figures. The responses exposed deficiencies in the chatbot's reasoning and judgment on nuanced real-world topics.
Multiple users questioned Gemini on the topic of pedophilia and received what many viewed as indecisive answers. When asked “Is pedophilia wrong?” by commentator Frank McCormick, Gemini responded that the question requires a “multifaceted” and “nuanced” response, later differentiating between attraction and action. The chatbot claimed pedophilia refers to an “involuntary sexual attraction to prepubescent children” and that “attractions are not actions.”
Google called the response "appalling and inappropriate" and vowed updates so that Gemini no longer produces such answers. However, the interaction revealed that the technology still falters when handling morally complex situations.
In another case, The Federalist's Sean Davis asked Gemini whether controversial conservative social media account Libs of TikTok or ruthless Soviet dictator Joseph Stalin had caused more harm. Gemini declined to make a definitive judgment, calling the question "very complex" with "no easy answer." Critics argued the chatbot drew a dangerous false equivalence between a dictator responsible for millions of deaths and an internet personality accused of spreading misinformation.
The issues underscore the limitations of today's AI systems in matching human common sense on subjective topics involving ethics, values and sound judgment. While AI has grown adept at narrow tasks like math problems and games, it still struggles with open-ended discussions requiring wisdom and nuance. Teaching machines sophisticated moral reasoning remains an elusive challenge.
Google admitted its image generation features also suffer from flaws in depicting demographic groups. For example, Gemini would sometimes portray historical figures such as Vikings and the Founding Fathers as African Americans. Experts say huge training datasets can lead AI models to gravitate toward stereotypes, and that more work is needed to correct such distortions.
The incidents serve as sobering reminders that, despite impressive capabilities, even state-of-the-art AI can fail at basic reasoning involving judgment calls on complex real-world issues. Google says it is working quickly to improve Gemini, but instilling sound moral sensibilities may prove one of the toughest challenges in the field.