At the trendsetting South by Southwest festival, the recent scandal involving Google’s Gemini AI chatbot has sent shockwaves through the tech community. The controversy erupted after Gemini generated historically inaccurate images, including depictions of Black and Asian Nazi soldiers as well as a Black female U.S. senator supposedly from the 1800s, despite the first such senator not being elected until 1992.
The blunder has reignited longstanding concerns about bias and a lack of diversity in the artificial intelligence sphere. Experts and attendees at the popular arts and tech gathering in Austin warn that the Gemini gaffe underscores the immense power and influence that a handful of big tech companies now wield over AI platforms, rapidly reshaping nearly every aspect of how we live and work.
“They are moving faster than they know how to move,” said Joshua Weaver, a lawyer and tech entrepreneur, regarding Google’s rushed efforts to match rivals like Microsoft and OpenAI in the intensifying AI race. Google co-founder Sergey Brin admitted at a recent AI “hackathon” that the company “definitely messed up” and should have more thoroughly tested Gemini before release.
While Google attempted to promote inclusion and diversity by tweaking Gemini’s algorithms, the well-intentioned effort backfired disastrously. “It can really be tricky, nuanced and subtle to figure out where bias is and how it’s included,” noted Alex Shahrestani, a managing partner at the tech law firm Promise Legal.
The data used to train cutting-edge AI models like Gemini is drawn from a world rife with cultural biases, disinformation, social inequities, and online content ranging from casual conversations to intentionally provocative posts. As a result, AI outputs can often amplify and perpetuate these same flaws and skewed perspectives.
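The amplification effect can be shown with a toy sketch: a trivial "model" that predicts by majority vote over its training data turns a statistical skew in the corpus into an absolute skew in every output. The corpus, occupations, and counts below are invented for illustration, not drawn from any real dataset:

```python
from collections import Counter

# Invented toy corpus: (occupation, pronoun) pairs with a built-in 3:1 skew.
corpus = [
    ("nurse", "she"), ("nurse", "she"), ("nurse", "she"), ("nurse", "he"),
    ("engineer", "he"), ("engineer", "he"), ("engineer", "he"), ("engineer", "she"),
]

def most_likely_pronoun(occupation: str) -> str:
    """Predict by majority vote over the training corpus. A 75/25 skew
    in the data becomes 100% of the model's outputs -- the bias is not
    just preserved but amplified."""
    counts = Counter(p for occ, p in corpus if occ == occupation)
    return counts.most_common(1)[0][0]

print(most_likely_pronoun("nurse"))     # "she" -- the minority case never appears
print(most_likely_pronoun("engineer"))  # "he"
```

Real generative models are vastly more complex, but the same dynamic applies: whatever imbalances exist in the training distribution tend to dominate the outputs.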
“Essentially, it was too ‘woke,’” quipped Weaver, suggesting Google overcompensated in pushing for inclusive representation in Gemini’s outputs. Yet the deeper issue, according to Charlie Burgoyne of the Valkyrie applied science lab, is that big tech treats AI bias as “a bullet wound” needing more than just a “band-aid” fix.
As artificial intelligence capabilities rapidly advance, experts and activists are urgently calling for greater transparency around how these systems work, especially regarding any attempts to rewrite or “improve” user prompts behind the scenes. There are also increasing demands for more diversity among the teams designing and developing AI technologies.
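The kind of behind-the-scenes prompt rewriting that transparency advocates object to can be sketched in a few lines. The function and rewrite rule below are hypothetical illustrations, not Google's actual pipeline:

```python
def rewrite_prompt(user_prompt: str) -> str:
    """Hypothetical middleware: silently append guidance to certain
    image-generation prompts before they reach the model. The user never
    sees the modified prompt, which is the core transparency concern."""
    if "portrait" in user_prompt.lower():
        return user_prompt + ", showing people of diverse backgrounds"
    return user_prompt

print(rewrite_prompt("A portrait of a medieval European king"))
# The appended clause can contradict the historical context the user asked for.
```

When such rewriting is undisclosed, users have no way to tell whether an odd output reflects the model itself or an invisible edit to their request.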
Jason Lewis, co-founder of Indigenous AI, highlighted the stark differences between his ethical, community-driven approach centered on Indigenous perspectives versus the “arrogance” of Silicon Valley’s top-down rhetoric of “benefiting all humanity.” His work with the Indigenous Futures Resource Center aims to create AI algorithms that respectfully incorporate the views of Indigenous communities worldwide.
The capabilities and influence of generative AI are progressing at an exponential rate. In the coming years, the volume of information and media content created by AI is projected to dwarf that produced by humans. This raises the high-stakes question of who will control the governance and safeguards for these systems—and whose perspectives, biases and worldviews will be baked into the AI models shaping our collective knowledge and perceptions.
“The underlying problem remains,” warned Burgoyne regarding Google’s Gemini issues. As AI assistants become ubiquitous and are entrusted with more critical tasks like loan assessments or medical diagnoses, addressing systemic bias has become an ethical and societal imperative. Those who control the levers of AI development may soon wield unprecedented power to influence human society on a global scale.