Top AI companies including Google, Microsoft, OpenAI and Anthropic have jointly established a $10 million fund to support research into safely developing AI systems.
But critics question if the industry group will advocate true oversight.
Big Tech Giants Join Forces to Form AI Safety Forum
The Frontier AI ethics consortium launched in July to supposedly ensure responsible AI progress. Founding members Google, Microsoft, OpenAI and Anthropic now plan to fund research on AI safety.
Each company has invested billions in expanding AI capabilities. Pooling a comparatively small $10 million aims to promote external trust in their commitments to caution.
Appoints Leader to Coordinate AI Scrutiny
The Frontier alliance appointed Chris Meserole as executive director. Meserole formerly led an AI research group focused on minimizing risks.
He will coordinate the fund’s efforts to audit AI systems. But his close industry ties raise doubts about impartiality.
Stated Aim is Red Teaming Latest AI Models
The companies claim the fund will primarily finance techniques to rigorously red team powerful new AI models before deployment.
Red teaming involves uncovering flaws, biases and vulnerabilities through adversarial testing. But details remain vague.
Tiny Compared to Each Member’s AI Spending
While not insignificant, $10 million pales compared to the billions member companies pour into developing AI annually.
This highlights the fund is more about lip service and PR rather than meaningfully guiding AI priorities.
Critics Question True Independence
Critics point out the fund seems geared to protect the industry versus the public interest. Truly independent voices appear excluded.
There are concerns it will largely serve as a promotional vehicle to boost the reputation of members’ AI work.
Dark Web Already Misusing AI for Harm
The need for AI oversight is underscored by criminals misusing it for abuse. A UK watchdog revealed over 20,000 illegal AI-generated images shared on the dark web.
Governments are scrambling to curb harmful AI uses. But voluntary consortiums have limited authority over member activities.
Big Tech Wants “Friendly” Regulation
The giants funding Frontier all publicly endorse AI regulation in principle. But they oppose restrictions that might hamper their AI aspirations.
The alliance allows targeting specific oversight that accommodates unfettered AI progress. But avoids binding external rules.
In summary, while the Frontier AI safety fund seems a positive step, deeper scrutiny suggests it primarily serves its Big Tech backers versus the greater good. True reform requires regulatory action, not voluntary consortiums.