Elon Musk and AI Scientists Seek Freeze on Training Systems Stronger than GPT-4
In an open letter, Elon Musk and a group of AI experts and industry executives are calling for a six-month pause on training systems more powerful than GPT-4, arguing that such systems could pose serious risks to society and humanity.
The letter, issued by the non-profit Future of Life Institute and signed by more than 1,000 people, including Elon Musk, Apple co-founder Steve Wozniak, and Stability AI CEO Emad Mostaque, called for a halt to advanced AI development until shared safety protocols for such systems are developed, implemented, and audited by independent experts.
"Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable," the letter said.
The letter also detailed the risks that human-competitive AI systems could pose to society and civilization, including economic and political disruption, and urged developers to work with policymakers to establish governance frameworks and regulatory authorities.
The letter came as the European Union's law enforcement agency, Europol, joined a chorus of ethical and legal concerns over advanced AI systems like ChatGPT on Monday, warning of the technology's potential misuse in phishing attempts, disinformation campaigns, and cybercrime.
Since its debut last year, OpenAI's Microsoft-backed ChatGPT has prompted rivals to launch similar products and pushed businesses to integrate it, or comparable technologies, into their software and hardware.