Sam Altman, the man at the centre of the AI boom, wants us to know he is a little scared. The CEO of OpenAI, the company behind ChatGPT, gave a lengthy one-on-one interview to ABC News that aired last week on its evening news show.
In it, he talked about the future of his company’s AI technology in grand terms, saying things like “this will be the best technology humanity has ever made” and stressing that society will need to adapt, or be prepared for bad things to happen.
He defended his company’s approach of releasing its chatbot platforms incrementally, saying, “People should be happy that we’re a little bit scared of this.”
“Are you a little bit afraid?” Rebecca Jarvis of ABC News pressed. “You yourself?”
“A little bit, sure,” Altman replied, striking a note central to his apparent strategy for winning the American public’s trust. “If I said I wasn’t, you should either not trust me or be very upset that I have this job,” he said.
Altman has been walking a careful line since November, when ChatGPT went viral and kicked off the AI boom now sweeping the tech industry. As his company releases ever more capable models, he has been trying to get the word out about the fast-moving technology, becoming the industry’s leading spokesman in the process.
In the interview, Altman said he sees artificial intelligence as an “amplifier of human will.” He offered examples such as fast medical advice, tools for creativity, and a “co-pilot” that can help every professional do their job more efficiently.
Altman says AI will change how people work, but now that he is in the spotlight he no longer discusses his 2021 proposal for a universal basic income. He needs feedback on his chatbots to improve them, yet OpenAI won’t share its models or code, citing safety concerns and competition from other companies. He asks the government to get involved and even floats the idea of an international coalition for AI governance. However, when Jarvis asked what regulators could do right now, he said the most important thing is simply to “get up to speed.”
He says the technology has enormous educational potential but also points out that chatbots often state things that aren’t true with complete confidence. And even though he wants us to stop thinking of AI in terms of sci-fi apocalypses, he says he worries “a lot” about authoritarian governments developing the powerful technology.
He is also worried about large-scale disinformation campaigns and malicious cyberattacks. He is clear and honest about the limits of AI and of his company’s chatbot technology, but he rarely takes responsibility for the problems the technology might cause, instead reaching for variations of the phrase “society needs time to adapt.”
“We will need to find ways to slow down this technology over time,” said the CEO of the world’s leading AI research and engineering firm.