Over the past few months, several employees have left OpenAI, citing concerns about the company’s commitment to safety. These concerns remained largely unspoken until Leopold Aschenbrenner, a researcher fired from OpenAI in April, published an extensive 165-page essay detailing his thoughts on the future of AI.
Aschenbrenner had worked on OpenAI’s Superalignment team, which was tasked with mitigating AI risks. He claims he was fired for leaking information about the company’s readiness for artificial general intelligence (AGI).
Virtually nobody is pricing in what’s coming in AI.
I wrote an essay series on the AGI strategic picture: from the trendlines in deep learning and counting the OOMs, to the international situation and The Project.
SITUATIONAL AWARENESS: The Decade Ahead
— Leopold Aschenbrenner (@leopoldasch) June 4, 2024
Aschenbrenner’s Concerns
Aschenbrenner asserts that the information he shared was “totally normal” and suggests that OpenAI might have been looking for a reason to fire him.
He was one of several employees who refused to sign a letter calling for CEO Sam Altman’s return after the board briefly ousted Altman last year.
Aschenbrenner’s essay does not include sensitive details about OpenAI; he says it is based on “publicly available information, my own ideas, general field knowledge, or SF gossip.”
Summary by GPT-4
Business Insider uploaded Aschenbrenner’s essay to OpenAI’s GPT-4 model and asked it to summarize the work. The following is a detailed summary of Aschenbrenner’s key points and predictions about the future of AI.
Rapid Progress in AI
Aschenbrenner argues that AI development is accelerating at an unprecedented rate. He predicts that by 2027, AI models could reach the capabilities of human AI researchers and engineers. This could potentially lead to an intelligence explosion where AI surpasses human intelligence.
Economic and Security Implications
The essay highlights the immense economic and security implications of AI advancements. Aschenbrenner points out that trillions of dollars are being invested in developing the infrastructure needed to support these AI systems, such as GPUs, data centers, and power generation.
He emphasizes the critical need to secure these technologies to prevent misuse, particularly by state actors like the Chinese Communist Party (CCP).
Technical and Ethical Challenges
The essay discusses the significant challenges in controlling AI systems that are smarter than humans, referring to this as the “superalignment” problem. Managing this will be crucial to preventing catastrophic outcomes.
Predictions and Societal Impact
Aschenbrenner suggests that few people truly understand the scale of change that AI is about to bring. He discusses the potential for AI to reshape industries, enhance national security, and pose new ethical and governance challenges.
Detailed Predictions
AGI by 2027
Aschenbrenner predicts that artificial general intelligence (AGI) is plausible by 2027, pointing to the rapid progress from GPT-2 to GPT-4, which took AI models from preschool-level to smart high-schooler abilities in just four years.
He expects a similar leap in the next few years, based on consistent improvements in computing power and algorithmic efficiency.
Superintelligence Following AGI
Post-AGI, Aschenbrenner anticipates an “intelligence explosion” where AI rapidly advances from human-level to superhuman capabilities. This transition is expected to be fueled by AI’s ability to automate and accelerate its own research and development.
Trillion-Dollar AI Clusters
Economically, Aschenbrenner suggests that investment in AI will scale up to trillion-dollar compute clusters as corporations and governments prepare for the implications of AGI and superintelligence.
National and Global Security Dynamics
Aschenbrenner predicts that intense national security measures will be enacted to manage and control AI development. Competition, particularly with the Chinese government, could intensify, possibly leading to an “all-out war” if not managed properly.
Superalignment Challenges
One of the most critical predictions is the struggle with “superalignment,” the challenge of keeping superintelligent AI aligned with human values and interests. This problem is anticipated to be one of the central hurdles as AI reaches and surpasses human intelligence levels.
Societal and Economic Transformations
Aschenbrenner expects AI to have a profound impact on society and the economy, potentially restructuring industries and the job market as AI takes on tasks currently performed by humans.
US Government Involvement
He predicts that the US government will become significantly involved in AI development by around 2027–2028 through a dedicated AGI project, likely due to the strategic importance of AI technology.
Technological Mobilization
Aschenbrenner anticipates a mobilization of technological and industrial resources similar to historical wartime efforts, focusing on AI and its supporting infrastructure as a priority for national policy.
Aschenbrenner’s essay provides a comprehensive look at the potential future of AI, highlighting both the rapid advancements and the significant challenges that lie ahead. His predictions underscore the need for vigilance, ethical considerations, and robust security measures as we navigate the AI revolution.