OpenAI Researcher Raises Alarm Over AGI Race Amid Internal Departures
Steven Adler, a former AI safety researcher at OpenAI, has announced his departure from the company, citing concerns over the global race toward Artificial General Intelligence (AGI). In a post on the social media platform X, Adler warned that the race is a "very risky gamble" with significant potential downsides. He noted that no lab currently has a viable solution to AI alignment (ensuring that AI systems act in accordance with human goals) and that the rush to build AGI makes it less likely that adequate safety measures will be in place in time.
Adler's exit follows a series of controversies at OpenAI, including the brief removal of CEO Sam Altman in late 2023, which raised questions about the company's approach to AI safety. Internal disagreements over prioritizing safety work versus rapid product development have led to the departure of several key personnel, including the co-leads of the Superalignment team.
Stuart Russell, a professor of computer science at the University of California, Berkeley, echoed Adler's sentiments, calling the AGI race a perilous endeavor that could lead to catastrophic outcomes if not properly managed. The recent emergence of a competing AI model from the Chinese company DeepSeek has intensified the competitive landscape, heightening concerns that the rapid pace of development may come at the expense of essential safeguards.
As competition over AGI intensifies among leading tech powers, the question of how to advance AI safely is becoming increasingly urgent, reflecting a broader challenge facing the industry.
News summary by melangenews