OpenAI Faces Internal Strife as Safety Researcher Exits, Raises Alarm Over AI Race
Steven Adler, a safety researcher at OpenAI, announced his departure from the company, citing serious concerns about the accelerating competition to build Artificial General Intelligence (AGI). In a post on X, Adler described the race as a "very risky gamble," noting that no laboratory currently has an effective solution for AI alignment, the problem of ensuring AI systems act in accordance with human values. Adler's exit is the latest in a series of departures from OpenAI tied to internal disagreements over AI safety.
His comments echo warnings from experts such as Stuart Russell of UC Berkeley, who has characterized the AGI race as perilous, warning of a "significant probability of causing human extinction" if advanced AI systems cannot be properly controlled. According to former employees, OpenAI has lost nearly half of its safety-focused staff in recent years. As concerns about responsible AI development intensify, the company faces growing scrutiny from both researchers and investors over how it navigates the rapidly evolving landscape of artificial intelligence.
Weekly Newsletter
News summary by melangenews