OpenAI has made significant changes to its safety operations by disbanding its AGI Readiness team, which was focused on preparing for artificial general intelligence. Miles Brundage, a senior advisor from the team, announced this development in a recent Substack post, where he also revealed his departure from the organization.
In his announcement, Brundage said he wanted greater independence and that he is leaving to influence AI development from outside the corporate environment. “I realized I want to make an impact on AI’s evolution beyond the confines of industry,” he stated, adding that he had accomplished much of what he set out to do during his tenure at OpenAI.
Brundage also voiced broader concerns about readiness at OpenAI and other leading labs, emphasizing that neither they nor society at large is adequately prepared for AGI. That sentiment appears to be shared by many senior figures within OpenAI. With the AGI Readiness team dissolved, its former members will be reassigned to other departments within the company.
A spokesperson for OpenAI told CNBC that the company supports Brundage’s decision to move on. Even so, the transition comes at a difficult time, as OpenAI faces an exodus of senior leadership at a moment when stability is crucial. Although the company recently recruited a prominent AI researcher from Microsoft, that hire does not fill the several remaining vacancies in its upper management.
The ongoing leadership changes and team dissolutions are deepening concerns about OpenAI’s trajectory toward AGI, especially following its controversial shift toward a fully for-profit structure after starting out as a nonprofit.
In May of this year, the company also disbanded its Superalignment team, a group dedicated to achieving “scientific and technical breakthroughs necessary for guiding and controlling AI systems far more intelligent than humans.” Around the same time, reports surfaced that OpenAI had reassigned its chief AI safety leader, a move that raised concerns among experts focused on the ethical implications of artificial intelligence.