The abrupt ouster of the prominent chief executive from the San Francisco company has highlighted an intellectual divide among those building cutting-edge artificial intelligence systems.
Over the past year, Sam Altman steered OpenAI into the limelight of the tech industry. The San Francisco startup was pivotal in the surge of interest in artificial intelligence, thanks to its widely acclaimed ChatGPT chatbot, and Mr. Altman, OpenAI's chief executive, had become one of the most prominent figures in technology.
But amid its success, the company faced internal discord. Ilya Sutskever, a renowned AI researcher who founded OpenAI with Mr. Altman and nine others, grew increasingly worried about the potential dangers of the technology. According to people familiar with his thinking, Mr. Sutskever, who also sat on the board of directors, was troubled by what he saw as his diminished role within the company.
The unexpected dismissal of Mr. Altman by four of OpenAI's six board members, led by Mr. Sutskever, laid bare the tension between rapid AI development and AI safety. The decision stunned Microsoft, which has invested $13 billion in the company, as well as OpenAI's employees and the wider tech community. Some industry insiders likened the rupture to Steve Jobs's ouster from Apple in 1985.
Then, on Saturday, reports emerged that Mr. Altman was in discussions with the OpenAI board about a possible return to the company, a sign of how quickly the situation was shifting.
The firing of Mr. Altman, 38, on Friday drew attention to an enduring rift in the AI community: the divide between those who see artificial intelligence as the greatest business opportunity in history and those who believe it poses grave risks if developed too hastily. His removal also showed how deeply concerns about AI's potential dangers have become embedded in the tech industry.
Since the launch of ChatGPT roughly a year ago, artificial intelligence has captured widespread interest, with hopes that it could transform critical fields such as medical research and education. But some AI experts and policymakers worry about its risks, including the wholesale automation of jobs and autonomous warfare that escalates beyond human control.
A culture of caution around building potentially dangerous technologies has long been part of OpenAI. Its founders believed that because they understood those risks intimately, they were uniquely positioned to manage them.
Apart from a brief blog post suggesting that Mr. Altman had not been candid with it, OpenAI's board has not given detailed reasons for his dismissal. An internal memo seen by The New York Times assured OpenAI employees that his firing was unrelated to any misconduct or to issues involving the company's finances, business, safety, or security and privacy practices.
After Mr. Altman's removal, Greg Brockman, OpenAI's president and another co-founder, resigned in protest on Friday evening. The company's director of research followed suit. By Saturday morning, the company, which has nearly 700 employees, was in disarray, with staff struggling to understand the board's decision.
In a letter to OpenAI employees, Brad Lightcap, the company's chief operating officer, acknowledged the confusion and sadness among the staff and pledged to provide clarity and keep the company's operations on track.
Mr. Altman's removal unfolded at a board meeting in San Francisco on Friday, where Mr. Sutskever reportedly read a statement resembling the blog post the company published soon afterward. The post said that Mr. Altman's lack of candor with the board had hindered its ability to exercise its responsibilities.
In the aftermath, attention turned not only to what Mr. Altman may have done but also to the unusual structure of the San Francisco startup and the views about AI that have been baked into it since its founding in 2015.
Mr. Sutskever and Mr. Altman did not respond to requests for comment on Saturday.
Jakub Pachocki, who helped oversee GPT-4, the core technology behind ChatGPT, had been promoted to director of research. According to people familiar with the matter, the promotion elevated him to a position alongside Mr. Sutskever, who had previously overseen his work.
Following Mr. Brockman's departure late on Friday, Mira Murati, OpenAI's chief technology officer and the newly appointed interim chief executive, took charge of the company. Szymon Sidor and Aleksander Madry, two researchers and prominent allies of Mr. Altman, also left the organization.
Though he was the board's chairman, Mr. Brockman said in a post on X, formerly Twitter, that he was not part of the meeting at which Mr. Altman was ousted. That left Mr. Sutskever and three other board members, Helen Toner, Tasha McCauley and Adam D'Angelo, to make the pivotal decision.
The board members did not respond to requests for comment on Saturday.
Ms. McCauley and Ms. Toner have ties to the Rationalist and Effective Altruist movements, communities deeply concerned that AI could one day pose an existential threat. These groups hold that while today's AI systems do not pose an immediate danger to society, the risks will grow as the technology becomes more powerful.
In 2021, roughly 15 OpenAI employees left to found the AI company Anthropic, among them the researcher Dario Amodei, who also has ties to these movements.
Mr. Sutskever increasingly aligned himself with those cautious views. Born in the Soviet Union and raised in Israel, he immigrated to Canada for his studies and made significant contributions to neural network research at the University of Toronto.
In 2015, Mr. Sutskever left Google to join Elon Musk, Sam Altman and Greg Brockman in founding OpenAI. They set up the lab as a nonprofit, setting it apart from commercial ventures, with a mission to develop artificial general intelligence, or AGI, a system capable of doing anything the human mind can do.
In 2018, Mr. Altman transitioned OpenAI into a for-profit venture and secured a $1 billion investment from Microsoft. Building technologies like GPT-4, which was released earlier this year, required enormous amounts of money, and Microsoft subsequently invested an additional $12 billion in the company.
Despite these financial arrangements, the nonprofit board retained control over the company. While OpenAI generates returns for investors like Microsoft, these profits are capped, with surplus funds reinvested into the organization.
In response to the power GPT-4 displayed, Mr. Sutskever spearheaded the creation of a Superalignment team within OpenAI to explore ways of ensuring that future versions of the technology would not cause harm.
While Mr. Altman acknowledged these concerns, he stayed focused on keeping OpenAI ahead of its rivals. According to people with knowledge of the trip, Mr. Altman traveled to the Middle East in late September to meet with investors, seeking as much as $1 billion from SoftBank's chief executive, Masayoshi Son, for a planned OpenAI venture focused on AI hardware.
OpenAI was also discussing a tender offer that would allow employees to sell their stock in the company, at a valuation that could exceed $80 billion, roughly three times its value six months earlier.
But the company's success has also heightened worries that something could go wrong with its AI.
During a podcast on November 2, Mr. Sutskever speculated on the possibility of future data centers surpassing human intelligence, raising profound questions about the capabilities and implications of advanced AI systems.