Sam Altman, the co-founder and CEO of OpenAI, the company behind the popular chatbot ChatGPT, was fired by the board of directors on Friday, November 17, 2023. The board said Altman had not been "consistently candid" in his communications with it, hindering its ability to exercise its responsibilities. Reporting at the time suggested that some directors also worried Altman was prioritizing commercialization and profit over the safety and ethics of artificial intelligence. Greg Brockman, OpenAI's co-founder and president, was removed as chair of the board and resigned from the company in protest. Mira Murati, the chief technology officer, was appointed as the interim CEO.
However, the decision was reversed within days, after a massive backlash from employees, investors, and the AI community: more than 700 of OpenAI's roughly 770 employees signed a letter threatening to resign unless the board stepped down and reinstated Altman. On Tuesday, November 21, 2023, OpenAI announced an agreement in principle for Altman to return as CEO, with Brockman also rejoining, under a reconstituted board. The company also said it would conduct an independent review of the events and overhaul its governance and decision-making processes.
Why did it matter?
The controversy highlighted the challenges and tensions that arise when trying to balance the social and scientific goals of AI research with the economic and competitive pressures of the market. OpenAI was founded in 2015 as a non-profit organization with a mission to ensure that AI is aligned with human values and benefits all of humanity. However, in 2019 it created a capped-profit subsidiary, OpenAI LP, to attract more funding and talent and to compete with tech giants like Google and Facebook. Microsoft made an initial $1 billion investment in OpenAI LP and became its exclusive cloud provider; it has reportedly invested around $13 billion in total since then.
Altman, an OpenAI co-founder and board member since 2015, became CEO in 2019. He was instrumental in developing and launching ChatGPT, a conversational AI system built on a large language model that generates natural, coherent responses to user queries. ChatGPT has been widely praised for its innovation and performance, and has attracted more than 100 million users since its debut in November 2022. Under Altman, OpenAI also released Codex, a model that translates natural-language instructions into working code and underpins developer tools such as GitHub Copilot.
However, some board members and researchers at OpenAI were reportedly concerned that Altman was prioritizing the commercial success of products like ChatGPT and Codex over the safety and ethics of AI. According to these reports, they feared that Altman was not transparent about his dealings with Microsoft and other potential partners, and that he was not following OpenAI's stated principles, such as ensuring that AI is accessible, trustworthy, and beneficial for everyone. They also believed he was dismissing feedback from internal and external reviewers and not adequately addressing the potential risks and harms of these systems, such as bias, privacy violations, security flaws, and misuse.
What are the implications?
The board’s decision to fire Altman sparked a strong reaction from employees, investors, and the AI community, who expressed their support for Altman and his vision. They argued that Altman was a visionary leader who had made significant contributions to the field and was working to democratize and advance AI for the benefit of society. They also criticized the board for acting impulsively and opaquely, undermining the credibility and reputation of OpenAI, and demanded that it reinstate Altman and Brockman.
The board’s reversal of its decision was seen as a victory for Altman and his supporters, and as a recognition of his achievements and influence. However, it also raised questions about the future direction and governance of OpenAI, and about the role and responsibility of its board. Most of the directors who voted to remove Altman left the board, which was reconstituted with Bret Taylor as chair, joined by Larry Summers and Adam D'Angelo. Critics argued that the original board had failed to communicate and collaborate effectively with Altman and the rest of the leadership team, had offered no concrete evidence for its accusations, and had badly underestimated the reaction of employees and of investors such as Microsoft.
The board committed to an independent review of the events leading to Altman's firing and to strengthening OpenAI's governance and decision-making processes. It also said it would work closely with Altman and the leadership team to align their vision and goals, and to ensure that OpenAI remains true to its mission and values of developing AI that benefits all of humanity.