How did an idealistic nonprofit, hoping to ensure advanced AI “benefits humanity as a whole,” turn into an $80 billion mega-corporation cutting corners and rushing to scale commercial products?
In 2015, OpenAI announced its existence to the world with a post on its blog: “OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole… Since our research is free from financial obligations, we can better focus on a positive human impact.”
Fast-forward nearly a decade, and OpenAI is a megacorporation valued at $80 billion, backed by Microsoft (which reportedly holds a 49% stake in its for-profit arm), led by a billionaire who has faced multiple allegations of deceptive and manipulative behavior, and locked in a race to commercialize flagship AI products such as ChatGPT and Sora.
Zooming out, this trajectory raises some hard questions. How did a mission-oriented nonprofit get taken over by financial interests so quickly? And what does that mean for safety and fairness at OpenAI? The answers aren’t always pretty.
Why OpenAI abandoned its nonprofit origins
In its early days, the founders of OpenAI correctly recognized that financial interests could significantly jeopardize the safety and caution required to develop “artificial general intelligence” (AGI), a technical term for AI systems that meet or exceed human-level intelligence. In fact, internal emails suggest that a primary motivation for forming OpenAI (the nonprofit) was the concern that, if they didn’t begin researching advanced AI, Google would do so instead, and that Google’s work would be driven primarily by financial incentives.
However, this was also the midst of the deep learning revolution, and the so-called “bitter lesson” (named for a 2019 essay by AI researcher Rich Sutton) was becoming more and more apparent to AI researchers. According to the bitter lesson, the primary driver of AI progress would not be advances in research and model design, but rather scaling: training models on ever-larger amounts of data with ever more computational resources, both of which cost enormous amounts of money. In other words, even simply designed AI models would get smarter on their own; you just had to make them much, much bigger.
When OpenAI realized the truth of the bitter lesson, they concluded that the only way to keep up with Google was to raise a significant amount of capital. They had previously had some success raising money through donations to the nonprofit, but the next step would mean raising money from real investors seeking a return on their capital.
For those reasons, OpenAI established a for-profit subsidiary of the nonprofit. This for-profit company (“OpenAI Global, LLC”) became the employer of most OpenAI staff, as well as the corporate vehicle responsible for developing and releasing OpenAI’s products. Its ownership was held by Microsoft, OpenAI employees, a handful of venture capital funds, and other private investors.
How the nonprofit lost control of OpenAI
In OpenAI’s new corporate structure, the nonprofit was supposed to retain control of its for-profit subsidiary. Principally, this meant the power to fire and replace the CEO for any reason, pursuant to the nonprofit’s mission of ensuring that AGI benefits humanity. This arrangement was central to OpenAI’s appeal for public trust. Sam Altman, OpenAI’s CEO, said as much on multiple occasions, noting that he thought it was “important” that the nonprofit’s board retained the power to fire him.
Altman has also faced a long history of allegations of manipulative and deceptive behavior, including from former colleagues, employees, and even co-executives at OpenAI. The board of directors appears to have reached a similar conclusion: in the fall of 2023, it exercised its right to fire Altman, stating that he “was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities” and that it “no longer has confidence in his ability to continue leading OpenAI.”
Much has been made of this event, and there is still a great deal of uncertainty about the specific circumstances that led the board to make this decision. However, an independent review by the law firm WilmerHale later affirmed that the decision was well-intentioned and within the board’s discretion.
There was one problem, however: the owners of OpenAI Global, LLC (the for-profit) weren’t happy. Many OpenAI employees held vested equity worth millions of dollars, and the leadership transition threatened to derail a tender offer that would have allowed them to sell those stakes for cash.
Microsoft, meanwhile, had already invested billions of dollars in OpenAI Global, LLC and was not keen to see that investment jeopardized by the nonprofit that legally controlled the company. For what we can only assume were financial reasons, OpenAI employees joined with Altman and Microsoft to deliver an ultimatum to the nonprofit board: reinstate Altman, or the entire company would leave to join Microsoft and continue its work there.
In the end, these parties won. The board walked back its decision and reinstated Altman, and two of the board members involved in his firing left the board. There is plenty of room to debate whether firing Altman was well-advised. One thing, however, is clear: the board did not, in practice, have the ability to fire Altman. The financial incentives were too strong, and the owners of OpenAI Global, LLC were ready to fight back.
What happened to safety concerns when financial incentives took over OpenAI?
While OpenAI was founded to ensure that the development of AI benefits humanity, that no longer appears to be the priority. Twice now, safety staff have left OpenAI in a mass exodus.
The first exodus came in late 2020, when a handful of senior OpenAI employees left the company and went on to found a competitor, Anthropic. Their departure was reportedly spurred by disagreements with OpenAI over safety.
The second came just last week. In 2023, OpenAI had announced a new safety team, Superalignment, as the locus of its alignment efforts, tasked with helping to ensure that highly capable future models act in humanity’s best interest. The team was co-led by Jan Leike and Ilya Sutskever. Both have now left the company, along with a number of other team members (including some who were fired by OpenAI).
And leaving OpenAI isn’t like leaving any other tech company. Employees preparing to resign are met with an unwelcome surprise: an implicit ultimatum asking them to either sign an extremely restrictive lifetime non-disparagement agreement or risk losing all of their vested equity in the company.
Making equity contingent on signing such a restrictive non-disparagement agreement isn’t typical at AI labs, and it can’t help but raise questions about whether OpenAI has something to hide. In fact, as of today, Fortune is reporting that OpenAI also failed to live up to its 2023 safety commitment to provide 20% of its computing resources to the now-defunct Superalignment team.
As far as we can tell, safety has taken a backseat at OpenAI, in favor of rushing new products to market. In fact, that’s exactly what Jan Leike, the team’s former co-lead, said about the company after he left:
[Screenshot of Jan Leike’s post on X, in which he writes that “safety culture and processes have taken a backseat to shiny products.”]
Source: Jan Leike on X
What can be done?
Even though OpenAI has changed radically since its founding as a nonprofit to advance the public interest, it is still fundamentally accountable to its customers, partners, governments, and the general public. That’s why we are calling on our supporters to make their voices heard and demand that OpenAI be held accountable.
Alongside a coalition of other groups, we have released an open letter demanding that OpenAI be held accountable to the public. Click the button below to sign our letter.
Cover image credit: World Economic Forum