Altman has an agreement to return, while most of the board that fired him is out.
Meanwhile, some new details emerged about the days leading up to Altman's firing. "In the weeks leading up to his shocking ouster from OpenAI, Sam Altman was actively working to raise billions from some of the world's largest investors for a new chip venture," Bloomberg reported. Altman reportedly was traveling in the Middle East to raise money for "an AI-focused chip company" that would compete against Nvidia.
As Bloomberg wrote, "The board and Altman had differences of opinion on AI safety, the speed of development of the technology and the commercialization of the company, according to a person familiar with the matter. Altman's ambitions and side ventures added complexity to an already strained relationship with the board."
"According to people familiar with the board's thinking, members had grown so untrusting of Altman that they felt it necessary to double-check nearly everything he told them," the WSJ report said. The sources said it wasn't a single incident that led to the firing, "but a consistent, slow erosion of trust over time that made them increasingly uneasy," the WSJ article said. "Also complicating matters were Altman's mounting list of outside AI-related ventures, which raised questions for the board about how OpenAI's technology or intellectual property could be used."
OpenAI said the "new initial board" will consist of D'Angelo, economist Larry Summers, and former Salesforce co-CEO Bret Taylor, who will be the chair.
Those pesky board members with their annoying AI safety ideals are gone, replaced by new board members with excellent experience in squeezing profits. Next they'll probably attempt to turn the non-profit parent org into a for-profit corporation so they can get equity/stock grants. Yay!
Because "AI" hype is what the venture capitalists are feeding to the financial and tech press these days, and Sam is the venture capitalists' biggest "AI" star because he's a good snake oil salesman.
While not inaccurate, that is extremely reductive. The rapid improvement of transformer-based AI is currently one of the most interesting developments across many fields, including the arts and sciences, and it also has the widest gap between potential good and potential harm. OpenAI and its complex governance model are directly at the center of that growth and embroiled in one of the most fascinating governance struggles in recent history.
This drama, combined with how disruptive the technology is likely to be across a wide range of markets and the world's economies, makes it interesting, and it has the added benefit of being a news departure from the bombings and other terrible stuff going on around the world. Much more fun for popcorn and chat than wars and such.
While large language models and similar "AI" technologies are very overhyped, they are already plenty usable for things like deepfakes, which, if left unchecked, have significant potential to be weaponised and destabilize societies.
OpenAI is a non-profit that's behind those machine learning models and practical applications like ChatGPT. In principle it should govern development so that it's safe and responsible. There are many allegations that Sam Altman became focused on profit, betraying the non-profit mission.
While OpenAI is not technically controlled by commercial entities (Microsoft holds a 49% stake in its for-profit subsidiary), it's entirely dependent on them for funding, which likely led to it being strong-armed into letting Altman regain control.
I’m still confused how their chief scientist was part of the coup to remove Altman and at the same time was one of the signatories on the letter demanding his return.
I actually think it was because of Greg Brockman, the previous chair of the board, who quit after hearing the news about Altman.
They told him he was vital to the company, after having fired Sam and removed him (Brockman) from the board.
Ilya, their chief scientist, officiated Greg and his wife's wedding. Apparently Greg's wife pleaded with Ilya to support their return.
I think the main issue here is OpenAI's stated goal of developing safe AGI to benefit all of humanity; destruction of the company and not making any profit would be aligned with that.
However, with so many players developing for-profit AI catching up, it is probably safer to have an OpenAI taking risks than not having any OpenAI at all.
Ilya probably hoped that Greg and most coworkers would stay without Altman, but since they were not staying, the prospects became bad enough for him to regret it.
The three who are leaving the board are OpenAI Chief Scientist Ilya Sutskever, entrepreneur Tasha McCauley, and Helen Toner of the Georgetown Center for Security and Emerging Technology.
OpenAI's interim CEO, Emmett Shear, who led the company for a few days, wrote, "I am deeply pleased by this result, after ~72 very intense hours of work."
A Wall Street Journal behind-the-scenes report noted that the nonprofit board's mission is to "ensur[e] the company develops AI for humanity's benefit—even if that means wiping out its investors."