This was pretty clearly an attempt by the board to reassert control, which was slowly slipping away as the company became more enmeshed with Microsoft.
I'm not trying to throw undeserved shade, but why do we think this is something as complex as that and not just plain incompetence? Especially given the cloak-and-dagger firing without consulting or notifying any of their partners beforehand. That's just immaturity.
Then, he progressively sold more and more of the company's future to MS.
You don’t need ChatGPT and its massive GPU consumption to achieve the goals of OpenAI. With a small research team and a few million, this company becomes a quaint, quiet overachiever.
The company started to hockey-stick and everyone did what they knew: Sam got the investment and money, and the tech team hunkered down and delivered GPT-4, with GPT-5 soon to follow.
Was there a different path? Maybe.
Was there a path that didn’t lead to selling the company for “laundry buddy”? Maybe also.
On the other hand, MS knew what it was getting into when its hundredth lawyer signed off on the investment. To now turn around like a surprised Pikachu when the board starts to do its job and their man on the ground gets the boot is laughable.
You're arguing their most viable path was to fire him, wreak havoc and immediately seek to rehire and further empower him whilst diminishing themselves in the process? It's so convoluted, it just might work!
Whether fulfilling their mission or succumbing to palace intrigue, it was a gamble they took. If they didn't realize it was a gamble, then they didn't think hard enough first. If they did realize the risks, but thought they must, then they didn't explore their options sufficiently. They thought their hand was unbeatable. They never even opened the playbook.
Oh, then my apologies; it's unclear to me what you're arguing. That the disaster they find themselves in wasn't foreseeable?
That would imply they couldn’t have considered that Altman was beloved by vital and devoted employees? That big investors would be livid and take action? That the world would be shocked by a successful CEO being unceremoniously sacked at a moment of unprecedented success, with (unsubstantiated) allegations of wrongdoing, and would leap on the story? Generally those are the kinds of things that would have come up on a "Fire Sam: Pros and Cons" list, or in any kind of "what's the best way to get what we want and avoid disaster" planning session. They made the way it was done the story, and if they had a good reason, it's been obscured and undermined by the attempt to reinstate him.
We're still waiting for explanations from Altman about his alleged involvement in conflicting ventures while serving as CEO of OpenAI.
According to the FT, this could be the cause of the firing:
“Sam has a company called Oklo, and [was trying to launch] a device company and a chip company (for AI). The rank and file at OpenAI don’t dispute those are important. The dispute is that OpenAI doesn’t own a piece. If he’s making a ton of money from companies around OpenAI there are potential conflicts of interest.”
Isn’t it amazing how companies worry about lowly, ordinary employees moonlighting, but C-suiters and board members being involved in several ventures is totally normal?
I don’t see how that factors in. What matters is OpenAI’s enterprise customers reading about a boardroom coup in the WSJ. Completely avoidable destruction of value.
I think what people in this thread and others are trying to say is that to run an organization like OpenAI you need lots and lots of funding. AI research is incredibly costly due to highly paid researchers and an ungodly amount of GPU resources. Putting all current funding at risk by pissing off current investors and enterprise customers puts the whole mission of the organization at risk. That's where the perceived incompetence comes from, no matter how good the intentions are.
I understand that. What is missing is the purpose of running such an organisation. OpenAI has achieved a lot, but is it going in the direction, and towards the purpose, it was founded on? I do not see how one can argue that it is. For a non-profit, creating value is a means to a goal, not a goal in itself (as opposed to a for-profit org). People thinking that the problem with this move is that it destroys value for OpenAI showcase the real issue perfectly.
Some would say it is the opposite way around. The mission of OpenAI was not supposed to be maximising profit/value, especially if it can be argued that doing so goes directly against its original purpose.
It is hard to negotiate when the investors and the for-profit part hold much more power. They tried to present them with a fait accompli, as this was their only chance, but they seem to have failed. I do not think they had a better move in the current situation, sadly.
You do not fire a CEO because you hold some personal grudge against them. You fire them because they did something wrong. And I do not see any evidence or indication of smearing Altman, unless they are lying about it (i.e., I see no indication that they are).
>Bigger concern would be the construction of a bomb, which, still, takes a lot of hard to hide resources.
The average postgraduate in physics can design a nuclear bomb. That ship sailed in the 1960s. Anyone who uses that as an argument wants a censorship regime that the medieval Catholic Church would find excessive.
To be fair, "god-like" is a very subjective term. You could make a claim for many different technical advancements representing god-like capabilities. I'd claim that many examples exist today, but many of them are not readily available to most people, for inherent or regulatory reasons.
Now, I feel even just "OK" agential AI would represent god-like abilities: being able to spawn digital homunculi that do your bidding relatively cheaply, with limited knowledge and skill required on the part of the conjuror.
Again, this is very subjective. You might feel that god-like means an entity that can build Dyson spheres and bend reality to its will. That is certainly god-like, but it's a much higher threshold than the one I'd use.
If Microsoft had to put out a statement saying "it's all good, we got the source code", clearly the openness of OpenAI was lost a while ago. This move by the board was presumably primarily good for the board.
It would be wise to view any investment in OpenAI Global, LLC in the spirit of a donation.
The Company exists to advance OpenAI, Inc.'s mission of ensuring that safe artificial general intelligence is developed and benefits all of humanity. The Company's duty to this mission and the principles advanced in the OpenAI, Inc. Charter take precedence over any obligation to generate a profit. The Company may never make a profit, and the Company is under no obligation to do so. The Company is free to re-invest any or all of the Company's cash flow into research and development activities and/or related expenses without any obligation to the Members.
I guess "safe artificial general intelligence is developed and benefits all of humanity" means an AI that is both open (hence the name) and safe.
No. It's anti-openness.
The true value in AI/AGI is the ability to control the output. The "safe" part of this is controlling the political slant that "open" AI models allow. The technology itself has much less value than the control available to those who decide what is "safe" and what isn't. It's akin to raiding the libraries and removing any book, idea, or reference to a historical event that isn't culturally popular.
"The board" isn't exactly a single entity. Even if the current board made this decision unanimously, they were a minority at the beginning of the year.