This fits the board's letter re: firing Sam far better than a simple power struggle or a disagreement over commercialisation.
Seeing a huge breakthrough and then not reporting it to the board, who then find out via a staff letter, certainly counts as a "lack of candour"...
As an aside, assuming a doomsday scenario, how long can secrets like this stay out of the hands of bad actors? On a scale of 1 to enriched uranium?
Not long at all. Presumably you could write the method on the back of a napkin and lead another top AI researcher to the same result. That's why trying to sit on breakthroughs is the worst option, and making sure they are widely distributed along with alignment methods is the best option.
Yudkowsky's doomsday cult almost blew OpenAI to pieces, which would have scattered everyone who knows the details to the wind like dandelion seeds. What's next? A datacenter bombing or killing key researchers? We should be happy that this particular attempt failed, because this cult is only capable of strategic actions that make things far more dangerous.
This will be solved like every other engineering and science problem: with experiments and iteration, in a controlled setting where potential accidents have small consequences.
An unaligned system isn't even useful, let alone safe. If it turns out that aligning AGI is very hard, we will obviously not deploy it into the world at scale. It's bad for the bottom line to be dead.
But there’s truly no way out but forward; game theory constrains paranoid actors more than the reckless. A good balance must be found, and we’re pretty close to it.
None of the "lesswrong" doomsday hypotheses have much evidence behind them; if that changes, we will reassess.
I have no overall position, but climate change and nuclear weapons seem like two quite strong counterexamples to this being a sufficient condition for safety.
I think if we have a major AI-induced calamity... then I worry much more. Although... given enough scary capability in a short enough period of time... I could see violence being on the table for the more radical amongst the group.
Your concern is very interesting though, and I think important to consider. I wonder if the FBI agrees.
When Yudkowsky says that it's an acceptable outcome to be left with a minimum viable breeding population after a nuclear war triggered by enforcement of AI slowdown demands, that is as far from non-violent as you can get without someone actually being threatened directly.
The AI doom cult is not a peaceful movement. When an extremist tells you what they will do, you damn well listen to them.
Why do you perceive Altman as "slimy with limitless ambition"? I've always perceived him as being quite humble from his interviews and podcast appearances.