Under the arrangement at the time, the board's duty was to the mission: to "advance digital intelligence in the way that is most likely to benefit humanity as a whole."
Implied in that is that if the company can't advance digital intelligence in a way that is beneficial, then it shouldn't advance it at all. It's easy to imagine a situation where the board could feel that their obligation to the mission is to blow up the company. There's nothing contradictory in that, nor do they have to be ML experts to do it.
It's weird and surprising that this was the governance structure at all, and I'm sure it won't ever be again. But given that it was, there's nothing particularly broken about this outcome.