Aligned with their stated goal of "protecting humanity," killing OpenAI would slow AGI development, theoretically allowing effective protections to be put in place, and it might set an example that advances the mission at other companies. But slowing down responsible development gives a lead in the race to militaries and states of concern, which are the entities where the main concern should lie.