Maybe she thinks the _world_, not just OpenAI, is a few short years away from building world-changing AGI, and she wants to compete and do her own thing (and easily raise $1B, like Ilya).
Probably off-topic for this thread, but my own rather fatalist view is that alignment/safety is a waste of effort if AGI happens at all. True AGI will be able to self-modify at a pace beyond human comprehension, and it won't be obligated to comply with whatever values we've set for it. If it can be reined in by human-set rules, like a magical spell, then it isn't AGI. If humans have free will, then AGI will have it too. Humans frequently go rogue and reject value systems that took decades to be baked into them; there is no reason to believe AGI won't do the same.
She studied math early on, so she's definitely technical. As CTO, she needs to balance the managerial side while maintaining enough understanding of the underlying technology.
Again, it's easy to be a CTO at a startup; you just have to be there at the right time. Your role literally is to handle all the stuff researchers/engineers have to deal with. Do you really think Mira set the technical agenda and architecture for OpenAI?
It's a pity that the HN crowd doesn't go one level deeper and truly understand things from first principles.