Personally, I'd argue that if the AI killed someone due to being incompetent (as in, a human in a fit state to drive would not have made this mistake), the punishment should go to the corporation that signed off on the AI passing all relevant tests.
The nature of the punishment doesn't necessarily have to follow the same rules as for human incompetence, e.g. if the error occurs due to some surprising combination of circumstances that no reasonable tester would have thought to test. I can't really give an example of that, because anything I can think of is absolutely something a reasonable tester would have thought to test, but for the sake of discussion (without taking this too seriously): imagine a celebrity crossing a road while a large poster of their own face stands right behind them.
Let me reiterate my original caution: human drivers are really bad. More than 40,000 people die in car crashes in the US every year! If self-driving cars make mistakes in some cases that humans would not, but overall they would cause only 30,000 deaths per year, then I want self-driving required. Thus I want liability to reflect not that perfection is required, but that they are better than humans.
Don't get me wrong, perfection should be the long-term goal. However, I will settle for less than perfection today, so long as it is better than what human drivers manage.
Though "better" is itself hard to figure out - drunk (or otherwise impaired) drivers are a significant factor in car deaths, as is bad weather, in which self-driving currently doesn't operate at all. The statistics need to show that self-driving cars are better than non-impaired drivers in all the situations where humans drive before anyone can claim they are better. (I know some data is collected, but so far I haven't seen any independent analysis. The potentially biased analysis looks good, though - but again, it doesn't cover all weather conditions.)
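To make that concrete, here is a minimal sketch in Python of why an aggregate comparison isn't enough. All the numbers are invented placeholders, not real data; the point is only that an overall fatality rate can look favourable for a self-driving fleet simply because it logs zero miles in the conditions (bad weather, the trips impaired drivers currently take) where humans crash most.

    # Hypothetical illustration: why aggregate crash stats can mislead.
    # All figures below are invented placeholders, not real data.

    # (condition, human_fatalities, human_miles, av_fatalities, av_miles)
    data = [
        ("clear weather, sober driver",    18_000, 2.4e12, 9, 1.2e9),
        ("clear weather, impaired driver", 12_000, 3.0e11, 0, 0.0),  # AV fleet logs no "impaired" miles
        ("rain / snow / fog",              10_000, 5.0e11, 0, 0.0),  # AV fleet disengages, so no miles at all
    ]

    def rate_per_100m_miles(fatalities, miles):
        """Fatalities per 100 million vehicle miles; None if no miles were driven."""
        return None if miles == 0 else fatalities / miles * 1e8

    # Aggregate comparison: looks clearly better for the AV fleet...
    human_rate = rate_per_100m_miles(sum(d[1] for d in data), sum(d[2] for d in data))
    av_rate    = rate_per_100m_miles(sum(d[3] for d in data), sum(d[4] for d in data))
    print(f"overall  human: {human_rate:.2f}  AV: {av_rate:.2f}  (per 100M miles)")

    # ...but the per-condition breakdown shows the AV simply never drove
    # in two of the three conditions, so "better" is not established there.
    for condition, hf, hm, af, am in data:
        print(condition, "| human:", rate_per_100m_miles(hf, hm),
              "| AV:", rate_per_100m_miles(af, am))

An independent analysis would need that kind of per-condition breakdown, with comparable exposure in each condition, before "better than humans" means anything.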
The AI's benefits should be irrefutable, but this isn't as simple as "at least 10x better than human drivers" or any other fixed factor. It's that whatever mistakes they do make, if you show the video of a crash to the general public, the public generally agrees they would have also crashed under those conditions.
Right now… Tesla likes to show off stats that suggest accidents go down while their software is active, but then we see videos like this, and go "no sane human would ever do this", and it does not make people feel comfortable with the tech: https://electrek.co/2025/05/23/tesla-full-self-driving-veers...
For every single way the human vision system fails, if an AI also makes that mistake, it won't get blamed for it. But if it solves every single one of those perception errors we're vulnerable to (what colour is that dress, is that a duck or a rabbit, is that an old woman close up facing us or a young woman in the distance looking away from us, etc.) yet also brings in a few new failure modes we don't have, it won't get trusted.