They're willing to take liability for it, so they're confident enough that their legal team and accountants are satisfied. If Tesla were at that point I think most people here would be content, no need to definitively prove anything.
Ah yes, all I have to do to get justice if I get killed by a Mercedes run amok is take on a multi-billion-dollar legal team. That makes me confident.
The fact that they accept liability is precisely to avoid [your next of kin] needing to "take on a multi-billion dollar legal team" if you get killed by a Mercedes run amok. Now then, if on the other hand you were to get killed by a Tesla run amok...
A big company being responsible for a crash is the best-case scenario. You would much rather sue Mercedes than Joe Schmoe for an accident, no doubt. They've got deep pockets, and your local courts are not particularly friendly to them.
Any person who knows what they're talking about is asking Tesla to have an appropriate development process for safety critical systems.
Tesla's systems are unsafe by default if they don't follow a safety life cycle. And they don't: I've seen dick-sharing apps with better life-cycle processes.
A lot of people seem to think that even a single accident is unacceptable. Quite a few of the comments on this site and others about self-driving cannot be explained without understanding that the poster has that belief, at least implicitly.
We are lucky that our ancestors were not so risk averse, because if they were we would not have cars at all, or airplanes.
> A lot of people seem to think that even a single accident is unacceptable. Quite a few of the comments on this site and others about self-driving cannot be explained without understanding that the poster has that belief, at least implicitly.
That makes sense to me. I'm happy to accept the presence of full self-driving technology on the roads, for other people, once it has an accident rate comparable to or slightly better than humans.
I personally won't use one until it is so much safer than a human that safety itself is the #1 feature. Until then, what's the upside? I can screw around on my phone more often? I do that too much already, and it's not the kind of benefit that cancels out the potential downside: "...but you died because of an unhandled edge case in version 27.1.828 of our software, fixed in the next release, that you as an attentive human would have easily handled." That just seems like such a banal way to go, for a bit more screen time.
I don't think my take is drastically out of the mainstream. It also seems to me the main thing separating my point of view from "ban all FSD until perfect" is a willingness to let other people make choices I don't think are good.
Yes, I see some people promote that idea, but that was never the expectation on the part of the self-driving car creators or the regulators. They also don't expect cars to be able to solve complex philosophical questions regarding trolleys. Nor does the general public have that expectation.
I don't think it's reasonable to expect an FSD vehicle to never be involved in an accident. I do expect it to never be the cause of an accident. I really don't feel that is unreasonable.