One of the features of the common human firmware is the self-preservation instinct. It lets us trust that our fellow drivers, while still prone to mistakes, won't generally make obviously suicidal errors. Can one say the same about a new ML algorithm running on a board designed half a decade ago? How exactly would one know, without a thorough audit?