
> 99% isn’t good enough for truly critical applications, especially when you don’t know for sure that it’s actually 99%; there’s no way to detect which 1% might be wrong; there’s no real path to 100%; and critically: there’s no one to hold responsible for getting it wrong.

AI also introduces the possibility of systematic error where human error would be stochastic.

A human might correctly identify the number of rentable units from a spreadsheet (to pick an example from this article) only 97% of the time, while an AI might manage 99%; but the same human will make a different 3% of errors each day. The consequences of any single failure are more limited and more diffuse.

On the other hand, the AI may work perfectly right up until a holding company redesigns their data tables for the 100th time, whereupon it misreads every financial report with much more concentrated ill effect.
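The contrast can be made concrete with a small simulation. This is a hedged sketch, not anything from the article: the error rates (3% human, 1% AI), the report count, and the "schema redesign on day 50" trigger are all illustrative assumptions. It shows how uncorrelated daily errors spread thinly across reports, while a single systematic failure concentrates all the damage at once.

```python
import random

random.seed(0)
N_REPORTS = 1000  # hypothetical number of financial reports processed per day
DAYS = 100

# Stochastic human: an independent 3% of reports are misread each day.
# The errors land on different reports from day to day.
human_daily_errors = [
    {i for i in range(N_REPORTS) if random.random() < 0.03}
    for _ in range(DAYS)
]
# Reports that the human gets wrong on *every* day: essentially none,
# since 0.03**100 is vanishingly small per report.
always_wrong_human = set.intersection(*human_daily_errors)

# Systematic AI: 1% error rate normally, but after an assumed schema
# redesign on day 50 it misreads every report, every day.
ai_daily_errors = []
for day in range(DAYS):
    if day < 50:
        errs = {i for i in range(N_REPORTS) if random.random() < 0.01}
    else:
        errs = set(range(N_REPORTS))  # correlated failure: 100% error rate
    ai_daily_errors.append(errs)

worst_human_day = max(len(e) for e in human_daily_errors)
worst_ai_day = max(len(e) for e in ai_daily_errors)
print(f"reports always wrong (human): {len(always_wrong_human)}")
print(f"worst single day, human: {worst_human_day} / {N_REPORTS}")
print(f"worst single day, AI:    {worst_ai_day} / {N_REPORTS}")
```

Under these assumptions the human's worst day still gets the large majority of reports right, while the AI's worst day gets all of them wrong, which is the "concentrated ill effect" in question.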


