Expanding on this, human failures and machine failures are qualitatively different in ways that make our systems generally less resilient against the machine variety, even with a theoretically near-perfect implementation. Consider a bug in an otherwise perfect self-driving routine that causes crashes under one highly specific scenario. Roads are essentially static structures, so the same sites trigger the bug every time; you've effectively concentrated 100% of crashes into (for example) 1% of corridors. Practically speaking, those corridors would be forced into perpetual closure.
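
A minimal back-of-the-envelope sketch of that concentration effect (all numbers hypothetical: 10,000 corridors, 1,000 crashes a year, and a closure threshold picked purely for illustration):

    corridors = 10_000
    total_crashes = 1_000        # same total failure budget in both scenarios
    closure_threshold = 5        # crashes/year that would force a corridor closed

    # Human-style failures: spread roughly evenly across every corridor.
    distributed_rate = total_crashes / corridors          # 0.1 per corridor

    # Machine-style failures: the same total concentrated into 1% of corridors.
    affected = corridors // 100                           # 100 corridors
    concentrated_rate = total_crashes / affected          # 10 per corridor

    print(f"distributed:  {distributed_rate:.1f}/corridor (below threshold)")
    print(f"concentrated: {concentrated_rate:.0f}/corridor "
          f"({concentrated_rate / closure_threshold:.0f}x the closure threshold)")

Same total number of crashes either way; only the spatial distribution changes, and that alone is what pushes the affected corridors past any plausible closure threshold.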

This is all to say that randomly distributed failures are more tolerable than the same number of failures concentrated into a handful of places. Human errors are rather nice by comparison because they're inconsistent in locality while still being predictable in macroscopic terms (e.g., on any given day there will always be far more rear-endings than head-on collisions). When it comes to machine networks, all it takes is one firmware update for both the type and locality of their failure modes to shift in a wildly different direction.
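
Here's a rough simulation of the locality point, assuming (purely hypothetically) that human failures land uniformly at random while a machine bug always fires at the same fixed set of sites:

    import random
    from collections import Counter

    random.seed(0)
    corridors, crashes_per_day, days = 10_000, 100, 365

    human_counts, machine_counts = Counter(), Counter()
    # Hypothetical machine bug: tied to 1% of corridors, fixed until a firmware change.
    machine_fault_set = random.sample(range(corridors), corridors // 100)

    for _ in range(days):
        # Human errors: new random locations every day, stable only in aggregate.
        human_counts.update(random.choices(range(corridors), k=crashes_per_day))
        # Machine errors: identical trigger sites every day.
        machine_counts.update(random.choices(machine_fault_set, k=crashes_per_day))

    print("worst human corridor:  ", human_counts.most_common(1)[0][1], "crashes/yr")
    print("worst machine corridor:", machine_counts.most_common(1)[0][1], "crashes/yr")

With identical daily totals, the worst human corridor ends up with a dozen or so crashes per year while the worst machine corridor absorbs hundreds, which is the difference between a statistic and a closure order.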


