
I think we are overestimating the amount of intelligence we humans need while driving...


The edge cases are unclear: a construction zone, heavy equipment, school kids, another car poised to enter the road.


They are probabilistically modeled already. You keep tracking the other objects, compute various possibilities for their movement, and, based on their past and present behavior, pick the most probable one. If the car encounters a lower-than-threshold probability, it can always slow down or stop, as a non-reckless human would. Even end-to-end deep learning can recognize construction zones and learn to drive around traffic cones.
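As a rough illustration of that idea, here is a minimal sketch of a "predict futures, act on a probability threshold" loop. It assumes a constant-velocity motion model with Gaussian noise standing in for behavioral uncertainty; the threshold, safety radius, and function names are hypothetical, not any real stack's API.

```python
import numpy as np

SAFE_PROB_THRESHOLD = 0.9   # hypothetical tuning parameter
SAFETY_RADIUS_M = 2.0       # assumed clearance around the ego vehicle

def predict_positions(pos, vel, horizon_s=3.0, dt=0.1, noise_std=0.5, n_samples=200):
    """Sample possible future positions under a constant-velocity model
    with Gaussian perturbations standing in for behavioral uncertainty."""
    steps = int(horizon_s / dt)
    t = np.arange(1, steps + 1)[:, None] * dt               # (steps, 1)
    nominal = pos + t * vel                                  # (steps, 2)
    noise = np.random.normal(0.0, noise_std, (n_samples, steps, 2))
    return nominal[None] + noise                             # (n_samples, steps, 2)

def fraction_clear(ego_path, obj_samples):
    """Fraction of sampled object futures that never enter the ego path's safety radius."""
    d = np.linalg.norm(obj_samples[:, :, None, :] - ego_path[None, None, :, :], axis=-1)
    clear = d.min(axis=(1, 2)) > SAFETY_RADIUS_M
    return clear.mean()

def decide(ego_path, tracked_objects):
    """Slow down or stop whenever any tracked object's 'stays clear'
    probability drops below the threshold; otherwise proceed."""
    for pos, vel in tracked_objects:
        samples = predict_positions(np.asarray(pos, float), np.asarray(vel, float))
        if fraction_clear(ego_path, samples) < SAFE_PROB_THRESHOLD:
            return "slow_or_stop"
    return "proceed"

if __name__ == "__main__":
    ego_path = np.stack([np.linspace(0, 30, 30), np.zeros(30)], axis=1)  # straight ahead
    pedestrian = ((15.0, 5.0), (0.0, -1.2))  # position (m), velocity (m/s) toward the road
    print(decide(ego_path, [pedestrian]))
```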


The edge cases are endless (a woman in a wheelchair chasing a chicken), and without a semantic world model the car will have no idea what to do. A low-level object-avoidance architecture isn't enough.


Actually, that case would be detected as two objects on a collision course, and the car would stop or slow down. Keep in mind that the sensors scan the surroundings at high frequency, and some of them, like radar, can see "beyond the walls". Ultrasonic sensors can also detect the state of the road surface (dry, wet, icy, etc.).
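For concreteness, a sketch of one way such a "collision course" check could work, using a simple closest-point-of-approach test between two tracked objects under constant velocities. The time window and clearance values are made up for illustration.

```python
import numpy as np

TTC_LIMIT_S = 3.0       # hypothetical reaction window
MISS_DISTANCE_M = 1.5   # assumed minimum acceptable clearance

def on_collision_course(p1, v1, p2, v2):
    """Closest-point-of-approach test between two tracked objects.

    Returns True if the predicted minimum separation falls below
    MISS_DISTANCE_M within TTC_LIMIT_S seconds, assuming both objects
    keep their current velocities."""
    dp = np.asarray(p2, float) - np.asarray(p1, float)   # relative position
    dv = np.asarray(v2, float) - np.asarray(v1, float)   # relative velocity
    speed2 = dv @ dv
    if speed2 < 1e-9:                                     # moving in lockstep
        return np.linalg.norm(dp) < MISS_DISTANCE_M
    t_cpa = -(dp @ dv) / speed2                           # time of closest approach
    if t_cpa < 0 or t_cpa > TTC_LIMIT_S:                  # already past, or too far out
        return False
    min_sep = np.linalg.norm(dp + t_cpa * dv)
    return min_sep < MISS_DISTANCE_M

# Wheelchair at (10, 3) m heading toward the road, chicken at (10, -1) m fleeing:
print(on_collision_course((10, 3), (0, -1.0), (10, -1), (0, 0.8)))  # True
```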


Yes, but when can the car proceed, and in which direction? It helps to know what a wheelchair, a woman, and a chicken are and how they behave. It also helps to be able to speak with them.

Superhuman sensors will be great, but you really can't dodge AGI for true full self-driving.


It's probably super easy for the car to stop and open the window for a person to talk ;-)


True, you could have the passenger make "shoo" noises :D



