Both Tesla and Waymo are at too low an abstraction level for resilience. Cars need to understand roads/walls as concepts and make common sense inferences.
They're not comparable at all. Google/Waymo uses lidar and has been at it since 2009; per https://waymo.com/faq they have a "fully automated vehicle", but alas it's not for sale yet.
Tesla's self-driving vaporware, sold since Oct '16, is camera-only and should be activated any day now?
Waymo’s approach is also brittle, e.g. to changes in the road since the last map pass.
Tesla also uses radar and ultrasonic sensors. Lidar isn't necessary; it just makes depth measurement easier. It's lower-resolution than cameras, shorter-range, power-inefficient, and can interfere with other lidar units. Humans drive fine in sunlight, without a coherent IR scanning beam.
How do you know how much Waymo's approach relies on maps vs. vision? My expectation would be that maps are really only used for "macro" navigation, and that anything beyond that (keeping lanes, avoiding obstacles, etc.) is done on the spot.
Do you have any public source detailing this or do you just make this claim based on your own assumptions?
(Disclaimer: I work at Google but have no idea how our self-driving cars work.)
> Of course our streets are ever-changing, so our cars need to be able to recognize new conditions and make adjustments in real-time. For example, we can detect signs of construction (orange cones, workmen in vests, etc.) and understand that we may have to merge to bypass a closed lane, or that other road users may behave differently.
Doesn't sound like the cars blindly trust potentially stale maps.
>> They're not comparable at all. Google/Waymo uses lidar and has been at it since 2009; per https://waymo.com/faq they have a "fully automated vehicle", but alas it's not for sale yet.
If it's already fully automated, then why is it not for sale, yet?
My advice to the company's BOD & shareholders: have him take Udacity's self-driving car online course. After the first project, he'll clearly understand how limited detecting lanes by CV alone can be. To Uber's ex-CEO's point: LIDAR is the SAUCE.
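For context, the first project in that kind of course is typically lane detection with a Hough transform over edge pixels. A minimal pure-Python sketch of the voting step (illustrative only; a real pipeline, e.g. with OpenCV, adds color thresholding, Canny edge detection, and perspective cropping, and still breaks on faded paint, glare, and shadows):

```python
import math

def hough_lines(edge_points, width, height, theta_steps=180):
    """Vote edge pixels into (rho, theta) bins; return the strongest line.

    Each edge point (x, y) lies on every line satisfying
    rho = x*cos(theta) + y*sin(theta); colinear points pile votes
    into the same (rho, theta) bin.
    """
    votes = {}
    for (x, y) in edge_points:
        for t in range(theta_steps):
            theta = math.pi * t / theta_steps
            rho = int(round(x * math.cos(theta) + y * math.sin(theta)))
            votes[(rho, t)] = votes.get((rho, t), 0) + 1
    (rho, t), count = max(votes.items(), key=lambda kv: kv[1])
    return rho, math.pi * t / theta_steps, count

# Synthetic "edge image": 50 points along y = x, a 45-degree lane mark.
points = [(i, i) for i in range(50)]
rho, theta, count = hough_lines(points, 100, 100)
print(rho, round(math.degrees(theta)), count)  # 0 135 50: all points agree
```

The fragility is easy to see: the moment the lane marking is occluded, curved, or washed out, the votes scatter and there is no single dominant bin, which is exactly the limitation the course projects are designed to expose.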
Musk's central point is that if humans can drive with two cameras, so can a machine. And he's right. Why couldn't a machine do just as well as the visual cortex?
Humans have superior intellect, even if specific performance metrics are lower. We have experience. We learned to drive, generally in the particular area where we operate our cars, with all its idiosyncrasies. Then consider eye contact, nonverbal communication, bias, and a personal investment in the outcome that computers are incapable of. It's not as simple as better sensors and reaction time.
But isn't the whole point of autonomous cars that humans are pretty shitty drivers? If I could augment my vision with a 360º setup of cameras and LIDAR you better believe I would!
I'd say people are actually pretty good drivers. I'm more interested in autonomous for the time savings than I am for the potential safety improvements.
Tens of thousands of people are killed every year[1] in the United States alone; humans are awful at driving. If autonomous vehicles are able to make driving as safe as flying, it will be like curing breast cancer in terms of lives saved.
Our eyes are a lot better than cameras in a lot of ways. Eyes have better dynamic range, better sensitivity in low light, extremely high resolution in the center, and an extremely wide field of view. The nerve cells in our retina are also wired up to do a lot of processing in real time at extremely high resolution, e.g. things like motion/zoom/edge detection.
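To put rough numbers on the dynamic-range claim (these figures are ballpark and vary by source; this is just an illustration of the unit conversion): range is usually quoted in photographic stops, i.e. doublings of light, so a contrast ratio converts via log base 2.

```python
import math

def stops(contrast_ratio):
    """Dynamic range in photographic stops (doublings of light)."""
    return math.log2(contrast_ratio)

# Illustrative, hedged figures: a typical camera sensor captures on the
# order of 4096:1 (~12 stops) in a single exposure, while the eye spans
# roughly 10^6:1 (~20 stops) within a scene, and far more with adaptation.
print(round(stops(4096)))        # camera sensor, single exposure
print(round(stops(1_000_000)))   # eye, within one scene
```

Eight extra stops is a factor of ~256 in light intensity, which is why a camera blows out the sky or loses the shadows in scenes the eye handles effortlessly.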
And that's not even taking into account that we actually understand what we see and can reason about unexpected input and react accordingly.