The Information sometimes unlocks articles for HN readers, but for those who don't have access, the biggest takeaway for me is that Uber has made relatively little progress toward fully autonomous cars despite a huge amount of investment.
The most shocking detail:
> Uber’s vehicles had also been having what the company refers to as a bad experience—such as a sudden jerk or a potentially dangerous movement—every one-third of a mile on average.
That's really bad. While there are no uniform standards for what a "bad experience" entails, my general sense is that Waymo and Cruise (widely considered first and second) are doing 10-100x better by that metric.
Another damning bit:
> Uber couldn’t even get the prototype to drive a one-mile stretch between the unit’s two Pittsburgh offices, with the goal of shuttling employees back and forth
I don't even know what to say to that. If you can't make your cars work reliably on a known, one-mile stretch of road, what have you been doing for so many years?
[Disclaimer: I work for Lyft, but not on the self-driving car program.]
Autonomous driving to me seems a lot like the domain of robotic crowd navigation, and the crowd navigation research seems to show us that most of the methods suspected to be in use by AV programs tend to fall apart under trivial thresholds of uncertainty/congestion. So I'm less surprised that Uber didn't make much fundamental progress here, and more surprised that anybody else has done better.
edit: I say this as someone who very much wants to see this class of science & technology succeed, but I often wonder if we're 2 or 3 Fields Medals away from really getting fundamentally closer to success.
Indeed amazing, because I know employees at Tesla who take it home on Autopilot on the freeway! It's also very unsafe, and I don't believe camera-only is the way to go, but at least it can drive for more than a mile.
Freeway driving is much, much easier than in-town roads - there's been successful autonomous freeway driving since the 90s.
When you drive on the freeway, everyone's going the same way (or at least you'd hope), the configuration of the road is well known, lines are consistent, etc. It's all the little things that need to be accounted for - opposing traffic, pedestrians, parked cars, random obstacles, and any number of other road, traffic and obstacle configurations that exist, that make consistently and safely driving elsewhere difficult for autonomous systems.
> Indeed amazing, because I know employees at Tesla who take it home on Autopilot on the freeway! It's also very unsafe, and I don't believe camera-only is the way to go, but at least it can drive for more than a mile.
Google/Waymo ran a pilot letting employees commute, and canceled it because the system was too good: staff stopped paying attention.
This is a clip; there was a longer clip and description in a Waymo CEO talk in the past year or two.
The problem isn't that it's "so good", it's that it's "usually perfect".
Most driving in California weather is very mundane and boring. But the moment you throw an outside variable into the mix, the computer cannot identify objects, and therefore cannot predict outcomes.
We're still teaching them to recognize Stop signs. How are they supposed to distinguish between a floating plastic bag and a 5 year old child?
At this point I'm starting to be willing to consider it fairly safe when the driver is paying attention. Every press article about an autopilot crash seems to involve somebody who is completely inattentive. I can't recall any published incidents involving a driver using it as an assist, while continuing to pay attention.
I bought an iRobot, and I really think that if they incorporated a camera it could be significantly better. There are obvious things it picks up and gets stuck on, like shoelaces and power cables. Obviously a vacuum is not self-driving, but it's a similar problem, and at least my casual observation is that proximity sensors alone won't solve real-world navigation: we need some kind of inference to understand what the impact of an object's presence would be if we ran over it or not. I believe there has to be a solution with many sensors; visual plus lidar mapping makes sense too.
The iRobot is actually not really a similar problem in scope at all.
I can appreciate why the lay person may believe them to be similar, but it's hard to express just how much more challenging even the most basic form of driver assistance technology is in comparison to an autonomous vacuum.
The biggest challenge is speed. Just look at the napkin math of detection range of sensors versus stopping distance of a car. The higher the speed, the more range you need... and lidar, radar, etc. all have a fixed maximum detection range.
The rule of thumb is 50m for radar/lidar, depending on how much you are willing to pay for the sensor. Stopping the car safely takes about 56m from 100km/h.
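A quick sanity check on that napkin math. This is a sketch with assumed numbers (6.9 m/s², roughly 0.7 g, as a hard-braking deceleration; perception/reaction latency ignored, so real stopping distances are longer):

```python
# Braking distance vs. sensor range, from basic kinematics: d = v^2 / (2a).
# 6.9 m/s^2 (~0.7 g) is an assumed hard-braking deceleration; system
# reaction time is ignored, so these are optimistic lower bounds.

def braking_distance_m(speed_kmh, decel_mps2=6.9):
    """Distance needed to brake from speed_kmh down to a stop."""
    v = speed_kmh / 3.6  # km/h -> m/s
    return v * v / (2 * decel_mps2)

SENSOR_RANGE_M = 50  # the rule-of-thumb radar/lidar range mentioned above

for kmh in (50, 80, 100, 130):
    d = braking_distance_m(kmh)
    verdict = "within" if d <= SENSOR_RANGE_M else "EXCEEDS"
    print(f"{kmh:>3} km/h: {d:5.1f} m ({verdict} {SENSOR_RANGE_M} m range)")
```

At 100 km/h this gives ~56 m, matching the figure above, and that's before adding any detection or reaction latency; at 130 km/h the braking distance alone is nearly double the sensor range.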
Then of course, the rest of the struggle is all in exception handling (getting from 70% to 99%).
Comparatively, the iRobot can use the dumbest and cheapest sensors and still be fully aware of everything around it in its relatively slow path of travel. It also does not need to predict the paths of travel of everything around it. It's enough for it to slow down or stop, especially at such low speed. If it hits something or fails to recognise an obstacle, it's not really a big deal. In comparison, lives can be lost in the car and there will be enquiries and legal battles.
I would put good money on camera-based automation delivering the best product at the end of the day, but requiring an order of magnitude more research and development than a comparatively "dumb" lidar setup. The lidar setup will have better immediate results, but probably will never reach the 99.999% effectiveness that everyone is chasing, and will probably never be safe at higher speeds.
It is. And it's a nice feature that at least the luxury car makers are iterating on. The problem is that it isn't very interesting for folks who don't want to own a car and/or want a robo-driver to take them door-to-door all the time. A real freeway autopilot feature is still a nice convenience/safety feature but you still need a licensed sober driver for the endpoints of the trip.
Much higher stakes and much less response time. There are plenty of city drivers who get confused and, well, pull over or even just stop. It's irritating, but a sensible thing to do.
On a freeway, that's terrifying.
It _seems_ like lots of parallel systems, each with the capability to reduce speed or stop could cope with city traffic about as well as a tourist. but, uh, I guess $2.5B says otherwise.
People manage to drive with eyes only - and with a far more restricted perspective than a ring of cameras around the car.
I do think Musk was (in part) wrong about LIDAR - I imagine that it could provide a valuable source of supplemental data to be used in training and classification.
The relative safety and performance of autonomous anything is very, very hard to measure - there are so many variables.
Even on identical systems, different parameters will create totally different outcomes.
Maybe it has 'sudden movements' because it's actually a lot safer and designed to react to things that Waymo would just plow right through. Maybe they're using crappy non-production sensors. Maybe they are entirely relying on instruments and not indirect data like GPS. I have no idea really, other than to say: these are very sophisticated systems, and given the obvious opacity of the work, it's going to be hard to tell what's what.
Absolutely. Great comment; however, it's probably worth pointing out that the research and trial sensors are going to be high-end, while the finished product will use the cheapest available commercial sensors.
> That's really bad. While there are no uniform standards for what a "bad experience" entails, my general sense is that Waymo and Cruise (widely considered first and second) are doing 10-100x better by that metric.
For that matter, every 3 or 30 miles would also be really bad in my book.
Everyone has made disappointingly little progress on full self-driving in the sense of driving little Johnny from the house to soccer practice by himself.
There are possibly opportunities on freeways but anyone who expects to be able to get robo-Ubers door-to-door for decades is likely to be disappointed.
> Everyone has made disappointingly little progress on full self-driving in the sense of driving little Johnny from the house to soccer practice by himself.
Here's a video of Waymo CTO Dmitri Dolgov claiming that he's perfectly comfortable putting his children in a Waymo car without a driver: https://youtu.be/fZHDvKw0QTA?t=952. Whether he's bluffing or not, we don't know.
It will possibly be safe enough under some well-tested limited circumstances/locations. I'm skeptical that translates into something made generally available.
It blows my mind to see people write things like this. We have two companies, Cruise and Waymo, who have built nothing but toy cars, and they are considered in the lead, while Tesla has years of real-world driving data from cars in actual manufacturing production that receive over-the-air updates, recognize real-world objects, and can change lanes automatically. But Waymo is ahead?
I’m open to being convinced that I’m not correct here, but it’s really difficult for me to see how Tesla is anything but 1000x ahead of everybody else.
It reminds me of people who build great models in a Jupyter Notebook, but have no clue about the data engineering that goes into serving those models (90% of the work).
You can't drive a Tesla in "self-driving" mode in non-highway traffic. Solving Adaptive Cruise Control (ACC) and Automated Lane Centering (ALC) on highway traffic are much easier tasks than what Waymo and Cruise are solving. You can buy a device from comma.ai that gives you 90% of Tesla's functionality for a few hundred dollars.
I can’t drive a Waymo or Cruise car at all. So they may as well not exist. But Tesla is supposed to be releasing self-driving on city streets soon. And let’s make sure we are clear here, you don’t need to be on a highway to use Autopilot.
Do you own a Tesla? We previously had a car with ACC/ALC and I don’t find it to be even close to the same thing, but that’s me.
Comma.ai's device is great. It's better than anything anybody else (Waymo/Cruise included) has put out there.
> But Tesla is supposed to be releasing self-driving on city streets soon.
Soon as it was in 2016 [1], 2018 [2] in Feb 2019 [3], Mar 2020 [4], or July 2020 [5]?
They have been hyping this up for years, so why should I still believe claims that it will soon be here™? Not only that, but they have already advertised and sold "Full Self-Driving" to people who can't use it yet; no beta, no alpha, it just doesn't exist and is a paid feature. How far is this from fraud?
I think this is regulatory. I have seen multiple articles like the attached, which have caused Tesla to basically handicap its own Autopilot. As another commenter said, I have no idea how a company with a car is not ahead of a company without one.
Tesla didn't support stop signs or traffic lights until a few months ago but Waymo did that years ago. And AFAIK Tesla still can't recognize stopped vehicles on the highway and has no public plan to fix that.
Waymo hasn't produced a single production car. I doubt their software works on random stop signs in Cleveland. Until I see them on the road, I'm OK watching incremental improvement happen. Being first doesn't matter.
> And AFAIK Tesla still can't recognize stopped vehicles on the highway and has no public plan to fix that.
Can you elaborate on what you mean by this? I do have a Tesla (without full self-driving) and it comes to a complete stop when cars in front of me stop on the highway.
I guess the question is how you rank a prototype that's actually autonomous against a production car that's far from autonomous. IMO manufacturing cars sounds a lot easier than autonomy. There's also the fact that Waymo's goal is only autonomy within areas that have been mapped in hyper-detail while Tesla apparently wants to drive anywhere. How do you rank more progress towards a smaller goal vs. less progress towards a more ambitious goal?
I don’t give it much rank because it doesn’t matter to me. I’d rather have good, incremental progress toward a larger goal for sure. In my day-to-day I can use a Tesla. I have no use for Waymo so the progress isn’t relevant. Will Waymo map every street in hyper detail, or just Mountain View and New York? If I don’t care about the underlying goal, what reason do I have to care about progress toward that goal?
> “ Just before the crash, a large vehicle — a sport utility vehicle or pickup truck — changed lanes in front of him, the driver told the NTSB.”
If there is a stationary car and I turn on autopilot as I approach my car doesn’t ram into it. If I turn autopilot on when I’m stationary and the car in front of me is stationary it does not drive into the other car. If autopilot is on and a car is stopped at a red light in front of me my car does not slam into that car. I do this on a daily basis. Do you?
I don't drive a Tesla, but I'm guessing that's done via image classifiers that recognize car rear ends, which have failed Tesla drivers on stopped fire trucks, police SUVs, Chinese garbage trucks, trailers crossing the freeway, etc.
The radar returns on stationary objects are discarded, AFAIK, because poor radar resolution makes it impossible to precisely locate objects both horizontally and vertically, so roadside and overhead signs get mistaken for cars.
Common auto radars have limited vertical discrimination, and this is partly intrinsic due to radar wavelengths.
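A toy sketch of why those stationary returns get filtered: Doppler radar measures closing speed, so anything stationary in the world (a stopped truck, but equally an overhead sign) closes at exactly the ego vehicle's speed, and with poor angular/vertical resolution the two are hard to tell apart. The function and values below are illustrative, not any manufacturer's actual pipeline:

```python
# Illustrative filter: drop radar returns that are stationary in the
# world frame, since low-resolution radar can't distinguish a stopped
# vehicle from an overhead sign. Names and numbers are made up.

def filter_returns(returns, v_ego, tol=0.5):
    """Keep only returns that are moving in the world frame.

    Each return is (range_m, closing_speed_mps). The target's world
    speed is v_ego - closing_speed; drop it if that's approximately 0.
    """
    kept = []
    for rng, closing in returns:
        world_speed = v_ego - closing
        if abs(world_speed) > tol:  # moving object: keep it
            kept.append((rng, closing))
    return kept

v_ego = 27.8  # ~100 km/h
returns = [
    (40.0, 27.8),  # stopped fire truck: dropped, looks like a sign
    (60.0, 27.8),  # overhead sign: dropped
    (35.0, 5.0),   # lead car doing ~82 km/h: kept
]
print(filter_returns(returns, v_ego))  # only the moving lead car survives
```

The uncomfortable consequence is visible in the example: the stopped fire truck and the overhead sign produce identical closing speeds, so a filter coarse enough to suppress sign false-positives also suppresses the truck.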
I can't say I've come across these cases (a stopped fire truck on a highway, for example) where Autopilot has driven me straight into something. I'm also paying attention, as you should.
To throw you a bone, I do think that the names of the software lead people to believe that it's something more than what it is. Full self-driving is going to do things like swerve to avoid an obstacle in the middle of the road and it probably doesn't do that right now. But if we go back to the original post, the discussion was about comparisons to Waymo and Cruise. I'm not sure I trust their "more advanced systems" to do that either, and on top of that they haven't made a single production car that works in Erie, Pennsylvania and drives in the snow, changes lanes automatically, and navigates on and off ramps.
Tesla is making progress toward general self-driving, which is far and away more valuable in my opinion. Even if that progress is incremental. I look at it as more of a safety feature and something to make driving on the highway much easier.
If we want to take this a step further, I'm actually not a fan of self-driving cars because it makes it too easy to drive everywhere, and I view driving as a problem. We should be walking.
This seems pretty accurate, $8k for something that was called "Full self driving" that was supposed to be close to ready years ago and instead won't be ready for the lifetime of the car. This would have been a class-action lawsuit if Tesla customers weren't so spectacularly loyal.
I just drove from Dallas to southwestern Colorado and back. Of that, I estimate I drove ~15% of the total miles. The rest was the car driving itself.
I see people keep stating this, but I'm not so sure about it. Do we know what critically important data Tesla cars collect at scale, are able to send back to the mothership, and who pays for that bandwidth/traffic? To my mind, most of the useful information (video recordings or pictures of moments when the AI driver wasn't sure about things and/or a human intervened) seems like it would take a lot of bandwidth to send back from moving cars.
Yes, according to Karpathy. The cars upload data (typically still images) which are then labeled by humans and used as training data to improve the models.
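The fleet-data loop described above can be sketched as a trigger: upload a frame only when the model is unsure or a human takes over, rather than streaming everything back. The function name and threshold here are invented for illustration, not Tesla's actual system:

```python
# Sketch of trigger-based fleet data collection: flag a frame for
# upload when the classifier is low-confidence or the driver
# intervened -- the frames worth sending home and labeling.
# should_upload and the 0.6 threshold are hypothetical.

def should_upload(class_probs, driver_intervened, threshold=0.6):
    """Return True when a frame is worth uploading for labeling."""
    top = max(class_probs.values()) if class_probs else 0.0
    return driver_intervened or top < threshold

confident = {"car": 0.95, "sign": 0.03, "debris": 0.02}
ambiguous = {"car": 0.40, "sign": 0.35, "debris": 0.25}

print(should_upload(confident, driver_intervened=False))  # False
print(should_upload(ambiguous, driver_intervened=False))  # True
print(should_upload(confident, driver_intervened=True))   # True
```

Filtering at the car is also what keeps the bandwidth question raised above tractable: only the rare ambiguous or intervention frames ever leave the vehicle.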
The impression I got from GTC a few years ago was that they're probably doing synthetic graphics simulations, based on edge cases identified by real world data.
You don't really need 1000x real examples of a scenario, if one still image provides enough information for you to mimic it in simulation. (E.g. toppled barrel, weathered paint color and reflective stripe, in front of exit guard rail)
I don’t have the link, but watch Karpathy’s recent talk on this topic on YouTube. He goes into detail about how this works. I’m not sure what their approach was years ago but it has changed significantly since.
There's a community of reverse engineers on Twitter that have demonstrated most of Musk's claims about their data pipelines aren't true. Check out greentheonly's Twitter.
>> Uber’s vehicles had also been having what the company refers to as a bad experience—such as a sudden jerk or a potentially dangerous movement—every one-third of a mile on average.
Pretty sure my wife would claim that I'm worse than that...