> But once you do that, it gets tricky to determine when you should ignore the "stationary object" and when you shouldn't.
That's not tricky. You never ignore it. A stationary object means that you will have to either stop or turn. You don't know which one yet, but you definitely know you'll have to take an action. If you haven't decided by some close point, you just pick one.
As I understand it, what's hard is determining whether that stationary object is directly in front, or off to the side. Cars are always passing stationary objects on the sides.
As I understand it, Tesla autopilot can't distinguish objects on the sides (such as road signs) from objects directly in front. And it must ignore road signs etc. So it ends up ignoring all stationary objects.
"Ignore" was probably the wrong word, but it is tricky. Remember that a vehicle is evaluating the environmental state at a given point in time to make decisions. Let's take the example of everyday driving on the freeway where you are following a car going the same speed as you. From your car's radar and camera perspective, the vehicle in front of you is stationary. It's not until you combine that information with additional information (your own velocity) and infer a more complicated state of affairs that you get a more complete picture. The camera and radar are both screaming "there's a stationary object right in front of us!", but you're still traveling at 60mph directly toward it.
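The point about relative vs. ground velocity can be sketched in a few lines. This is a toy illustration with made-up numbers, not how any real ADAS stack works: a radar reports velocity relative to the ego vehicle, so "stationary" from the sensor's point of view just means zero closing speed, not zero ground speed.

```python
def ground_velocity(radar_relative_mph: float, ego_speed_mph: float) -> float:
    """Convert a radar's relative-velocity reading to the ground frame."""
    return radar_relative_mph + ego_speed_mph

# Following a car at matched speed: the radar reads 0 mph relative...
lead_car = ground_velocity(radar_relative_mph=0.0, ego_speed_mph=60.0)
print(lead_car)  # 60.0 -- the "stationary" object is doing 60 in the ground frame

# A road sign: the radar reads -60 mph relative (closing fast)...
sign = ground_velocity(radar_relative_mph=-60.0, ego_speed_mph=60.0)
print(sign)  # 0.0 -- truly stationary in the ground frame
```

Both objects look identical to the sensor alone; only after folding in ego speed do they diverge.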
Combine that with the normal event of a garage wall or building directly in front of you while parking and we have three different "stationary object in front" situations that all have to be handled differently. In all three, the radar and camera are saying "stationary object ahead". But in the left-turn-with-wall situation you need to turn instead of stop. In the following-another-car situation you do nothing, and in the parking situation you need to stop instead of turn. And that's all without even starting to account for things like rain (opaque to some LIDAR and cameras = "we're literally crashing right now!" while radar reports nothing amiss). In order to decide what to do you have to pull in additional environment information, but then that's going to have similar caveats and conflicting reports to account for, so you need additional rules and information, etc. and now you've snowballed into a highly complex model with corner cases and potential bugs everywhere.
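To make the three-scenarios point concrete, here's a deliberately naive sketch (hypothetical thresholds and labels, nothing like a production planner) of how the same raw reading "stationary object ahead" maps to three different actions once ego velocity and maneuver context are folded in:

```python
def react(relative_mph: float, ego_mph: float, maneuver: str) -> str:
    """Toy decision rule for a 'stationary object ahead' report."""
    ground_mph = relative_mph + ego_mph
    if abs(ground_mph) > 1.0:      # object is moving in the ground frame
        return "follow"            # e.g. lead car at matched speed
    if maneuver == "turning":      # e.g. left turn past a wall
        return "turn"
    return "stop"                  # e.g. parking toward a garage wall

print(react(relative_mph=0.0,   ego_mph=60.0, maneuver="cruise"))   # follow
print(react(relative_mph=-15.0, ego_mph=15.0, maneuver="turning"))  # turn
print(react(relative_mph=-3.0,  ego_mph=3.0,  maneuver="parking"))  # stop
```

Even in this toy form, the correct action depends on state (`maneuver`) that no single sensor can supply, which is the snowballing-complexity point above.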
If it was easy we would have had self-driving cars a decade ago.
Don't make it sound like subtracting your own velocity is anything but trivial. Your whole first paragraph is playing semantics about the definition of the word "stationary". If the camera says you have zero relative velocity, that is not it saying "stationary", and you are not ignoring a "stationary" object.
> Combine that with the normal event of a garage wall or building directly in front of you while parking and we have three different "stationary object in front" situations that all have to be handled differently. In all three, the radar and camera are saying "stationary object ahead". But in the left-turn-with-wall situation you need to turn instead of stop. In the following-another-car situation you do nothing, and in the parking situation you need to stop instead of turn.
If the object is stationary, you turn-or-stop. If it's another car, it's really easy to measure that it's not stationary. If your sensors can't figure out the distance or velocity for multiple seconds in a row, your equipment should not be allowed to drive.
> highly complex model with corner cases and potential bugs everywhere
That's all in object detection, though. And none of that noise looks at all like a solid object at a specific location.
> If it was easy we would have had self-driving cars a decade ago.
I disagree. Even if you have a system that competently avoids stationary-object collisions, you're nowhere near a full self-driving car.
And while the overall problem of avoiding collisions has hard parts, none of the hard parts are in the "What do we do about a stationary object?" logic.
Subtracting your own velocity from what? I've been talking about individual sensor reporting with conflicting information and the complexity of decision making that results. A radar return from a rear bumper at relative zero miles an hour is exactly the same whether the car is actually in motion or not. Reference frame and velocity relative to the ground is up to your software algorithms to determine, not the sensors, and I've been trying to explain why that's non-trivial and "stationary" is always relative to something. You seem to be treating the entire system as an always-in-consensus whole, which it isn't.
I feel like we're just going in circles here. I could keep giving you examples of cases where sensors report conflicting information that makes decisions like turn-or-stop difficult or false-positive prone, and you'll keep insisting that it's just object recognition and you should always know to turn or stop. I'll stop here and stand by my original point - sensor limitations and immature algorithms are the reason for issues like the ones Autopilot has, and completely preventing these issues is not straightforward.
You said the camera/radar would be reading another car as "stationary", which is a velocity number of zero. Add/subtract your speed and you get the real speed.
A non-defective sensor suite is going to give you either distance or velocity, and you can use distance measurements to calculate velocity really easily.
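The "distance measurements give you velocity" claim is just a finite difference. A minimal sketch with hypothetical numbers:

```python
def closing_speed(d0_m: float, d1_m: float, dt_s: float) -> float:
    """Closing speed from two range samples; positive = gap shrinking."""
    return (d0_m - d1_m) / dt_s

# Range to an object dropped from 50 m to 48 m over 0.1 s:
print(closing_speed(50.0, 48.0, 0.1))  # 20.0 m/s of closing speed
```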
> A radar return from a rear bumper at relative zero miles an hour is exactly the same whether the car is actually in motion or not.
Doppler effect.
> "stationary" is always relative to something
Once you have a velocity number, no matter what it's relative to, it's trivial to convert it to any reference frame you want.
> I feel like we're just going in circles here. I could keep giving you examples of cases where sensors will report conflicting information
That's because such things are irrelevant to my argument. I'm only talking about how algorithms should handle objects that have already been found.
> I'll stop here and stand by my original point - sensor limitations and immature algorithms are the reason for issues like the ones Autopilot has; completely preventing these issues is not straight forward.
I fundamentally disagree that the navigation algorithms are difficult here. Sensor issues are huge, but your original scenario was based on the sensors working and navigation failing. That specific kind of navigation failure is utterly inexcusable. It is actually easy to force a car to either slow down or turn to avoid a known obstacle. That specific part should never ever ever fail. It doesn't matter how hard other parts are.
Maybe I'm not being clear on my core argument. Sensors give conflicting information in different scenarios. One sensor might read an obstacle to be avoided while another shows a clear path. Deciding what to do in these conflict situations is difficult. Saying it's easy is effectively saying that the hundreds of very intelligent engineers working on these systems are idiots because it looks like it should be a piece of cake. You're using similar logic to non-technical people who get angry when a developer can't give an accurate estimate of the time needed to implement a new feature - (s)he has the code and requirements right there, it should be easy, right?
I'm quite familiar with how vector math works. Your comments above seem to still be missing that I am talking about situations from the perspective of individual sensors, which have no ability to make judgments about different frames of reference. That only applies at the level of the full driving system. For example, to a radar sensor alone, there is no difference between a car parked 10ft in front of you while you are motionless relative to the ground and a car 10ft in front of you while you are both moving at 60mph relative to the ground. Both are "stationary" as far as the return signature is concerned. There will be no Doppler shift in either case. The driving system has to combine this with other sources of information for it to be useful, and that's where problems creep in.
> That's because such things are irrelevant to my argument. I'm only talking about how algorithms should handle objects that have already been found.
We're arguing two perspectives of the same higher-level opinion (algorithms are insufficiently developed to be safe enough for autonomous driving). What I'm trying to say is that there are a number of fuzzy steps to get from sensor readings to actual object detected, and then from actual object detected to "known obstacle" classification, as you put it. I don't think I'm going to be able to argue this case clearly enough in the comments here, so I'm going to add this to my longer article writing list. Thank you for the constructive debate on this.
> Saying it's easy is effectively saying that the hundreds of very intelligent engineers working on these systems are idiots because it looks like it should be a piece of cake.
Saying one very, very specific thing is easy is not the same as calling engineers idiots. It's bad to conflate "the system is hard" with "every single piece of the system is hard".
> The driving system has to combine this with other sources of information for it to be useful, and that's where problems creep in.
There are many places where integrating information can let errors creep in.
The car adding its own ground velocity in is just... not one of them.
> What I'm trying to say is that there are a number of fuzzy steps to get from sensor readings to actual object detected, and then from actual object detected to "known obstacle" classification, as you put it.
Agreed. But while some steps are fuzzy, some are easy. The chain from start to finish is fragile and difficult. But some of the individual links are rock-solid.
> I'm going to add this to my longer article writing list. Thank you for the constructive debate on this.
I'm not sure if we really got anywhere productive here, but good luck!
There's some confusion about what "stationary" means. I was using the external reference frame. So I called road signs etc stationary. But in the vehicle's reference frame, road signs etc have closing velocity.
Vehicles in front moving at the same speed aren't stationary in the external reference frame. They're stationary in the vehicle's reference frame. If one stops, though, it becomes stationary in the external reference frame, and has closing velocity in the vehicle's reference frame.
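The two frames described above can be written down directly (hypothetical numbers; ground frame = velocity relative to the road, vehicle frame = velocity relative to our own car, negative meaning closing on us):

```python
EGO_MPH = 60.0  # our own ground speed

def to_vehicle_frame(ground_mph: float) -> float:
    """Convert a ground-frame velocity into the ego vehicle's frame."""
    return ground_mph - EGO_MPH

print(to_vehicle_frame(0.0))   # road sign: -60.0, i.e. closing at 60 mph
print(to_vehicle_frame(60.0))  # lead car at our speed: 0.0, "stationary" to us
# After that lead car stops, its ground velocity is 0.0, so it reads
# exactly like the road sign: -60.0 and closing.
```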
You're not being pedantic - it's an important distinction. That's why I've been putting "stationary" in quotes the whole time. An individual sensor has no concept of a reference frame, only the broader ADAS/autonomous system does. That's why acting on object detection is non-trivial - individual sensors report "object" with no context and it's up to fallible algorithms to make sense of often conflicting information.
But then the car will have a lot of false positives, and it will stop now and then for every mundane thing that flies in front of the sensors. Every one of your customers will be pissed, because the car can't differentiate between debris on the road and a wall, showing the true limitation of the thing they call "Autopilot"...
Tesla seems to have figured that it is better to kill some of your customers than to piss every single one of them off...
If the car has the slightest ability to follow lanes, it will choose "turn" and not stop until it gets hopelessly lost. At which point it only stops because it was avoiding certain disaster.
The dichotomy between ignore and stop is a false one.