
Do you do any work with uncertainty estimation with neural networks? There are many ways to estimate uncertainty depending on your application, such as training an ensemble of models or applying dropout at test time to produce a range of predictions, which you can then use to derive bounded error estimates.
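
To make the test-time-dropout idea concrete, here's a minimal PyTorch sketch (the network, layer sizes, and sample count are all illustrative assumptions, not from any specific system):

    import torch
    import torch.nn as nn

    # Illustrative regression model containing dropout layers; any network
    # with nn.Dropout behaves the same way under this procedure.
    model = nn.Sequential(
        nn.Linear(16, 64), nn.ReLU(), nn.Dropout(p=0.2),
        nn.Linear(64, 64), nn.ReLU(), nn.Dropout(p=0.2),
        nn.Linear(64, 1),
    )

    def mc_dropout_predict(model, x, n_samples=50):
        # Run several stochastic forward passes with dropout left on and
        # use the spread of the outputs as an uncertainty estimate.
        model.train()  # keeps dropout active at inference time
        with torch.no_grad():
            preds = torch.stack([model(x) for _ in range(n_samples)])
        return preds.mean(dim=0), preds.std(dim=0)

    x = torch.randn(8, 16)  # a batch of 8 made-up inputs
    mean, std = mc_dropout_predict(model, x)

The standard deviation across passes is what gives you the "range of predictions" to threshold or calibrate against.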


That sort of misses the point. The uncertainty estimate is itself a direct result of the quality of the training data you give the model and of how well that data represents reality.

And even if the inputs are great, it's hard to know what to do with a prediction that carries 99% confidence. The next great unlock that society is waiting for in AI applies to industries where failure is not an easily acceptable outcome.

Take self-driving cars, for example. Even if we can objectively prove that current AI models can drive on current roads in current conditions with a lower fatality rate than humans (this is debatable anyway, but let's assume it), what do we do when the system knows it's not confident? If we assume the driver hasn't been paying attention during the ride so far, and a scenario comes up that the AI is uncertain about... the human now has likely mere seconds (at most) to take in their surroundings, assess the risk, and take corrective action. If we assume the driver has been paying attention during the ride, then what was the point of the AI? Moreover: what if it thought it was confident, but it was still wrong? Who do we blame? How do we mitigate future instances of it? What were once societal problems we could blame on fallible humans are now obscure technology problems we can't introspect. That's a scary place for a lot of people to be.

Basically, we've reached the level of AI where it can be used as a backup to humans and protect us from royally fucking something up. But the next big unlock will come when it can be the primary actor. It's hard to imagine how we'll get there without the AI actually understanding what it's processing in some real way.

Edit: rephrased the 3rd paragraph.


I agree, the question of how to deal with high uncertainty is definitely task dependent. In your example of a self-driving car the answer is not clear. However, for many other tasks it's entirely reasonable to have the AI make decisions and then escalate to human review when uncertainty is high, or simply not act on it. In tasks like these, the AI can be the primary task-doer with the human as the backup.
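
As a rough sketch of that workflow (the threshold and the fields returned are made up for illustration):

    def triage(prediction, uncertainty, threshold=0.15):
        # Act on the prediction when the model is confident enough,
        # otherwise queue the case for human review. The threshold is
        # arbitrary here and would be tuned per task in practice.
        if uncertainty > threshold:
            return {"action": "defer_to_human", "estimate": prediction}
        return {"action": "act_automatically", "prediction": prediction}

    # e.g. with the mean/std from a set of stochastic forward passes:
    # decision = triage(mean.item(), std.item())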


The problem with this is that NNs are, in general, overconfident in their predictions, even when they are wrong. This is a well-known problem in the AI/ML literature. Using ensembles of overconfident predictors is not the same as getting an unbiased estimate of the uncertainty.
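
For example, averaging the softmax outputs of an ensemble just blends the members' (possibly miscalibrated) confidences, roughly like this sketch, where the models and input are placeholders:

    import torch
    import torch.nn.functional as F

    def ensemble_confidence(models, x):
        # Average the predictive distributions of the ensemble members and
        # report the resulting top-class confidence. If every member is
        # overconfident, this average is still biased high rather than an
        # unbiased uncertainty estimate.
        with torch.no_grad():
            probs = torch.stack([F.softmax(m(x), dim=-1) for m in models])
        mean_probs = probs.mean(dim=0)
        confidence, predicted_class = mean_probs.max(dim=-1)
        return predicted_class, confidence

    # placeholder "ensemble" of five untrained linear classifiers
    models = [torch.nn.Linear(16, 4) for _ in range(5)]
    pred, conf = ensemble_confidence(models, torch.randn(8, 16))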


Yes, definitely. Though there have been some interesting papers on this recently, using different methods to approximate Bayesian posteriors in NNs. One of the most recent (Mi et al., 2019; https://arxiv.org/abs/1910.04858) benchmarks a few such methods -- infer-dropout, infer-transformation, and infer-noise -- which look promising across different applications and neural-network settings (black-, 'grey'-, and 'white'-box).
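
As a loose illustration of the infer-noise flavour of those methods (this only shows the general idea of training-free perturbation at inference time, not the paper's exact formulation; the noise scale and sample count are guesses):

    import torch

    def infer_noise_uncertainty(model, x, sigma=0.01, n_samples=20):
        # Inject small Gaussian noise at inference time, with no retraining,
        # and use the spread of the outputs as an uncertainty surrogate.
        # The paper's formulation differs in detail (e.g. perturbations can
        # be injected at intermediate layers); this sketch only shows a
        # black-box variant applied to the input.
        model.eval()
        with torch.no_grad():
            outputs = torch.stack(
                [model(x + sigma * torch.randn_like(x)) for _ in range(n_samples)]
            )
        return outputs.mean(dim=0), outputs.std(dim=0)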



