We have a clue as to what it is (these are just functions at the end of the day), but we don't know how the model's learned parameters relate to the problem domain. I saw a talk (maybe by Jeff Dean?) a while back that discussed building models that could explain why certain features weigh more heavily than others. With more approaches targeted at understanding, these algorithms could start to seem less like a semantically opaque computational exercise and more in line with how we humans think about things.
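
One established technique in that spirit is permutation importance: shuffle one feature's values and measure how much the model's accuracy drops, since a large drop suggests the model leaned heavily on that feature. Here's a minimal sketch using scikit-learn; the dataset and model are illustrative assumptions on my part, not anything from the talk:

    # Permutation importance: perturb each feature and watch the score.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    # Toy dataset and model, chosen purely for illustration.
    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Shuffle each feature in turn on held-out data; the mean drop in
    # accuracy is that feature's importance to this particular model.
    result = permutation_importance(model, X_test, y_test,
                                    n_repeats=10, random_state=0)
    ranked = sorted(zip(X.columns, result.importances_mean),
                    key=lambda pair: -pair[1])
    for name, mean_drop in ranked[:5]:
        print(f"{name}: {mean_drop:.4f}")

It's still a far cry from a real explanation of the learned parameters, but it at least ties the model's behavior back to quantities in the problem domain.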

