The aid to explainability seems at least somewhat compelling. Understanding what a random forest did isn't always easy, and if what you want isn't the model itself but a closed form of what the model computes, this could be quite useful. That's especially nice when a hundred input dimensions interact nonlinearly in a million ways. More likely, though, I'd use it when I don't want to find a pencil and derive the closed form of what I'm trying to do by hand.
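To make the idea concrete, here's a minimal sketch of fitting a closed-form surrogate to a black-box model's predictions. Everything here is illustrative: the tiny decision-stump ensemble is a hypothetical stand-in for a trained random forest, and the "closed form" recovered is just a least-squares polynomial fit to what the ensemble outputs, not the method from the article.

```python
import numpy as np

# Stand-in for a trained black-box model: an ensemble of decision stumps
# (a crude 1-D "random forest") built from training data y = x^2 on [0, 1].
rng = np.random.default_rng(0)
x_train = np.linspace(0.0, 1.0, 200)
y_train = x_train ** 2
thresholds = rng.uniform(0.0, 1.0, size=50)

def black_box(x):
    # Each stump predicts the mean of y_train on its side of the threshold;
    # the ensemble averages all stumps, like a (very small) forest.
    preds = []
    for t in thresholds:
        left = y_train[x_train <= t].mean()
        right = y_train[x_train > t].mean()
        preds.append(np.where(x <= t, left, right))
    return np.mean(preds, axis=0)

# Surrogate: least-squares quadratic fit to the model's *predictions*,
# giving a closed-form approximation of what the ensemble computes.
x_probe = np.linspace(0.0, 1.0, 500)
coeffs = np.polyfit(x_probe, black_box(x_probe), deg=2)
surrogate = np.poly1d(coeffs)

# Maximum disagreement between the closed form and the black box.
err = np.max(np.abs(surrogate(x_probe) - black_box(x_probe)))
print(coeffs, err)
```

The surrogate's coefficients are something you can read, differentiate, or plug into other math, which is the explainability win the comment is pointing at; the fidelity of the closed form to the original model is a separate question you'd have to check, as `err` does here.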

