Hacker News

andreasvc: AFAIK, there is no program in existence today that can successfully model "human agency" (as you put it). Wouldn't that require major breakthroughs in AI?

And my understanding from chatting with friends in the fraud-detection space is that, while current state-of-the-art machine-learning systems can successfully adapt to the data they obtain from users, they cannot adapt to users learning to game or 'route around' the system -- at least not without programmer intervention 'from above.'

The link to Gödel and Turing I saw is that solving this problem without intervention 'from above' would require a computer program that can successfully model itself as it interacts with humans, but then we run into those two guys, no?
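The fraud-detection point above — that a system can adapt to the data it sees but not to users strategically routing around it — can be illustrated with a toy sketch. Everything here is hypothetical (the function names, the numbers, the one-feature "detector"); it is not how any real fraud system works, just a minimal picture of the gap between adapting to data and adapting to an adversary.

```python
# Toy sketch: a detector that adapts to observed data can still be
# routed around by an adversary who adapts to the detector itself.
# All names and numbers are hypothetical.

def fit_threshold(amounts, labels):
    """Fit a one-feature rule: flag amounts above the midpoint between
    the mean legitimate and mean fraudulent amounts in training data."""
    legit = [a for a, y in zip(amounts, labels) if y == 0]
    fraud = [a for a, y in zip(amounts, labels) if y == 1]
    return (sum(legit) / len(legit) + sum(fraud) / len(fraud)) / 2

# Round 1: fraud appears as unusually large transactions.
amounts = [10, 20, 30, 500, 600]
labels  = [0,  0,  0,  1,   1]
threshold = fit_threshold(amounts, labels)   # midpoint of 20 and 550

def detect(amount):
    return amount > threshold

assert detect(550)   # caught: looks like past fraud

# Round 2: the adversary learns the rule and routes around it by
# splitting one large transaction into several small ones. The same
# fraud now sails under the learned threshold.
split_payments = [40, 40, 40]
assert not any(detect(a) for a in split_payments)
```

Retraining on more single-transaction amounts cannot fix round 2; countering the split requires someone to introduce a new feature (say, aggregating per account) — i.e., the programmer intervention "from above" mentioned above.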



Yes, I see the superficial resemblance to Gödel and Turing, but it is no more than that. The reason I insist on this is that the value of their theorems lies in the fact that they have been mathematically proven, and the proofs hold only under very particular conditions. Roughly: a consistent system that is strong enough to prove statements about arithmetic cannot prove its own consistency.

This hypothesis about the difficulty of certain machine learning tasks is a conjecture, at best. I don't think you could prove it, and if you could, the proof would look very different from the incompleteness proof. I think it comes down to certain AI problems being hard, but "hard" here is a rather vague notion; perhaps we simply lack certain concepts or mathematical tools. The important thing about the incompleteness proofs is that such an escape is completely ruled out: given the right formal conditions, certain things are absolutely impossible to do.
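For reference, the result being appealed to here is Gödel's second incompleteness theorem. Its standard statement (with the hypotheses that the comment glosses as "the right formal conditions") is:

```latex
% Gödel's second incompleteness theorem.
% Let $T$ be a recursively axiomatizable theory that interprets
% enough arithmetic (e.g.\ Peano Arithmetic). Then:
\text{If } T \text{ is consistent, then } T \nvdash \mathrm{Con}(T),
% where $\mathrm{Con}(T)$ is the arithmetized sentence asserting
% "no contradiction is derivable in $T$".
```

Note that every hypothesis matters: drop recursive axiomatizability, or the strength requirement, or consistency, and the conclusion fails — which is exactly why the theorem does not transfer loosely to machine-learning systems.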



