Hacker News

I wasn't thinking that there are no roles that can be automated, rather that there are some that can't be.

> Specifically in regards to adversarial input, humans are often the weakest link in terms of process security

I have yet to encounter a (healthy) person who looks at a photo of static and mistakes it for a cat.



How about a photo of a dress where some people say it's blue and some say it's gold?


Optical illusions indeed highlight limitations in human perception. However, the dress illusion seems to me far less of a problem than mistaking noise for an object.

More relevant, however, is that we humans can understand that we're faced with an optical illusion and we can make adjustments accordingly. We have formed the concept of an "optical illusion" and we simply place "the dress" in that category. A machine needs to be trained specifically on adversarial examples in order to be able to detect them. Once you come up with a different class of adversarial examples, it will continue to fail to detect them. There is no understanding there, just more and more refined pattern matching.
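The "mistaking noise for an object" failure mode has a simple structural cause: a classifier can only rank the categories it knows, so it assigns *some* label to anything, including pure static. A minimal sketch (a toy nearest-centroid "classifier" on synthetic data; the classes, means, and dimensions are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "cat vs. car" classifier: one centroid per class, learned
# from clean synthetic training vectors clustered around each mean.
cats = rng.normal(loc=0.2, scale=0.05, size=(50, 64))  # class 0
cars = rng.normal(loc=0.9, scale=0.05, size=(50, 64))  # class 1
centroids = np.stack([cats.mean(axis=0), cars.mean(axis=0)])

def classify(x):
    # Distance to each centroid; a softmax over negative distances
    # produces "confidences" that are forced to sum to 1.
    d = np.linalg.norm(centroids - x, axis=1)
    p = np.exp(-d) / np.exp(-d).sum()
    return int(p.argmax()), float(p.max())

# Pure static: neither a cat nor a car, but the model has no
# "none of the above" option, so it must pick one anyway.
noise = rng.uniform(0.0, 1.0, size=64)
label, confidence = classify(noise)
print(label, round(confidence, 3))
```

The point is not that the toy model is a good classifier, but that its output space contains only the trained categories: static always comes back labeled "cat" or "car" with at least 50% confidence, because rejecting the input was never an option.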

Does a machine that can match any pattern actually "understand"? I would say no. But these are already philosophical considerations :D


> More relevant however is that we humans can understand that we're faced with an optical illusion and we can make adjustments accordingly.

Broadly speaking, yes. At the same time, that's not what happened in 2015. The dress produced so much polarizing content, with people deeply entrenched in their beliefs. They might have recognized it as an optical illusion, but they refused to make adjustments.

> A machine needs to be trained specifically on adversarial examples in order to be able to detect them. Once you come up with a different class of adversarial examples, it will continue to fail to detect them. There is no understanding there, just more and more refined pattern matching.

Moving away from image recognition examples, isn't that exactly what happens with humans predicting whether an email is a phishing attempt? I remember reading here on Hacker News this week about phishing tests at GitLab; the thread had a lot of comments about testing and training employees to spot adversarial emails. Some companies are more successful than others. It is a complicated problem; otherwise we would have solved it already. But it's the same principle, because phishers keep coming up with different ways of tricking people, and some people will fail to detect them.


I would say that there are indeed many examples of things that are hard for humans to categorize. Sometimes there isn't even a way to categorize things perfectly (there may be a fuzzy boundary between categories). It is, for example, really hard to train people to figure out whether a certain stock is going to be profitable, and there are many other such examples.

This doesn't mean that the kind of thinking that goes on in the human mind is the same as the pattern-matching that goes on in an ANN (for example). Think about how people learn to talk. It's not like we expose infants to the Wikipedia corpus and then test them on it repeatedly until they learn. There are structures in the brain that have a capacity for language - not a specific language, but language as an abstract thing. These structures are not the same as a pre-trained model.

The truth is I don't know enough about cognitive science to properly express what I'm thinking, but I'm pretty sure it's not just pattern matching :D


But what about if a person sees a cat but accidentally presses the dog button because they were distracted?

(To your point, though, I agree that machines can make strange errors, raising trust issues. My experience is that ML is useful in cases like recommendations or search results where a person can interact with predictions rather than being a complete replacement)


There is no button involved. If you look at a cat (one that is within your field of vision, in good lighting, etc.), you will understand it's a cat, without mistake. Most certainly you won't mistake it for a square full of static, or for a car, or for something else.



