> still creates very good abstractions and classifiers.
My point is that "good" and "bad" are not objective here, but depend on human use-cases.
Now to be clear: I'm not disagreeing with you! These are good abstractions, for humans. They let us communicate concepts easily, which is great! But they might not be the best abstractions in every circumstance.
For example, I recall reading an article claiming that AI is better than humans at spotting breast cancer in medical images (which is essentially interpreting abstract blobs as cancerous or not). The main reason seems to be that it is not constrained by human perceptual biases.