
And artificial strength isn't muscle attached to ligaments, yet factory machines outperform humans.

We should define intelligence by its outputs rather than by the methodology used to achieve them; otherwise the only acceptable definition of intelligence will become "a human brain".

If you go by methodology instead, you can comfortably say airplanes don't fly, they just glide, because you've defined flight as the flapping of wings. Yet airplane flight is more efficient than if we tried to achieve it the biological way.

Similarly, it seems apparent we'll be able to solve many tasks we attribute to "intelligence" using methods different from the biological ones. We shouldn't limit ourselves to comparing methodology when deciding whether the goal has been achieved. From what I've seen, it's more productive to define by output, and to increase specificity there if the result is still unsatisfying.



When people write text, they use it to represent ideas and objects and the relations between them. When GPT writes text, it fits a statistical model. It has no concept of meaning. Although the results may sometimes be superficially similar, under the hood the two processes are nothing alike. GPT does not think. Humans do. At least sometimes.
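
To make "fits a statistical model" concrete: roughly speaking, the model assigns probabilities to possible next tokens given the text so far and samples from them. Here's a deliberately toy sketch in Python (the words and probabilities are made up; a real model uses learned weights and conditions on far more context than the last word):

    import random

    # Toy sketch, not GPT itself: the vocabulary and probabilities below are
    # invented. The point is only that "writing" here means repeatedly
    # sampling the next token from a probability distribution.
    next_token_probs = {
        "the": {"cat": 0.7, "dog": 0.3},
        "cat": {"sat": 0.6, "ran": 0.4},
        "dog": {"sat": 0.5, "ran": 0.5},
        "sat": {"down": 1.0},
        "ran": {"away": 1.0},
    }

    def sample_next(prev_word):
        dist = next_token_probs[prev_word]
        words, weights = zip(*dist.items())
        return random.choices(words, weights=weights)[0]

    text = ["the"]
    for _ in range(3):
        text.append(sample_next(text[-1]))
    print(" ".join(text))  # e.g. "the cat sat down"

Nothing in that loop involves a concept of "cat" or "sitting"; it's just weighted dice.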

Thinking is an internal process. It is not just determined by its output. When we roll dice we do not say the dice decided to land on three and seven.


>When people write text they use it to represent ideas and objects and relations between ideas and objects.

That's not true, at least a lot of the time. Watch an episode of Paw Patrol, go see the latest Marvel movie, or listen to number 6 or 7 on the top 40 charts.

A huge amount of what we consider "creative" endeavours that only a human can produce has nothing to do with meaning or experience; it's simply a group of people trying to create something that "sounds right" or looks cool, or, in the case of a lot of kids' content, is merely inoffensive to the largest possible segment of society.

I'm not saying that GPT is going to win the Pulitzer Prize any time soon, but the line between "human text has meaning, machine text doesn't" is quite blurry and process-dependent.


It’s also worth considering the scope of intended and realised meanings in a work, and whether or not we’d consider that dynamic as one worth pursuing in the future.

Kind of like the old joke about English teachers looking too deeply into classic literature, isn’t it? I might not intend to relate and extend layers of context with the line, “John sat squarely on the desk,” but a reader observing that line as part of a surrounding text could infer meaning and purpose beyond what I originally intended.

With AI-generated text, that dance of writers filling (or deliberately avoiding) their work with meaning, and readers lifting the meaning out of the work, becomes much less human in and of itself. Or does it? Will schools of the future choose AI-written stories and film for students to learn from, over material written by people?



