
I like that this thread points out that even humans have difficulty with that construction sometimes. I think we're holding Google's language model to a higher standard than humans here. I remember "learning" thousands of bits of trivia like that from people who had misinterpreted something they read and then misstated it.

Of course Google has already been putting often-incorrect summaries/factoids in its search infoboxes for a few years now.



It's not a matter of "its vs it's" in this case, but the very existence of the word in the sentence:

"NASA’s Webb Takes Its First-Ever Direct Image of Distant World"

It doesn't matter if one misspells "its". You know what it means, and it largely defines this sentence. A failure to parse such a relatively simple construct doesn't bode well.


You're of course correct, but perhaps I should have focused more on the concepts of skipping words when reading, misremembering, and "reading what you want to read" - those traits are extremely common, if not universal at some level, among human readers as well.


Well yeah, when I make a tool, I want it to do its job correctly. If it doesn't, I throw it out. If a human keeps messing up, I do the same. A human who messes up confidently in their interview probably won't even get hired...


Sure, but this seems analogous to creating a claw hammer, showing off how it can be used to drive screws, and then saying the hammer isn't doing its job correctly when the screws aren't driven properly.

I think chat technology like this is an incredible tool, but I don't think it's being judged fairly: its usefulness is not as some kind of oracle or advisor expected to provide correct or logical information. That seems so orthogonal (if not diametrically opposed) to its actual function that it really feels like we're being trolled by things like "Galactica". But I'm much more (cautiously) optimistic about the potential use for the technology in web search, which has never been logical or "correct" and has always required critical thinking on the part of its users.

Perhaps there should be more of a disclaimer that the things it says are not and cannot be construed to be factual, no matter how verisimilitudinous.



