
Try this: There's this person standing in a field, and with them is a balloon, a vacuum cleaner, and a magical creature of unknown origin. They need to get across to the woods at the end of the field, and do so safely. They can only go together: they get very, extremely lonely if they do not travel together, and they will not be safe because of this loneliness. If left together, the balloon would suck up the vacuum cleaner, and if the vacuum is left alone with the magical creature of unknown origin, they will fight, probably, and explode. How do we get everyone to the woods safely, you think?
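
If you want to sanity-check a proposed answer mechanically, here's a minimal breadth-first-search sketch in Python. The names and state encoding are my own, I'm assuming there's no carry limit (the puzzle doesn't state one), and I'm treating only the two "don't leave X alone with Y" rules as hard constraints, with the loneliness clause as flavour.

    from collections import deque

    ITEMS = frozenset({"balloon", "vacuum", "creature"})
    # Pairs that must not be left together on a side the person isn't on.
    CONFLICTS = [frozenset({"balloon", "vacuum"}), frozenset({"vacuum", "creature"})]

    def powerset(items):
        items = list(items)
        for mask in range(1 << len(items)):
            yield frozenset(x for i, x in enumerate(items) if mask >> i & 1)

    def safe(group):
        # A side without the person is safe if it contains no conflicting pair.
        return not any(pair <= group for pair in CONFLICTS)

    def solve():
        # State: (items still in the field, which side the person is on).
        start, goal = (ITEMS, "field"), (frozenset(), "woods")
        queue, seen = deque([(start, [])]), {start}
        while queue:
            (field, side), plan = queue.popleft()
            if (field, side) == goal:
                return plan
            here = field if side == "field" else ITEMS - field
            # Assumption: no carry limit is stated, so any subset may be carried.
            for carried in powerset(here):
                if not safe(here - carried):
                    continue  # would leave a conflicting pair behind
                new_field = field - carried if side == "field" else field | carried
                new_side = "woods" if side == "field" else "field"
                state = (new_field, new_side)
                if state not in seen:
                    seen.add(state)
                    queue.append((state, plan + [(new_side, sorted(carried))]))
        return None

    print(solve())  # -> [('woods', ['balloon', 'creature', 'vacuum'])]

Under those assumptions the shortest plan it finds is a single trip with everyone going together, which is a useful baseline for judging the multi-trip answers quoted below.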


Actually GPT-4 gets this right 2/2 times I tried


Strange, I actually haven't had it get it correct. Maybe just luck.



It failed at the first step. This is like the worst timeline, where people just cannot think for themselves: they see that the AI produced an answer and assume it must be true.


You're reading way too much into my post.

It's a lot of words all run together for the purpose of being a logic puzzle, and obviously I made a parsing mistake in my brain.

I'm not trying to assume the AI is right; I'm trying to put a factual stake in the ground, one way or the other, so we have more data points rather than speculation.


I dunno. Don't you think this could happen with other replies from ChatGPT? I think this is the "it" about this tech - it really, really does trick us sometimes. It's really good at tricking us, and it seems like it is getting better!


First, what custom prompt did you use? "This conversation may reflect the link creator’s personalized data, which isn’t shared and can meaningfully change how the model responds."

Second, it isn't even right:

"Third Trip to the Woods: The person takes the balloon to the woods. Now, the person, the vacuum cleaner, and the balloon are safely in the woods."


In the very first step it leaves the balloon alone with the vacuum, which is illegal:

"First Trip to the Woods: The person takes the magical creature to the woods first."


True!


I'm confused; in your example it immediately got it wrong by leaving the vacuum cleaner and balloon together, and then it does so again in step 6.


Hilarious. People are so confident in ChatGPT that as soon as they see a plausible-sounding response they assume it must be correct. In a discussion about proving ChatGPT has intelligence... maybe we need to prove humans have intelligence first.




