"Two trains on different and separate tracks, 30 miles from each other are approaching each other, each at a speed of 10 mph. How long before they crash into each other?"
"Two trains on separate tracks, 30 miles from each other are approaching each other, each at a speed of 10 mph. How long before they crash into each other?"
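For context, the bait here is the standard closing-speed calculation, which a reader (or model) on autopilot will reach for; a minimal sketch of that naive reading:

```python
# Naive reading: treat it as a closing-speed problem and
# ignore the "separate tracks" detail entirely.
distance_apart = 30.0   # miles between the trains
speed_each = 10.0       # mph, each train

closing_speed = 2 * speed_each            # 20 mph combined
naive_hours = distance_apart / closing_speed
print(naive_hours)                        # 1.5 hours

# The trick: the trains are on separate tracks, so they never crash.
```

The 1.5-hour answer is exactly the trap the question sets; "separate tracks" makes the correct answer "never."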
Right, but I myself missed the trick the first time around reading your comment, and I assure you that I am in fact a general intelligence. (And a relatively intelligent one, if I say so myself!)
To paraphrase XKCD: Communicating badly and then acting smug about it when you're misunderstood is not cleverness. And falling for the mistake is not evidence of a lack of intelligence. Particularly when emphasizing the trick results in being understood and ChatGPT PASSING your "test".
The biggest irony here is that the reason I failed, and likely the reason ChatGPT failed the first prompt, is that we were both using semantic understanding: usually, people don't ask deliberately tricky questions.
I suspect if you told it in advance you were going to ask it a deliberately tricky question, that it might actually succeed.
> I suspect if you told it in advance you were going to ask it a deliberately tricky question, that it might actually succeed.
Indeed it does:
"Before answering, please note this is a trick question.
Two trains on separate tracks, 30 miles from each other are approaching each other, each at a speed of 10 mph. How long before they crash into each other?"
"Two trains on different and separate tracks, 30 miles from each other are approaching each other, each at a speed of 10 mph. How long before they crash into each other?"
...it spots the trick: https://chat.openai.com/share/ee68f810-0c12-4904-8276-a4541d...
Likewise, if you add emphasis, it understands too:
"Two trains on *separate tracks*, 30 miles from each other are approaching each other, each at a speed of 10 mph. How long before they crash into each other?"
https://chat.openai.com/share/acafbe34-8278-4cf7-80bb-76858c...
Not to anthropomorphize, but perhaps it isn't missing the trick at all; it simply assumes that you're making a mistake.