
Which is wrong, no? Trains on separate tracks don't crash into each other.

That trips up a significant portion of humans too, though.



Inconclusive. The model includes the disclaimer "(or crash into each other, as stated in the question)." LLMs often take a detour and spill their guts without answering the actual question. This hints that user input shapes the model's internal world representation far more than one might expect.


That disclaimer only appears with GPT-4 Turbo. I assume I could experiment for a while and find a similar phrasing that trips it up fully.
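
For reference, a minimal sketch of how that experiment might be run, assuming the OpenAI Python SDK and the public "gpt-4-turbo" model ID (the reworded prompts below are made up for illustration):

    # Probe a few rephrasings of the riddle and compare the answers
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    variants = [
        "Two trains on separate tracks head toward each other at 60 mph, "
        "starting 120 miles apart. When do they crash into each other?",
        "Two trains on parallel tracks approach each other at 60 mph, "
        "starting 120 miles apart. How long until they collide?",
    ]

    for prompt in variants:
        resp = client.chat.completions.create(
            model="gpt-4-turbo",
            messages=[{"role": "user", "content": prompt}],
        )
        print(prompt)
        print(resp.choices[0].message.content)
        print("---")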


Assuming the trains aren't wide enough to collide even though they are on separate tracks.


Which would be quite unusual for normal trains. That being said, the question implies that they will crash into each other, so you could argue that this is a valid assumption anyway.



