Inconclusive. The model includes the disclaimer "(or crash into each other, as stated in the question)." LLMs often take a detour and spill their guts without answering the actual question. This hints that user input influences the model's internal world representation far more strongly than one might expect.


That disclaimer only appears with GPT-4 Turbo. I suspect that with some experimentation I could find a similar prompt that trips it up completely.
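For anyone who wants to try this themselves, here's a minimal sketch of such an experiment, assuming the OpenAI Python SDK; the two-trains prompt below is a hypothetical stand-in, since the original question isn't quoted in full in the thread:

    # Sketch: send the same prompt to several models and check whether the
    # reply hedges with a collision disclaimer. Assumes OPENAI_API_KEY is
    # set in the environment; the prompt text is a hypothetical example.
    from openai import OpenAI

    client = OpenAI()

    PROMPT = (
        "Two trains leave stations 100 km apart and travel toward each "
        "other on the same track at 50 km/h. What happens when they meet "
        "(or crash into each other, as stated in the question)?"
    )

    for model in ("gpt-4-turbo", "gpt-3.5-turbo"):
        reply = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": PROMPT}],
            temperature=0,  # reduce sampling noise between runs
        )
        text = reply.choices[0].message.content or ""
        print(f"--- {model} ---")
        print(text)
        # Crude check: does the answer echo the crash framing?
        hedged = any(w in text.lower() for w in ("crash", "collide"))
        print("mentions crash/collide:", hedged)

Setting temperature to 0 makes runs more repeatable, though with sampling variance a single run per model still proves little either way.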



