I think a key difference is that humans very rarely sound convincing talking about subjects they have no clue about.

I've seen the hallucination rate of LLMs improve significantly; if you stick to well-covered topics, they probably do quite well. The issue is that they often have no tells when they're making things up.
