I think a key difference is that humans very rarely sound convincing when talking about subjects they have no clue about.
I've seen the hallucination rate of LLMs improve significantly; if you stick to well-covered topics, they probably do quite well. The issue is they often have no tells when making things up.