
> Language is fuzzy in exactly the same way.

No. Language can be fuzzy, yes, but not at all in the same way. I have just explained that.

> LLMs can create factually correct responses in dozens of languages using endless variations in phrasing.

So which is it? Is it about good prompting, or can you have endless variations? You can’t have it both ways.

> You fixate on the kind of questions that current language models struggle with

So you’re saying LLMs struggle with simple, factual, verifiable questions? Because that’s all the example questions were. If they can’t handle those (and I agree they handle them poorly), what’s the point?

By the way, that’s a single example. I have many more, and you can find plenty of others online. Do you also think Gemini’s ridiculous answers, like suggesting glue on pizza, are about bad prompting?

> You think the probabilistic nature of language models is a fundamental problem that puts a ceiling on how smart they can become, but you're wrong.

One of your mistakes is thinking you know what I think. You’re engaging with a preconceived notion you formed in your head instead of with the argument I actually made.

And LLMs aren’t smart, because they don’t think. They are an impressive trick for sure, but that does not imply cleverness on their part.


