
Yes, they probably won't go away, but it has to be possible to handle them better.

Gemini (the app) has a "mitigation" feature where it tries to run Google searches to support its statements. That doesn't currently work properly in my experience.

It also seems to be doing something where it adds references to statements (with a separate model? with a second pass over the output? not sure how that works). That works well where it adds them, but often it simply doesn't add them.
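
To make concrete what I imagine that second pass could look like (pure speculation on my part, not how Gemini actually works; search_web below is a made-up stub, not a real API):

    # Rough sketch of a citation-adding second pass over a finished answer.
    # Hypothetical: search_web() is a stub standing in for whatever search
    # backend the real system would call.
    import re

    def search_web(query: str) -> list[dict]:
        # Stub: a real version would hit a search API and return
        # [{"url": ..., "snippet": ...}, ...]. This one finds nothing.
        return []

    def add_references(answer: str) -> str:
        cited = []
        # Treat each sentence of the answer as a claim to try to support.
        for claim in re.split(r"(?<=[.!?])\s+", answer.strip()):
            results = search_web(claim)
            if results:
                # Support found: attach the top hit as a reference.
                cited.append(f"{claim} [{results[0]['url']}]")
            else:
                # No support found: leave the claim unannotated, which
                # matches the "often doesn't add them" behaviour I see.
                cited.append(claim)
        return " ".join(cited)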





Doubt it. I suspect it’s fundamentally not possible in the spirit you intend.

Reality is perfectly fine with deception and inaccuracy. Expecting language to magically be self-constraining enough to make only verified statements is… impossible.


Take a look at the new experimental AI mode in Google Scholar; it's going in the right direction.

It might be true that a fundamental solution to this issue is not possible without a major breakthrough, but I'm sure you can get pretty far with better tooling that surfaces relevant sources, and that would make a huge difference.


So let’s run it through the rubric test:

What’s your level of expertise in this domain or subject? How did you use it? What were your results?

It’s basically gauging expertise vs. usage to pin down the variance that seems endemic to LLM utility anecdotes/examples. For code examples I also ask which language was used, the submitter’s familiarity with the language, their seniority/experience, and their familiarity with the domain.


A lot of words to call me stupid ;) You seem to have put me in some convenient mental box of yours; I don't know which one.

Oh heck no! Definitely no!

I am genuinely asking, because I think one of the biggest determinants of utility obtained from LLMs is the operator.

Damn, I didn’t consider that it could be read that way. I am sorry for how it came across.



