Most people do see productivity gains from using LLMs correctly. Myself included. Just because some people don't learn how to use them correctly doesn't mean LLMs aren't helpful. It's like when internet search came out and a handful of laggards tried it once, failed to get the exact result they wanted, and declared "internet search is useless". Using a tool wrong isn't evidence that the tool is useless; it's evidence that you need to learn how to use the tool.
Hallucinations are literally the finger in the dam. If these models could sense when an output is well-founded and simply say "I don't know" otherwise… say goodbye to your job.
Googling a question and finding an incorrect answer every now and then doesn't mean that Googling is useless. It means that you need to learn how to use Google. Same with an LLM: trust but verify. Use it in scenarios where you aren't relying on it to be the trusted fact-checker. It excels at brainstorming, not at stating facts.
How many times do you think I've heard that over the past three decades? And you know what? They've been right every time, except for this one little fact:
The machine cannot make you give a shit about the problem space.
It's a real issue! But only for people who built the habit of typing the question into the address bar, clicking the first Stack Overflow link, and copy-pasting the first answer. Maybe break that habit first?