Hacker News

It has MANY flaws to be clear, and it's uncertain if those flaws can even be fixed, but it's definitely not "completely unusable".


It's weird watching people fixate on the most boring, unimaginative, dead-end use of ChatGPT possible.

"Google queries suck these days" — yeah, they suck because the internet is full of garbage. Adding a slicker interface to it won't change that, and building one that's prone to hallucinating on top of an internet full of "pseudo-hallucinations" is an even worse idea.

-

ChatGPT's awe-inspiring uses are in the category of "style transfer for knowledge". That's not asking ChatGPT to be a glorified search engine, but instead deriving novel content from the combination of hard information you provide and soft direction that would be impossible for a search engine.

Stuff like describing a product you're building and then generating novel user stories. Then applying concepts like emotion: "What 3 things about my product annoy John?", "How would Cara feel if the product replaced X with Y?" In cases like that, hallucination enables a completely novel way of interacting with a computer. "John" doesn't exist, the product doesn't exist, but ChatGPT can model extremely authoritative statements about both while readily integrating whatever guardrails you want: "Imagine John actually doesn't mind #2; what's another thing about it that he and Cara might dislike, based on their individual use cases?"
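The "hard information plus soft direction" pattern above can be sketched as a tiny prompt-building helper. Everything here is invented for illustration: the function name, the product, and the prompt wording are one plausible phrasing, not a recommended template.

```python
# Hypothetical helper for the persona workflow described above.
# The product, personas, and prompt wording are all illustrative assumptions.

def persona_prompt(product_description, persona, question):
    """Combine hard information (the product) with soft direction (a persona question)."""
    return (
        f"Product: {product_description}\n"
        f"Persona: {persona}\n"
        f"Question: {question}\n"
        "Answer in character, staying consistent with earlier answers."
    )

prompt = persona_prompt(
    "A calendar app that auto-schedules focus time",
    "John, a manager who lives in back-to-back meetings",
    "What 3 things about this product annoy you?",
)
```

The guardrail follow-ups ("Imagine John actually doesn't mind #2...") would just be further turns in the same conversation, which is why the final line asks the model to stay consistent with its earlier answers.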

Or, more specifically to HN, providing code you already have and trying to shake out insights. The other day I had a late night and tried a test: I intentionally wrote a feature in a childishly verbose way, then used ChatGPT to scale the terseness up and down. I can Google "how to shorten my code", but only something like ChatGPT could take actual hard code and scale it up or down readily like that: "Make this as short as possible", "Extract the code that does Y into a class for testability", "Make it slightly longer", "How can function X be more readable?" In 30 seconds it had exactly what I would have written if I had spent 10 more minutes working on the architecture of that code.

To me, the current approach people are taking to ChatGPT and search feels like the definition of trying to hammer a nail with a wrench. Sure, it might do a half-acceptable job, but it's not going to show you what the wrench can do.


I think ChatGPT is good for replacing certain kinds of searches, even if it's not suitable as a full-on search replacement.

For me it's been useful for taking highly fragmented and hard-to-track-down documentation for libraries and synthesizing it into a coherent whole. It doesn't get everything right all the time even for this use case, but even the 80-90% it does get right is a massive time saver and probably surfaced bits of information I wouldn't have happened across otherwise.


I mean, I'm totally on board if people go in with the mentality of "I search for hard-to-find stuff and accept 80-90% accuracy."

The problem is that most of what ChatGPT can do is suddenly getting drowned out by "I asked for this incredibly easy Google search and got nonsense", because the general public is not willing to accept 80-90% on what they imagine to be very obvious searches.

The way things are going, if there's even a 5% chance of asking it a simple factual question and getting a hallucination, all the oxygen in the room is going to go towards "I asked ChatGPT an easy question and it tried to gaslight me!"

-

It makes me pessimistic because the exact mechanism that makes it so bad at simple searches is what makes it powerful at other use cases, so one will generally suffer for the other.

I know there was recently a paper on getting LMs to use tools (for example, instead of trying to solve math with the LM itself, the LM would recognize a formula and fetch a result from a calculator). Maybe something like that will be the salvation here: the same way we currently get "I am a language model..." guardrails, they'll train ChatGPT to recognize strictly factual requests and fall back to Google Insights-style quoting of specific resources.
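The dispatch idea in that parenthetical can be sketched in a few lines. This is a toy illustration of the routing concept, not how the paper (or any product) actually implements it: detect a trailing arithmetic expression and hand it to a deterministic calculator; everything else goes to the free-form model. All names here are invented.

```python
# Toy sketch of tool use: route arithmetic to a calculator instead of
# letting the model guess. The regex, helper names, and routing rule
# are illustrative assumptions.
import ast
import operator
import re

_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
        ast.Mult: operator.mul, ast.Div: operator.truediv}

def _calc(node):
    # Safely evaluate an arithmetic AST: numbers and + - * / only.
    if isinstance(node, ast.Expression):
        return _calc(node.body)
    if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
        return node.value
    if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
        return _OPS[type(node.op)](_calc(node.left), _calc(node.right))
    raise ValueError("not plain arithmetic")

def answer(query, model=lambda q: "(free-form model answer)"):
    """Route a trailing arithmetic expression to the calculator, else to the model."""
    match = re.search(r"[\d\.\s\+\-\*/\(\)]+$", query)
    if match:
        try:
            return str(_calc(ast.parse(match.group().strip(), mode="eval")))
        except (ValueError, SyntaxError):
            pass
    return model(query)
```

The appeal of this split is exactly the comment's point: the hallucination-prone generator never touches the part of the request that has one objectively correct answer.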


> 80-90% it gets right

Worse is really better, huh.


In this context, anyway. 80-90% of what ChatGPT dredges up being correct is better than 100% of what I find "manually" being correct, because I'm not spelunking all the nooks and crannies of the web that ChatGPT is, and so I'm not pulling anywhere near the volume that ChatGPT is.



