Off topic, but my favourite thing about GPT is the way it sometimes shamelessly lies.
I asked "What is GPT3" and it told me it was developed by Microsoft. I asked what did OpenAI have to do with GPT3, it said "nothing". I pinned it to the wall with a link to the Wikipedia page and it acknowledged Microsoft invested in GPT-3....
ChatGPT is a bullshit engine. Maybe someday someone will figure out how to marry it to data sources so it puts out useful information, but today it's mostly useful as a fiction generator.
It seems to do better with very detailed questions, but I agree, it can easily spit out BS in an absurdly assertive manner.
That said, I had a US DoD SBIR (Small Business Innovative Research) solicitation I was interested in, but there was a concept proposed and at first I couldn't figure out what the author was getting at. However, I framed the problem in ChatGPT and basically said "how would you approach this problem?" After a few (3?) Q&A rounds it responded with a shockingly accurate response that helped me realize a.) the actual problem set that the author was trying to convey, and b.) a remarkably sound and innovative approach to tackle the problem. Almost like a blueprint that one just needs to follow by putting the pieces together and Bam! - done.
For all the BS I get out of it, there are these few instances where I'm like... holy shit.
That and Google seems to be getting worse. Maybe it's just relative perception syndrome now that ChatGPT's out there.
I asked it what season of Community is referred to as “the year of the gas leak”. It answered season 2. I said “that’s incorrect”. It apologized and said that it’s really season 3. “Still wrong”. “I apologize, the year of the gas leak is season 4. Multiple episodes in season four refer to the gas leak”. I pointed out that it was part right, and it confidently corrected itself, now the year of the gas leak is season 5.
Fans of the show know that the correct answer is season 4, but there's a running joke starting in season 5, about Dan Harmon not being the showrunner for a year, that whatever happened in season 4 can be ignored. I'm pretty sure I found the Reddit thread that would have created the confusion.
Its job is to generate a convincing response. It doesn't "lie" and it doesn't "tell truth" either. It just does what it's asked to. It may use memorized facts if that makes it easier to generate a convincing response, but it's completely optional.
> Generating a convincing response and telling lies, or not, are not related.
My point exactly. GPT does the former and doesn't concern itself with the latter.
"Lie" implies an intent. There is no lie there, these are perfectly fine answers to your questions. They're just unrelated to the model, as it has no real concept of "I". You can imagine someone answering these questions that way, and that's all that matters - the model did its job well.
I asked "What is GPT3" and it told me it was developed by Microsoft. I asked what did OpenAI have to do with GPT3, it said "nothing". I pinned it to the wall with a link to the Wikipedia page and it acknowledged Microsoft invested in GPT-3....
Asked it again just now; it's being honest now.