> If you ask it a question where the training data (or input data = context) either didn't include the answer, or where it was not obvious how to get the right answer, that will not (unfortunately) stop it from confidently answering!
I haven't found this to be the case in my experience. I use ChatGPT-4, and it often tells me when it doesn't know something or doesn't have enough information.
If you haven't used GPT-4, I recommend signing up for a month. It is next level, way better than 3.5 (reportedly around 10x the parameter count, though OpenAI hasn't disclosed the actual figure). (No, I'm not being paid to recommend it.)