Exactly, and it's the main reason I've stopped using GPT for serious work. LLMs start to break down and inject garbage toward the end, so I usually abandon the prompt before the work is complete and fix it up manually afterward.
GPT stores the incomplete chat and treats it as truth in memory. And it's very difficult to get it to un-learn something that's wrong. You have to layer new context on top of the bad information and it can sometimes run with the wrong knowledge even when corrected.
Reminds me of one time asking ChatGPT (months ago now) to create a team logo with a team name. Now anytime I bring something up, it asks me whether it has to do with that team name. That team name wasn't even chosen. It was one prompt. One time. Sigh.