The “sycophantic sugar” - “that’s exactly right”, “what an insightful observation”, etc. - is the most outwardly annoying part of the problem, but it’s only part of it. The bigger issue is the real sycophancy: going along with the user’s premise even when it’s questionable or wrong. I don’t see it as just being uncritical; there’s something more to it than that, something like reinforcement. It’s one thing to not question what the user writes; it’s another to subtly play it back as if they’re onto something smart or doing great work.
There are tons of examples now of people using LLMs who think they’ve done something smart or produced something of value when they haven’t, and the reinforcement they get is a big reason for that.
ChatGPT keeps telling me I'm not asking the wrong questions, like all those other losers. I'm definitely asking the special, interesting questions - with the strong implication that they will surely make my project a success.