ChatGPT does not “write well” unless your standard is some set of statistical distributions for vocabulary, sentence length, phrase structure, …
Writing well is about communicating ideas effectively to other humans. To be fair, throughout linguistic history it has been easier to appeal to an audience’s innate sense of authority by “sounding smart”. Actually being smart in using the written word to hone the sharpness of a penetrating idea is not particularly evident in LLMs to date.
If you're using it to write in a programming language, you often actually get something that runs (provided your specifications are good - or your instructions for writing the specifications are specific enough!).
If you're asking for natural language output, though, you need to check it by hand and watch it like a hawk. It'd be nice if there were some way to test-suite natural language writing.
The last time I asked it to write something in a programming language, it put together a class that seemed reasonable at first blush, but on review I found it did not do what it was supposed to do.
The tests were even worse. They exercised the code, tossed the result, then essentially asserted that true was equal to true.
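The anti-pattern described above can be sketched roughly like this (the function and test names are hypothetical, chosen for illustration, not taken from the actual generated code):

```python
def add(a, b):
    # Hypothetical function under test.
    return a + b

def test_add_useless():
    # The pattern described: exercises the code,
    # discards the result, then asserts a tautology.
    # This passes no matter what add() returns.
    add(2, 3)
    assert True == True

def test_add_meaningful():
    # What the test should do instead:
    # assert on the actual result.
    assert add(2, 3) == 5
```

The useless version will stay green even if the implementation is completely broken, which is what makes this failure mode so easy to miss in a quick review.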
When I told it what was wrong and how to fix it, it instead introduced some superfluous public properties and a few new defects without correcting the original mistake.
The only code I would trust today's agents with is so simple I don't want or need an agent to write it.
I think it depends on what models you are using and what you're asking them to do, and whether that's actually inside their technical abilities. There are not always good manuals for this.
My last experience: I asked Claude to code-read for me, and it dug out some really obscure bugs in old Siemens Structured Text source code.
A friend's last experience: they had an agent write an entire Christmas-themed adventure game from scratch (that ran perfectly).
Like most other tools, these take some experience to use well. What you’re describing suggests a lack of that, assuming you used a good coding model or a reasonably recent frontier model.