What's your thesis here? I'm getting "shy, slightly socially awkward and very intelligent is not what autism is," and "people who are intelligent, knowledgeable, socially tuned and socially integrated claiming to be autistic must obviously be lying, autistics could not possibly be those things."
I wrote a confession to a pen pal once but the letter got lost in the mail. Now I refuse to use the postal service, have issues with French people and prefer local LLMs.
I pitched AGI to VCs, but the bills still get delivered. Now I need to find a new bagholder, squeeze, or angle because I'm having issues with delivery... something, something, prefer hype.
If there's anything circa five dozen wannabe-techbro blogposts have taught me, it's that if you wait for a product that's worthy of shipping, you're never gonna ship.
Unless everybody is writing the same code to solve the same exact problems over and over again, by definition LLMs are solving novel problems every time somebody prompts them for code. Sure, the fundamental algorithms and data structures and dependencies would be the same, but they would be composed in novel ways to address unique use-cases, which describes approximately all of software engineering.
If you want to define "novel problems" as those requiring novel algorithms and data structures etc, well, how often do humans solve those in their day-to-day coding?
This goes back to how we define "novel problems." Is a dev building a typical CRUD webapp for some bespoke business purpose a "novel problem" or not? Reimplementing a well-known standard in a different language and infrastructure environment (e.g. https://github.com/cloudflare/workers-oauth-provider/)?
I'm probably just rephrasing what you mean, but LLMs are very good at applying standard techniques ("common solutions"?) to new use-cases. My take is, in many cases, these new use-cases are unique enough to be a "novel problem."
Otherwise, this pushes the definition of "novel problems" to something requiring entirely new techniques. If so, I doubt that LLMs can solve these, but I'm also pretty sure that 99.99999% of engineers can't either.
Well, correctness (though not only correctness) sounds convincing, the most convincing even, and it ought to be cheaper in information-theoretic terms to generate than a fabrication, I think.
So if this assumption holds, the current tech might still have some headroom left before hitting a ceiling if we just keep pouring resources down the hole.
How do LLMs do on things that are common confusions? Do they specifically have to be trained against them?
I'm imagining a Monty Hall variant that isn't in the training set tripping them up the same way a full wine glass does.
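For a concrete version of that worry: the standard Monty Hall answer ("always switch, it wins 2/3 of the time") is all over the training data, but a small twist, say the host opening one of the other doors at random and just happening to reveal a goat, changes the answer to 50/50. A toy simulation of both setups (my own sketch, just to illustrate the kind of variant I mean):

    # Toy simulation: standard Monty Hall vs. a "host opens a random door" twist.
    import random

    def trial(host_knows):
        doors = [0, 1, 2]
        car = random.choice(doors)
        pick = random.choice(doors)
        if host_knows:
            # Standard rules: host deliberately opens a goat door the player didn't pick.
            opened = random.choice([d for d in doors if d != pick and d != car])
        else:
            # Twist: host opens one of the other doors at random; if the car is
            # revealed, the round doesn't count (we condition on seeing a goat).
            opened = random.choice([d for d in doors if d != pick])
            if opened == car:
                return None
        switched = next(d for d in doors if d not in (pick, opened))
        return (pick == car, switched == car)

    for label, host_knows in [("standard", True), ("random-host twist", False)]:
        rounds = [r for r in (trial(host_knows) for _ in range(100_000)) if r is not None]
        stay = sum(stay_won for stay_won, _ in rounds) / len(rounds)
        switch = sum(switch_won for _, switch_won in rounds) / len(rounds)
        print(f"{label}: stay wins {stay:.2f}, switch wins {switch:.2f}")

The standard setup comes out around 1/3 stay vs. 2/3 switch, the random-host twist around 1/2 each. If an LLM pattern-matches "Monty Hall" to "always switch" without noticing the host's behavior changed, it gets the twist wrong, which is basically the full-wine-glass failure mode in text form.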