They’re making fun of your typo, but you’re right. Pretty much every software job in 5 years will be an AI job. This ruffles a lot of feathers, but ignoring the truth will only hurt your career.
I think the era of big tech paying fat stacks to a rather large number of technical staff will start to wane as well. Better hope you have top AI paper publications and deep experience with every part of using LLMs/whatever future models there are, because if not, you’ll be in for a world of pain if you got used to cushy tech work and assume it will keep going in a world where AI is advancing so fast.
Have you ever worked in tech and had to deal with the typical illiteracy and incompetence of management and execs?
If LLMs get that good, the brick wall these orgs will hit is what will really ruffle feathers. Leadership would have to be replaced by their technical workers for the company to continue existing. There's simply not enough information in the very high-level plain-English requirements they're used to thinking in. From both a theoretical and a practical perspective, you very likely cannot feed that half-assed junk to any LLM, no matter how advanced, and expect useful results. This has already been the case human-to-human for all of history.
Either that or nothing happens, which is the current state of things. Writing code is not even 10% of the job.
I feel the issues with AI are similar to the issues with AI cars.
AI car won't ever reach its destination in my city. Because you need to actively break the rules few times if you want to drive to the destination. There's a stream of cars and you need to merge into it. You don't have an advantage, so you need to wait until this stream of cars will end. However you can wait for that for hours. In reality you act aggressively and someone will allow you to join. AI will not do that. Every driver does that all the time.
So when AI tries to integrate into human society, it'll hit the same issues. You send an email to a manager and it gets lost because the manager doesn't feel like answering it. You have to seek him out, face him, and ask your question so he has nowhere to run. An AI has no physical presence, nor does it have the aggression this requires. It'll just helplessly send emails around, which head straight into spam.
There's gonna be a lot of context implied in project docs based on previous projects, and the LLM won't ask hard questions back to management during the planning process. It will just happily return naive answers from its Turing tarpit.
No offense intended to anyone, but we already see this when there are other communication problems due to language barrier or too many people in a big game of corporate telephone. An LLM necessarily makes that problem worse.
Previous projects can be fed into LLMs either via the context window (those are getting huge now) or via fine-tuning… but of course it's not a magic wand like some expect it to be.
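For what it's worth, the context-window route can be as crude as concatenating the old project docs into the prompt. A minimal sketch, assuming the openai Python client (>=1.0) and the gpt-4-1106-preview model mentioned further down; the file layout and the example requirement are made up for illustration:

    from pathlib import Path
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Hypothetical docs from earlier projects that carry the implied context.
    prior_docs = "\n\n".join(
        p.read_text() for p in sorted(Path("prior_projects").glob("*.md"))
    )

    response = client.chat.completions.create(
        model="gpt-4-1106-preview",
        messages=[
            {"role": "system",
             "content": "You are a planning assistant. Ask clarifying questions "
                        "when the requirements are ambiguous."},
            {"role": "user",
             "content": f"Previous project docs:\n{prior_docs}\n\n"
                        "New requirement: add SSO support to the billing portal."},
        ],
    )
    print(response.choices[0].message.content)

Of course this only surfaces context that someone actually wrote down; the implied context that never made it into the docs stays missing, which is the whole problem upthread.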
People keep being disappointed it's not as smart as a human, but everyone should look how broad it is and ask themselves: if it were as good as a human, why would companies still want to employ you? What skills do you have which can't be described adequately in writing?
LLMs are cool and will continue to change society in ways we cannot readily predict, but they are not quite that cool. GPT3 has been around for a little bit now and the world has not ended or encountered a singularity. The models are expensive to run both in compute and expertise. They produce a lot of garbage.
I see the threat right now to low-paid writing gigs. I’m sure there’s a whole stratum of those they have wiped out, but I also know real live humans still doing that kind of work.
What developers may use in five years is a better version of Copilot trained on existing code bases. They will let developers do more in the time they have, not replace them. Open source software has not put us all out of jobs. I foresee the waning of Big Tech for other reasons.
> GPT3 has been around for a little bit now and the world has not ended or encountered a singularity.
And they won't, right up until they do. The reason why is that…
> The models are expensive to run both in compute and expertise.
…doesn't extend to the one cost that matters: money.
Imagine a future AI that beats graduates and not just students. If it costs as much per line of code as 1000 gpt-4-1106-preview[0] tokens, the cost of rewriting all of Red Hat Linux 7.1 from scratch[1] is less than 1 million USD.
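The arithmetic roughly checks out. A back-of-the-envelope sketch, assuming the ~30 million physical SLOC estimate for Red Hat Linux 7.1 from the "More Than a Gigabuck" study and roughly $0.03 per 1000 output tokens for gpt-4-1106-preview; both figures are my assumptions, not taken from the footnotes:

    # Back-of-the-envelope only; both numbers below are assumptions.
    sloc = 30_000_000                  # ~30M physical SLOC in Red Hat Linux 7.1
    usd_per_1000_output_tokens = 0.03  # gpt-4-1106-preview output pricing
    cost_per_line = usd_per_1000_output_tokens  # the "1000 tokens per line" premise

    total = sloc * cost_per_line
    print(f"${total:,.0f}")  # -> $900,000, i.e. under 1 million USD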
I like financial breakdowns like this. The thing an LLM cannot do is all the decision making that went into that. Framing the problem is harder to quantify, and is almost certainly an order of magnitude more work than writing and debugging the code. But a sufficiently good LLM should be able to produce code cheaper than humans. Maybe with time and outside sources of truth, better.