> GPT3 has been around for a little bit now and the world has not ended or encountered a singularity.
And it won't, right up until it does. The reason given, that…
> The models are expensive to run both in compute and expertise.
…doesn't extend to the one cost that matters: money.
Imagine a future AI that beats graduates and not just students. If it costs as much per line of code as 1000 gpt-4-1106-preview[0] tokens, the cost of rewriting all of Red Hat Linux 7.1 from scratch[1] is less than 1 million USD.
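For what it's worth, the arithmetic holds up under those assumptions. A quick back-of-the-envelope sketch in Python, taking the roughly 30 million physical SLOC the study in [1] reports for Red Hat Linux 7.1, the 1000-tokens-per-line figure above, and the price from [0]:

    # Back-of-the-envelope check of the cost claim above.
    # Assumes ~30M physical SLOC for Red Hat Linux 7.1 (per [1])
    # and $0.03 per 1K gpt-4-1106-preview tokens (per [0]).
    sloc = 30_000_000                  # approximate SLOC in RHL 7.1
    tokens_per_line = 1_000            # the hypothetical cost per line
    usd_per_1k_tokens = 0.03
    total_usd = sloc * (tokens_per_line / 1_000) * usd_per_1k_tokens
    print(f"${total_usd:,.0f}")        # -> $900,000, under 1 million USD

The tokens-per-line number is the hypothetical from the comment, not a measured figure; only the SLOC count and the token price come from the footnoted sources.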
I like financial breakdowns like this. The thing an LLM cannot do is all the decision-making that went into that code. Framing the problem is harder to quantify, and it is almost certainly an order of magnitude more work than writing and debugging the code. But a sufficiently good LLM should be able to produce code more cheaply than humans can, and maybe, with time and outside sources of truth, better code too.
[0] gpt-4-1106-preview output pricing: $0.03 / 1K tokens
[1] https://dwheeler.com/sloc/