Er, that's not how arguments work. What we can't know is that those trends will continue, so it's on you to demonstrate that they will, despite evidence suggesting they won't.
As far as what you linked, Altman is saying the same thing I'm saying:
> That doesn’t mean that OpenAI won't continue to try to make the models bigger, it just means they will likely double or triple in size each year rather than increasing by many orders of magnitude.
This is exactly my point; doubling or tripling the size will be possible, but it won't result in a doubling of performance. We won't see a GPT-5 that's twice as good as GPT-4, for example. The jump from 2 to 3 was exponential. The jump from 3 to 4 was also exponential, though less so. The jump from 4 to 5 will follow that flattening curve, according to Altman, which means exactly what he said in my quote: the returns will continue to diminish. For a 2-to-3 type jump, GPU technology would have to completely transform in capability, and there's no indication we've found that innovation.
My argument was that improvement from scale would continue. There is absolutely evidence suggesting this.
GPT-4 can perform nearly all tasks you throw at it with well above average human performance. There literally isn't any testable definition of intelligence it fails that a big chunk of humans wouldn't also fail.
You seem to keep missing the fact that we do not need an exponential improvement from 4.
> Gpt-4 can perform nearly all tasks you throw at it with well above average human performance.
It can't even generate flashcards from a textbook chapter, because it can't load the entire chapter into memory. Heck, it doesn't even know what textbook I'm talking about; I have to provide the content!
It fails constantly at real world coding problems, and often does so silently. If you tried to replace a software developer with GPT 4, you would be left with a gaping productivity hole where that developer you replaced once existed. The improvement GPT 5 would have to provide is multiple orders of magnitude in order for this to be a realistic proposition.
I use it daily and know better than to trust its output.
>It can't even generate flashcards from a textbook chapter, because it can't load the entire chapter into memory. Heck, it doesn't even know what textbook I'm talking about; I have to provide the content!
Okay...? That's a context window problem, and you could manage it by sending the textbook in chunks.
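To be concrete about what I mean by chunking, here's a rough sketch: split the chapter into overlapping pieces that each fit in the context window, run each piece through the model, and combine the results. The `make_flashcards` function here is a hypothetical stand-in for whatever model call you'd actually use, not a real API:

```python
def chunk_text(text: str, chunk_size: int = 3000, overlap: int = 200):
    """Yield overlapping character-based chunks of the text.

    The overlap keeps ideas that straddle a chunk boundary from
    being cut in half completely.
    """
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        yield text[start:start + chunk_size]

def make_flashcards(chunk: str) -> list[str]:
    # Hypothetical stand-in: in practice this would be an LLM call
    # with a prompt like "generate flashcards from this passage".
    return [chunk[:40]]

chapter = "some long textbook chapter text ... " * 500
cards = []
for chunk in chunk_text(chapter):
    cards.extend(make_flashcards(chunk))
```

Obviously this only handles ideas local to each chunk; anything spanning the whole chapter needs a second summarization pass over the combined output.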
>The improvement GPT 5 would have to provide is multiple orders of magnitude in order for this to be a realistic proposition.
So by your own words, in order to use the LLM usefully, I need to manually manage it? Do you know what I don’t have to manually manage? A person.
I can feed a person a broad, complex, or even half-formed idea and they can actively troubleshoot until the problem is resolved, further monitoring and tweaking their solution so the problem remains resolved. LLMs can’t even come close to doing that.
You’re proving my point for me; it’s a tool, not a developer. Zero jobs are at risk.
Also not for nothing, but no, sending the textbook in chunks doesn’t work as the LLM can’t then synthesize complex ideas that span the entire chapter. You have to compose a set of notes first, then feed it the notes, and even then the resulting flashcards are meaningfully worse than what I could come up with myself.