We know with reasonable certainty that an LLM trained on enough chess games will eventually develop an internal model of chess. The only question is whether GPT-4 got that far.
So can humans. And nothing stops the probabilities in a probabilistic model from approaching or reaching 0 or 1 unless the architecture explicitly prevents it.
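To make that concrete, here is a minimal sketch (not from the original comment) using a softmax, the usual output layer of an LLM. With finite logits a softmax never yields exactly 0 or 1, so this is a case where the architecture does prevent *reaching* those values, but nothing prevents *approaching* them: as the gap between logits grows, the winning probability gets arbitrarily close to 1, and in floating point it eventually rounds there.

```python
import math

def softmax(logits):
    """Exp-normalize a list of logits into probabilities."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# As the logit gap grows, the top probability approaches 1.
# Mathematically it never equals 1, but past a gap of ~40
# it is indistinguishable from 1 in double precision.
for gap in [1, 5, 10, 20, 50]:
    p = softmax([float(gap), 0.0])[0]
    print(f"logit gap {gap:>2}: p = {p:.12f}")
```

Running this prints probabilities climbing from about 0.731 at a gap of 1 to a value that displays as 1.000000000000 at a gap of 50, which is the "approaching or reaching" distinction in one loop.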