
I am sorry, but I thought it was a bold assumption that it has an internal model of chess?


Having an internal model of chess and maintaining an internal model of the game state of a specific given game when it's unable to see the board are two very different things.

EDIT: On re-reading I think I misunderstood you. No, I don't think it's a bold assumption to think it has an internal model of it at all. It may not be a sophisticated model, but it's fairly clear that LLM training builds world models.


Not that bold, given the results from OthelloGPT.

We know with reasonable certainty that an LLM trained on enough chess games will eventually develop an internal chess model. The only question is whether GPT-4 got that far.
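One empirical check is to measure how often the model's predicted continuation of a real game is actually legal in the current position. A minimal sketch, assuming a hypothetical predict_next_move(prefix_moves) wrapper around whatever model you're testing, and using python-chess for the rules:

    import chess

    def legal_move_rate(games, predict_next_move):
        """Fraction of model predictions that are legal moves.

        games: list of games, each a list of SAN moves ("e4", "e5", ...).
        predict_next_move: hypothetical callable taking the SAN moves so far
        and returning the model's predicted next move in SAN.
        """
        legal = total = 0
        for moves in games:
            board = chess.Board()
            for i, san in enumerate(moves):
                prediction = predict_next_move(moves[:i])
                try:
                    board.parse_san(prediction)  # raises if not legal here
                    legal += 1
                except ValueError:
                    pass  # illegal, ambiguous, or unparsable prediction
                total += 1
                board.push_san(san)  # advance with the move actually played
        return legal / total if total else 0.0

A high legal-move rate on positions well past the opening is hard to explain without the model tracking something like the board state.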


Doesn't really seem like an internal chess model if it's still probabilistic in nature. Seems like it could still produce illegal moves.


So can humans. And nothing stops probabilities in a probabilistic model from approaching or reaching 0 or 1 unless your architecture explicitly prevents that.
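To make that concrete: a softmax over candidate moves is free to concentrate essentially all of its probability mass on a single legal move, so "probabilistic" does not imply "routinely plays illegal moves". A toy example with made-up logits:

    import numpy as np

    def softmax(logits):
        z = np.exp(logits - logits.max())  # subtract max for numerical stability
        return z / z.sum()

    # Hypothetical logits for four candidate moves; one is strongly preferred.
    logits = np.array([12.0, 0.5, -3.0, -5.0])
    print(softmax(logits))  # ~[0.99999, 1e-05, ...]: nearly all mass on one move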


Why?

Or, given https://thegradient.pub/othello/, why wouldn't it have an internal model of chess? It probably saw more than enough example games and quite a few chess books during training.
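The way that article establishes an internal model is worth noting: fit linear probes on the network's hidden activations and check whether the board state can be read back out. A minimal sketch of that idea, assuming you have already extracted activations from some chess-trained model along with the true contents of one square for each position (both hypothetical here), with scikit-learn as the probe:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    def probe_square(activations: np.ndarray, square_labels: np.ndarray) -> float:
        """Fit a linear probe that predicts one square's contents from activations.

        activations: [n_positions, d_model] hidden states (hypothetical).
        square_labels: [n_positions] codes for that square, e.g. empty/white/black.
        Returns held-out accuracy; accuracy well above chance suggests the
        model represents that square's state internally.
        """
        X_train, X_test, y_train, y_test = train_test_split(
            activations, square_labels, test_size=0.2, random_state=0)
        probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
        return probe.score(X_test, y_test)

Run one probe per square, and the pattern of accuracies tells you how much of the position is linearly decodable from the activations, which is essentially the OthelloGPT result.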



