Hacker News

Not that bold, given the results from OthelloGPT.

We know with reasonable certainty that an LLM fed enough chess games will eventually develop an internal chess model. The only question is whether GPT-4 got that far.



Doesn't really seem like an internal chess model if it's still probabilistic in nature. Seems like it could still produce illegal moves.


So can humans. And nothing stops the probabilities in a probabilistic model from approaching or reaching 0 or 1, unless the architecture explicitly prevents that.
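A minimal sketch of that last point: with a standard softmax over move logits (the hypothetical logit values here are made up for illustration), nothing in the math caps how close a probability can get to 0 or 1. As the logit gap between a preferred move and the alternatives grows, the probability mass on the other moves shrinks toward zero.

```python
import math

def softmax(logits):
    # Numerically stable softmax: subtract the max logit before exponentiating.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Three hypothetical candidate moves; the first logit pulls ahead by "gap".
# A larger gap drives the first move's probability arbitrarily close to 1
# and the others' arbitrarily close to 0 -- no architectural floor exists.
for gap in (1.0, 5.0, 20.0):
    probs = softmax([gap, 0.0, 0.0])
    print(f"gap={gap}: p(first move)={probs[0]:.6f}")
```

So a model being probabilistic doesn't preclude it from assigning effectively zero probability to illegal moves; it only means nonzero probabilities are possible, not that they're guaranteed.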



