
LLMs by themselves don’t learn from past mistakes, but you could cycle inference steps with fine-tuning/retraining steps.
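
Rough sketch of the cycling idea in Python (load_base_model, collect_feedback, and fine_tune are made-up placeholders for your own loading, labeling, and training steps, not any particular library):

    # Alternate inference with periodic fine-tuning on collected failures.
    # All helpers here are hypothetical stand-ins.
    model = load_base_model()                       # however you load your model
    examples = []
    for cycle in range(num_cycles):
        outputs = [model.generate(p) for p in prompts]
        examples += collect_feedback(outputs)       # mark which attempts failed and why
        if examples:
            model = fine_tune(model, examples)      # fold the corrections back into the weights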

Also, you can store failed attempts and the lessons learned from them in the context window.
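
A minimal sketch of that second idea, feeding past failures back in as context (call_llm and check_answer are placeholders for whatever client and validator you actually use):

    # Retry loop that carries prior mistakes forward in the prompt.
    # call_llm() and check_answer() are hypothetical; swap in your own client/checker.
    lessons = []
    task = "..."                                    # whatever you are asking the model to do
    for attempt in range(3):
        notes = "\n".join(f"Earlier attempt failed because: {l}" for l in lessons)
        answer = call_llm(f"{notes}\n\nTask: {task}")
        ok, reason = check_answer(answer)           # your own validation step
        if ok:
            break
        lessons.append(reason)                      # the next attempt sees this mistake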


