
TFA claims they managed to replicate an RLHF-style pipeline that lets you bark orders at the raw GPT-3 model and get palatable results back (as opposed to the often repetitive, nonsensical output of the raw model). You won't be able to run this in your terminal, as GPT-3 alone consumes nearly 400GB of RAM, plus whatever post-processing you do on top of it. At the moment, there is no obvious use case for it apart from running a ChatGPT competitor. On the other hand, there was no obvious use case for electricity for nearly a century. But one can speculate that we're getting closer and closer to a "lightbulb" moment for AI.
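For a sense of where that ~400GB figure comes from, here is a rough back-of-envelope sketch. It assumes the published 175B parameter count for GPT-3; the bytes-per-parameter figures are illustrative precisions, not measurements of any particular deployment, and real serving adds overhead (activations, KV cache) on top of the weights.

```python
# Back-of-envelope: memory needed just to hold GPT-3's weights.
# Assumes 175 billion parameters (the published figure for GPT-3).

PARAMS = 175e9  # parameter count, an assumption from the GPT-3 paper

for label, bytes_per_param in [("fp32", 4), ("fp16", 2), ("int8", 1)]:
    gb = PARAMS * bytes_per_param / 1e9
    print(f"{label}: ~{gb:.0f} GB for the weights alone")

# fp32: ~700 GB, fp16: ~350 GB, int8: ~175 GB
```

At fp16 the weights alone are ~350GB, which is consistent with the "nearly 400GB of RAM" figure once runtime overhead is included.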

