
How good is the quality of this? BLOOM is a 176B parameter model, but it doesn't seem to compare to GPT-3 (175B parameters) in terms of output quality.


It's because BLOOM is undertrained: you can prune a lot of BLOOM's weights without hurting performance. See the Chinchilla paper [1], whose 70B model outperforms the 175B GPT-3.

https://arxiv.org/abs/2203.15556
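The Chinchilla result boils down to a rule of thumb: for compute-optimal training you want roughly 20 training tokens per parameter (the exact ratio depends on the paper's fits; 20 is the commonly quoted approximation). A quick sketch of what that implies for these model sizes:

```python
# Chinchilla-style compute-optimal budget, using the commonly quoted
# ~20 tokens-per-parameter rule of thumb (an approximation, not an
# exact figure from the paper's fitted scaling laws).
def chinchilla_optimal_tokens(n_params: float, tokens_per_param: float = 20.0) -> float:
    """Approximate compute-optimal training token count for a model size."""
    return n_params * tokens_per_param

for name, n in [("GPT-3 (175B)", 175e9), ("BLOOM (176B)", 176e9), ("Chinchilla (70B)", 70e9)]:
    tokens = chinchilla_optimal_tokens(n)
    print(f"{name}: ~{tokens / 1e12:.1f}T tokens for compute-optimal training")
```

By this estimate a 175B-parameter model wants on the order of 3.5T training tokens, far more than GPT-3 or BLOOM actually saw, which is the sense in which they are "undertrained."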


In general, most giant LLMs are extremely undertrained at this time. Consider that most of the gains of RoBERTa over BERT came from simply continuing to train.


Undertraining shows up whenever the output degenerates into repeating gibberish or loops. That happened a lot in the GPT-2 AI Dungeon days.


So could we continue training RoBERTa to get it to, say, GPT-3 Ada level?


Out of curiosity, how did you measure their respective performance? My understanding is that BLOOM is roughly comparable to GPT-3 on most NLP tasks. Were you comparing OpenAI's davinci to raw BLOOM, by any chance?


I compared ChatGPT to BLOOM - which, I know, doesn't benefit from RLHF.



