How good is the quality of this? BLOOM is a 176B parameter model, but it doesn't seem to compare to GPT-3 (175B parameters) in terms of output quality.
It's because BLOOM is undertrained: you can prune a lot of weights in BLOOM without impacting performance. Look at the Chinchilla paper[1], where a 70B model trained on far more tokens outperforms the 175B GPT-3.
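A rough way to see the undertraining claim is the Chinchilla rule of thumb of roughly 20 training tokens per parameter. The sketch below compares reported training-token counts against that heuristic; the token figures (GPT-3 ~300B, BLOOM ~350B, Chinchilla ~1.4T) are approximate numbers from the respective papers, and the 20x multiplier is a simplification of the paper's actual scaling fit.

```python
# Chinchilla rule of thumb: compute-optimal training uses roughly
# 20 tokens per parameter. This is a simplification of the paper's fit.
def optimal_tokens_billion(params_billion, tokens_per_param=20):
    """Approximate compute-optimal training tokens, in billions."""
    return params_billion * tokens_per_param

# (name, parameters in B, reported training tokens in B, approximate)
models = [
    ("GPT-3", 175, 300),
    ("BLOOM", 176, 350),
    ("Chinchilla", 70, 1400),
]

for name, params_b, trained_b in models:
    opt = optimal_tokens_billion(params_b)
    print(f"{name}: trained on ~{trained_b}B tokens; "
          f"Chinchilla-optimal would be ~{opt}B")
```

By this heuristic, GPT-3 and BLOOM each saw an order of magnitude fewer tokens than compute-optimal, while Chinchilla sits right at the optimum, which is the core of the undertraining argument.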
In general, most giant LLMs are extremely undertrained at this time. Consider that most of the gains of RoBERTa over BERT came from just continuing to train.
Out of curiosity, how did you measure their respective performances? My understanding is that BLOOM is roughly comparable to GPT-3 on most NLP tasks. Were you comparing OpenAI davinci to raw BLOOM by any chance?