They aren’t really designed to do anything in particular. LLMs are models of human language - it’s literally in the name, Large Language Model.
https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-...
I’m sorry but I don’t trust something that uses a random number generator as part of its output generation.
No. And the article you linked to does not say that (because Wolfram is not an idiot).
Transformers are designed and trained specifically for solving NLP tasks.
> I’m sorry but I don’t trust something that uses a random number generator as part of its output generation.
The human brain also has stochastic behaviour.
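For context on where the "random number generator" actually enters the picture: a typical LLM decoder produces a score (logit) per token, and the sampler draws the next token from that distribution. The sketch below is a generic illustration of temperature sampling, not any particular model's implementation; the logits and temperature value are made up for the example.

```python
# Minimal sketch of temperature sampling in LLM decoding (hypothetical values).
import numpy as np

rng = np.random.default_rng(0)          # the RNG in question
logits = np.array([2.0, 1.0, 0.1])      # hypothetical scores for tokens A, B, C

def sample_next(logits, temperature):
    if temperature == 0:                # greedy decoding: fully deterministic
        return int(np.argmax(logits))
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()                # softmax over temperature-scaled logits
    return int(rng.choice(len(logits), p=probs))

print(sample_next(logits, 0.8))         # stochastic pick among A, B, C
print(sample_next(logits, 0))           # always the highest-scoring token
```

Note that setting temperature to 0 (greedy decoding) removes the randomness entirely; the stochasticity is a sampling choice layered on top of the model, not something inherent to it.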