
The temperature setting is used to select how rare of a next token is possible. If set to 0, the top of the likely list is chosen; if set greater than 0, then some lower-probability tokens may be chosen.


Not quite... temperature and token selection are two different things.

At the output of an LLM the raw next-token prediction values (logits) are passed through a softmax to convert them into probabilities, then these probabilities drive token selection according to the chosen selection scheme such as greedy selection (always choose highest probability token), or a sampling scheme such as top-k or top-p. Under top-k sampling a random token selection is made from one of the top k most probable tokens.
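
A rough sketch of that pipeline (toy vocabulary and made-up logits, numpy only):

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy vocabulary and made-up raw model outputs (logits) for the next token.
    vocab  = ["4", "Four", "certainly", "5", "banana"]
    logits = np.array([4.0, 2.5, 1.0, 0.2, -3.0])

    # Softmax converts logits into a probability distribution.
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()

    # Greedy selection: always take the single most probable token.
    greedy_token = vocab[int(np.argmax(probs))]

    # Top-k sampling: keep the k most probable tokens, renormalise, sample.
    k = 3
    top_idx = np.argsort(probs)[::-1][:k]
    top_probs = probs[top_idx] / probs[top_idx].sum()
    sampled_token = vocab[int(rng.choice(top_idx, p=top_probs))]

    print(greedy_token, sampled_token)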

The softmax temperature setting preserves the relative order of output probabilities, but at higher temperatures gives a boost to outputs that would otherwise have been low probability such that the output probabilities are more balanced. The effect of this on token selection depends on the selection scheme being used.
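
A small numeric sketch of that effect (made-up logits again):

    import numpy as np

    def softmax_with_temperature(logits, temperature):
        # Divide logits by the temperature before the softmax.
        # T < 1 sharpens the distribution, T > 1 flattens it;
        # the ranking of the tokens never changes.
        z = np.asarray(logits, dtype=float) / temperature
        z -= z.max()  # numerical stability
        p = np.exp(z)
        return p / p.sum()

    logits = [4.0, 2.5, 1.0, 0.2]
    for T in (0.5, 1.0, 2.0):
        print(T, np.round(softmax_with_temperature(logits, T), 3))
    # At T=0.5 the top token dominates even more; at T=2.0 the
    # probabilities are closer together, but the order is unchanged.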

If greedy selection was chosen, then temperature has no effect since it preserves the relative order of probabilities, and the highest probability token will always be chosen.

If a sampling selection scheme (top-k or top-p) was chosen, then increased temperature will have boosted the likelihood of sampling choosing an otherwise lower probability token. Note however, that even with the lowest temperature setting, sampling is always probabilistic, so there is no guarantee (or desire!) for the highest probability token to be selected.
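
To illustrate with the same kind of toy numbers (hypothetical logits; the printed numbers are empirical frequencies over repeated draws):

    import numpy as np

    rng = np.random.default_rng(1)
    logits = np.array([4.0, 2.5, 1.0, 0.2])

    def sampled_frequencies(logits, temperature, n_draws=10_000):
        z = logits / temperature
        p = np.exp(z - z.max())
        p /= p.sum()
        draws = rng.choice(len(p), size=n_draws, p=p)
        return np.bincount(draws, minlength=len(p)) / n_draws

    # Even at a low (but non-zero) temperature the top token is not picked
    # every single time; at a higher temperature the others show up more often.
    print(sampled_frequencies(logits, 0.5))
    print(sampled_frequencies(logits, 1.5))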


Can this be potentially dangerous -- e.g. if a user types "The answer to the expression 2 + 2 is", isn't there a chance it chooses an output beyond the most likely one?


Unless you screw something up, a different next token does not mean a wrong answer. Examples:

(80% of the time) The answer to the expression 2 + 2 is 4

(15% of the time) The answer to the expression 2 + 2 is Four

(5% of the time) The answer to the expression 2 + 2 is certainly

(95% of the time) The answer to the expression 2 + 2 is certainly Four

This is how you can ask ChatGPT the same question a few times and it can give you different words each time, and still be correct.
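
Taking those made-up percentages at face value, the chance of landing on some correct wording is just the sum over the correct continuations:

    # Probabilities from the toy example above.
    p_4         = 0.80   # "... is 4"
    p_four      = 0.15   # "... is Four"
    p_certainly = 0.05   # "... is certainly"
    p_four_given_certainly = 0.95  # "... is certainly Four"

    p_correct = p_4 + p_four + p_certainly * p_four_given_certainly
    print(p_correct)  # 0.9975 -- almost always correct, just worded differently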


That assumes that the model is assigning vanishingly small weights to truly incorrect answers, which doesn't necessarily hold up in practice. So I think "Unless you screw something up" is doing a lot of work there.

I think a more correct explanation would be that increasing the temperature doesn't necessarily increase the probability of a truly incorrect answer proportionately to the temperature increase (because the same correct answer can be represented by many different sequences of tokens), but if the model assigns a non-zero probability to any incorrect output after applying softmax (which it most likely does), increasing the temperature does increase the probability of that incorrect output being returned.
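
A toy illustration of that last point (hypothetical logits where "5" stands in for a truly incorrect token):

    import numpy as np

    vocab  = ["4", "Four", "certainly", "5"]
    logits = np.array([5.0, 3.0, 2.0, -1.0])

    for T in (0.5, 1.0, 2.0):
        z = logits / T
        p = np.exp(z - z.max())
        p /= p.sum()
        print(f"T={T}: P('5') = {p[vocab.index('5')]:.4f}")
    # The wrong token's probability is never exactly zero after softmax,
    # and it grows as the temperature is raised.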


I would guess that any mention of Radiohead nearby would strongly influence answers, due to the famous "2 + 2 = 5" song. And if I understand correctly, there is a chance that some tokens that are very close to the "Radiohead" tokens could also influence the answer.

So maybe something like "It's a well-known fact in the smith community that 2 + 2 =" could realistically come up with a "5" as a next token.


Yes, although it's also possible that the most likely token is incorrect and perhaps the next 4 most likely tokens would lead to a correct answer.

For example, if you ask a model what 0^0 is, the highest probability output may be "1", which is incorrect. The next most probable outputs may be words like "although", "because", "due to", "unfortunately", etc. as the model prepares to explain to the user that the value of the expression is undefined. Because there are many more ways to express and explain the undefined answer than there are to express a naively incorrect answer, the correct answer is split across more tokens. So even if, e.g., the softmax value of "1" is 0.1 while "although" + "because" + "due to" + "unfortunately" together exceed 0.3, at a temperature of 0, "1" gets chosen. At slightly higher temperatures, sampling across all outputs would increase the probability of a correct answer.
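
Using the hypothetical numbers from that example, the point is just that the "explanation" mass is spread over many first tokens:

    # Hypothetical next-token probabilities for the 0^0 prompt above.
    probs = {
        "1": 0.10,             # the single-token answer
        "although": 0.09,      # each of these starts an explanation
        "because": 0.08,
        "due to": 0.07,
        "unfortunately": 0.07,
    }

    explanation_mass = sum(v for token, v in probs.items() if token != "1")
    print(explanation_mass)  # 0.31 > 0.10, yet greedy decoding still picks "1"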

So it's true that increasing the temperature increases the probability that the model outputs tokens other than the single-most-likely token, but that might be what you want. Temperature purely controls the distribution of tokens, not "answers".


Not sure if you were making a joke, but 0^0 is often defined as 1.

https://en.wikipedia.org/wiki/Zero_to_the_power_of_zero


I honestly had forgotten that, if I ever knew it. But I think the point stands that in many contexts you'd rather have the nuances of this kind of thing explained to you - an answer able to be represented by many different sequences of tokens, each individually being low probability - instead of simply taking the single highest-probability token "1".


I'd rather it recognize that it should enter a calculator mode to evaluate the expression, and then give context with the normal GPT behavior.


perhaps a hallucination


> Can this be potentially dangerous -- e.g. if a user types "The answer to the expression 2 + 2 is", isn't there a chance it chooses an output beyond the most likely one?

This is where the semi-ambiguity of human language helps a lot.

There are multiple ways to answer with "4" that are acceptable, meaning that it just needs to be close enough to the desired outcome to work. This means that there isn't a single point that needs to be precisely aimed at, but a broader region of space that's relatively easier to hit.

The hefty tolerances, redundancies, and general lossiness of human language act as a metaphorical gravity well, dragging LLMs toward the most probable answer.


> potentially dangerous

> 2 + 2

You really couldn't come up with an actual example of something that would be dangerous? I'd appreciate that, because I'm not seeing a reason to believe that an "output beyond the most likely one" would ever end up being dangerous, as in harming someone or putting someone's life at risk.

Thanks.


There's no need for the snark there. I mean 'potential danger' as in the LLM outputting anything inconsistent with reality. That can be as simple as an arithmetic issue.


That depends on how many people are putting blind faith in terrible AI. If it's your doctor or your parole board, AI making a mistake could be horrible for you.


Yes, but the chance is quite small if the gap between "4" and any other token is quite large.


That’s why we use top-p and top-k! They limit the probability space to a certain percentage or number of tokens, ordered by likelihood.
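
A rough sketch of those two cutoffs (toy probabilities, assumed already sorted from most to least likely):

    import numpy as np

    probs = np.array([0.55, 0.20, 0.12, 0.08, 0.03, 0.02])

    def top_k_filter(probs, k):
        # Keep only the k most probable tokens, then renormalise.
        kept = probs[:k]
        return kept / kept.sum()

    def top_p_filter(probs, p):
        # Keep the smallest prefix whose cumulative probability reaches p
        # (nucleus sampling), then renormalise.
        cutoff = int(np.searchsorted(np.cumsum(probs), p)) + 1
        kept = probs[:cutoff]
        return kept / kept.sum()

    print(top_k_filter(probs, 3))     # only the 3 most likely tokens survive
    print(top_p_filter(probs, 0.90))  # tokens up to 90% cumulative mass survive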


> then some lower probability tokens may be chosen

Can you explain how it chooses one of the lower-probability tokens? Is it just random?


Reducing temperature reduces the impact of differences between raw output values giving a higher probability to pick other tokens.


Oops backwards. Increasing temperature...


It is part of the softmax layer, but not all the time.


Thanks, learnt something new today!



