Can you elaborate on the temperature parameter? Is this something you can configure in the standard ChatGPT web interface or does it require API access?
GPT basically reads the text you have input, and generates a set of 'likely' next words (technically 'tokens').
So for example, the input:
Bears like to eat ________
GPT may effectively respond with "Honey" (33% likelihood that "honey" is the word that follows the statement) and "Humans" (30% likelihood that "humans" is the word that follows). GPT is just estimating which word comes next in the sequence based on all its training data.
With temperature = 0, GPT will always choose "Honey" in the above example.
With temperature != 0, GPT adds some randomness and will occasionally say "Bears like to eat Humans" in the above example.
Strangely, a bit of randomness seems to be like adding salt to dinner - just a little makes the output taste better for some reason.
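The mechanics above can be sketched in a few lines of Python. This is a toy illustration, not GPT's actual code: the probabilities for "Honey" and "Humans" come from the example, and the other filler words are made up to round out the distribution. Temperature rescales the log-probabilities before re-normalizing, so low temperature sharpens the distribution toward the top token and high temperature flattens it.

```python
import math
import random

def apply_temperature(probs, temperature):
    """Rescale a next-token distribution by temperature.

    temperature == 0 collapses onto the single most likely token;
    higher temperatures flatten the distribution toward uniform.
    """
    if temperature == 0:
        top = max(probs, key=probs.get)
        return {tok: (1.0 if tok == top else 0.0) for tok in probs}
    # Divide log-probabilities by temperature, then re-normalize.
    logits = {tok: math.log(p) / temperature for tok, p in probs.items()}
    z = sum(math.exp(l) for l in logits.values())
    return {tok: math.exp(l) / z for tok, l in logits.items()}

# Toy next-word distribution for "Bears like to eat ____"
# ("Berries" and "Fish" are invented filler to make it sum to 1).
probs = {"Honey": 0.33, "Humans": 0.30, "Berries": 0.20, "Fish": 0.17}

greedy = apply_temperature(probs, 0)    # "Honey" gets probability 1.0
flat = apply_temperature(probs, 2.0)    # same ordering, but flatter

# Sampling from the rescaled distribution:
word = random.choices(list(flat), weights=list(flat.values()))[0]
```

With temperature 0 the sample is always "Honey"; at temperature 2.0, "Humans" (and even "Fish") comes up a meaningful fraction of the time.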
It requires API access, but once you have it you can easily play around with temperature in the OpenAI Playground.
Setting temperature to 0 makes the output deterministic, though in my experiments it's still highly sensitive to the inputs. What I mean is that while, yes, for the exact same input you get the exact same output, it's also true that you can change one or two words (in ways that may not change the meaning at all) and get a different output.
It requires API access. temperature=0 means completely deterministic results but possibly worse performance. Higher temperature increases "creativity", for lack of a better word, but with it comes hallucination and gibberish.
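For the curious: via the API, temperature is just one field in the request body sent to the chat completions endpoint. A minimal sketch of the payload (the model name is illustrative, and actually sending it requires an API key, so only the JSON is shown here):

```python
import json

# Request body for OpenAI's /v1/chat/completions endpoint.
# Model name is illustrative; pick whichever model you have access to.
payload = {
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Bears like to eat"}],
    "temperature": 0,  # 0 = (nearly) deterministic; higher = more random
}

# With an API key you would POST it, e.g.:
#   curl https://api.openai.com/v1/chat/completions \
#     -H "Authorization: Bearer $OPENAI_API_KEY" \
#     -H "Content-Type: application/json" \
#     -d "$(cat payload.json)"
print(json.dumps(payload, indent=2))
```

The Playground exposes the same knob as a slider, so you can compare outputs at different temperatures without writing any code.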