
LLMs produce non-reproducible output in response to prompts, and this puts the endless stream of articles about 'the biased LLM said x when I asked' in questionable territory. If the article making the claim doesn't provide the explicit prompt(s) they used to get the output they're so upset about, then it shouldn't be taken seriously.


> LLMs produce non-reproducible output in response to prompts

LLM output is reproducible if you pin the right settings (e.g. greedy decoding / temperature 0 and a fixed seed). (Consumer frontends may not expose those settings, but consumer frontends and LLMs are not the same thing.)
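
A minimal sketch of what those settings can look like against the API (assumes the OpenAI Python SDK and a placeholder model name; temperature 0 plus a fixed seed makes output deterministic for most practical purposes, and system_fingerprint lets you detect backend changes that would break repeatability):

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        messages=[{"role": "user", "content": "Is the minimum wage too low?"}],
        temperature=0,        # greedy decoding
        seed=12345,           # fixed seed for repeatability
    )

    print(resp.system_fingerprint)          # changes when the backend changes
    print(resp.choices[0].message.content)  # stable across runs for a fixed seed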


Additionally, if you use Gemini/ChatGPT via the VertexAI/OpenAI API instead of the consumer frontend, the output is much less moderated: the consumer frontends typically run another LLM (among other moderation tools) over the underlying model's response.
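
For illustration, a minimal sketch of that layering (assumed model names; the second pass here uses OpenAI's moderation endpoint, which is only one of the tools a consumer frontend might stack on top of the raw model, so this is an approximation, not the frontend's actual pipeline):

    from openai import OpenAI

    client = OpenAI()

    # Raw completion from the underlying model -- no extra moderation layer.
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        messages=[{"role": "user", "content": "Summarize the argument for X."}],
    )
    answer = completion.choices[0].message.content

    # Optional second pass, roughly what a consumer frontend adds for you.
    mod = client.moderations.create(
        model="omni-moderation-latest",
        input=answer,
    )

    if mod.results[0].flagged:
        print("[withheld by the moderation layer]")
    else:
        print(answer)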



