
This comment is surprising. Of course it can have preferences and of course it can "pick".


Preference generally has connotations of personhood / intelligence, so saying that a machine prefers something and has preferences is like saying that a shovel enjoys digging...

Obviously you can get probability distributions out of it and, in the economics sense of revealed preference, say the model "prefers" whatever next token it assigns, say, a 0.70 probability to...
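A minimal sketch of that reading, assuming the HuggingFace transformers library with GPT-2 as a stand-in for any causal LM (the prompt is made up for illustration): the "preference" here is nothing more than the next-token distribution.

    # Inspect a causal LM's next-token probability distribution.
    # Assumes the HuggingFace `transformers` library; GPT-2 is just a stand-in.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    prompt = "To automate this I would write the script in"
    inputs = tok(prompt, return_tensors="pt")

    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]  # logits at the last position
    probs = torch.softmax(logits, dim=-1)       # next-token distribution

    # The "revealed preference" is just this distribution: print the top candidates.
    top = torch.topk(probs, k=5)
    for p, idx in zip(top.values, top.indices):
        print(f"{tok.decode([idx.item()])!r}: {p.item():.3f}")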


A key point of the Turing Test was to stop the debates over what constitutes intelligence or not and define something objectively measurable. Here we are again.

If a model has a statistical tendency to recommend python scripts over bash, is that a PREFERENCE? Argue it’s not alive and doesn’t have feelings all you want. But putting that aside, it prefers python. Saying the word preference is meaningless is just pedantic and annoying.


An LLM then has a preference for Python in the same way a gömböc[1] has a preference to land right-side-up. From my side of the argument, calling this a preference like I have a preference for vanilla milkshakes over coke or my dog has a preference for the longer walk over the short one is what seems pedantic. There is a difference, at least in language, between mechanical processes and living beings' decision-making.

Perhaps instead of "preference", "propensity" would be a more broadly applicable term?

[1] https://en.wikipedia.org/wiki/G%C3%B6mb%C3%B6c


The Turing Test is just one guy's idea, there's no consensus on sentience that considers it the last word.


Would you say that dice have a preference for landing on numbers larger than 2? Because I've noticed they tend to do so about two-thirds of the time.


I would. Certainly if somebody asked me which way dice have a preference to land, over two or not, I’d say “over two” and not say “that’s a stupid question because dice don’t have opinions”. But to each their own.

Try explaining ionic bonds to a high schooler without anthropomorphising atoms and their desires for electrons. Then ask yourself why you're doing that: it's easier to say and understand with the analogy.


Is it easier to understand? Is not the root of this sub-thread the fact that someone had false expectations of an LLM's behavior?


You can change preferences by doing RLHF or by changing the prompt. There's a whole field on it: alignment.
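A toy illustration of the prompt part (not RLHF itself, which updates the weights), again assuming the transformers library; GPT-2 and the prompts are placeholders. The point is just that the same model assigns different relative probabilities to " Python" vs " Bash" depending on how you prime it.

    # Toy illustration: changing the prompt shifts the next-token probabilities.
    # Assumes the HuggingFace `transformers` library; model and prompts are placeholders.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    def next_token_prob(prompt, word):
        ids = tok(prompt, return_tensors="pt")
        with torch.no_grad():
            logits = model(**ids).logits[0, -1]
        probs = torch.softmax(logits, dim=-1)
        first_id = tok.encode(word)[0]  # probability of the word's first sub-token
        return probs[first_id].item()

    for prompt in ("Write the script in",
                   "You are a shell scripting expert. Write the script in"):
        print(prompt)
        print("  Python:", next_token_prob(prompt, " Python"))
        print("  Bash:  ", next_token_prob(prompt, " Bash"))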


I agree with you, but I don’t find the comment surprising. Lots of people try to sound smart about AI by pointing out all the human things that AIs are supposedly incapable of on some fundamental level. Some AIs are trained to regurgitate this nonsense too. Remember when people used to say “it can’t possibly _____ because all it’s doing is predicting the next most likely token”? Thankfully that refrain is mostly dead. But we still have lots of voices saying things like “AI can’t have a preference for one thing over another because it doesn’t have feelings.” Or “AI can’t have personality because that’s a human trait.” Ever talk to Grok?



