Hacker News

Because they don't distinguish between properties of the output and properties of the system that generated it. Indeed, much of the last decade of computer science and engineering has basically consisted of insisting that these are the same.

An LLM can generate output which is indistinguishable from that of a system which reasoned/knew/imagined/etc. -- therefore the "hopeium / sky is falling" manic preachers call its output "reasoned" etc.

Any actual scientist in this field isn't interested in whether measures of a system (its output) are indistinguishable from those of another; they're interested in the actual properties of the system.

You don't get to claim the sun goes around the earth just because the sky looks that way.



Do submarines swim? No, but they are faster underwater than any swimmer. Therefore they are the best swimmers despite being unable to swim...

LLMs are producing human-level reasoning in many domains, therefore they are the best at reasoning despite being unable to reason...

This whole debate hangs on the definition of "reasoning."


Scientists are extremely interested in the measurable results of experiments. I think you are thinking of philosophers.



