Hacker News

What are you calling reasoning? Is reasoning something entirely separate from choosing what word is expected next in a sentence based on the logic of the words? You can give GPT-3 things like "Alice was friends with Bob. Alice went to visit her friend _____" and "10 + 10 = 20. 20 + 20 = ___" and get right answers. You can tell it the definition of a made-up word and ask it to use the word in a sentence, and it can come up with a sentence that uses it in a relevant context. You can give it a short story and ask it questions about the story. All of these go far beyond basic syntactic concerns like putting subjects and verbs in the right spots.


> Is reasoning something entirely separate from choosing what word is expected next in a sentence based on the logic of the words?

Yes, otherwise a markov chain would be reasoning, too.
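To make the Markov-chain point concrete, here is a minimal bigram next-word model in Python. The corpus, function names, and structure are invented for this sketch; it only counts word transitions and picks the most frequent successor, with no logic involved:

```python
from collections import defaultdict

def train_bigram(text):
    """Count bigram transitions: word -> {next_word: count}."""
    words = text.split()
    model = defaultdict(lambda: defaultdict(int))
    for a, b in zip(words, words[1:]):
        model[a][b] += 1
    return model

def most_likely_next(model, word):
    """Pick the highest-count successor - pure frequency, no reasoning."""
    followers = model.get(word)
    if not followers:
        return None
    return max(followers, key=followers.get)

model = train_bigram("the cat sat on the mat and the cat slept")
print(most_likely_next(model, "the"))  # "cat", seen twice vs. "mat" once
```

Such a model will happily continue "Alice went to visit her friend" with whatever word most often followed "friend" in its training data, which is exactly why producing a plausible next word is not, by itself, evidence of reasoning.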

> "10 + 10 = 20. 20 + 20 = ___"

You can do the same in Prolog and similar languages. Is the language/the compiler reasoning?

> All of these go far beyond basic syntactic concerns like putting subjects and verbs in the right spots.

I'm not sure about that. The output obviously is a wonderful achievement, and will find a multitude of applications, but isn't it technically still a stochastic model describing the probability of a sequence of events, albeit at unprecedented scale?

Reasoning needs conscious thought, sentience, awareness, and we don't have the slightest hint that these are present in GPT. Yes, humans reason about something and then write a coherent text, but that doesn't mean that the presence of a coherent text is proof of reasoning - just as the absence of the capability to produce coherent text isn't proof of the absence of reasoning (e.g. in the horrible cases of locked-in syndrome https://en.wikipedia.org/wiki/Locked-in_syndrome )


>> You can do the same in Prolog and similar languages. Is the language/the compiler reasoning?

Yes, absolutely. The Prolog interpreter is a resolution-based theorem prover. You will be very hard pressed to find any AI researcher arguing strongly that "that's not reasoning". In fact I believe automated theorem proving is one of the very few cases where you can find so little disagreement about whether something is "reasoning" or not.
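The mechanism a Prolog interpreter uses (SLD resolution) can be sketched in a few lines of Python. This toy version works only over ground atoms, so no unification is needed - a deliberate simplification; the facts and rule are the classic textbook example, not anything from the thread:

```python
# Facts are known-true atoms; rules are (head, [body goals...]),
# read as "head holds if every body goal holds".
facts = {"human(socrates)"}
rules = [("mortal(socrates)", ["human(socrates)"])]

def prove(goal):
    """Backward chaining: a goal holds if it is a fact, or if some
    rule's head matches it and every body goal can be proved in turn."""
    if goal in facts:
        return True
    return any(head == goal and all(prove(g) for g in body)
               for head, body in rules)

print(prove("mortal(socrates)"))  # True - derived via the rule, not memorised
```

The point is that the conclusion is derived by applying an inference rule to premises, which is the sense in which theorem proving is uncontroversially called "reasoning".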

Note also that "reasoning" is not necessary to solve a problem like "10 + 10 = 20. 20 + 20 = ___" (1). Knowledge of addition is sufficient: given knowledge of addition, the result of "20 + 20" can be derived without reference to "10 + 10 = 20". And, absent knowledge of addition, even if a system can answer (1), it will not be able to answer the majority of similar problems, indicating that it has only memorised the answer to (1).

A better test of what a system has learned is a question like "10 + 10 = 30. 20 + 20 = ___" (2). The answer to that should still be "40", but again that's not because of any reasoning; it's because "20 + 20" is always "40", even when preceded by a false statement. So this kind of question is really not any way to test reasoning abilities.

Edit: Actually, "Alice was friends with Bob. Alice went to visit her friend ___" (3) is not a very good test for reasoning, either. If I were to answer "Alice", would you be able to say whether that's true or false? The only way to make such a decision is in the context of a "closed world assumption" (incidentally, central to Prolog's theorem proving). However, now you're making a much more precise claim, that "GPT-3 has learned to answer questions by making a closed-world assumption". You can test this claim much more convincingly by asking questions like "Alice is friends with Bob. Is Alice friends with Alice?" The answer should be "no" (or "false", "incorrect", etc).

Has this kind of more formal test been carried out, with GPT-x?


> You can do the same in Prolog and similar languages. Is the language/the compiler reasoning?

Yes, of course. Reason isn't magic.


Prolog is literally ‘logic programming’. It’s right in the name.


No, the compiler is following rules humans implemented. The humans were reasoning. The compiler follows a well-defined process. This is also what a scientific calculator can do - but the calculator isn't an example of AI, either.

Many linguists think that every human language follows fundamental patterns (e.g. [0]). In that context, the achievement of GPT is that it indirectly derived such a model by working through ungodly amounts of data. The results sound meaningful for us - but that doesn't imply that GPT intended meaning.

Every theory of reason I know has consciousness as a hard requirement. I'm not trying to be pedantic, but the topic of this thread is exactly the kind where clear definitions of words are important.

If Prolog is reasoning, then a scientific calculator is, too. But now we just need another word for the thing that differentiates us from calculators.

[0] https://en.wikipedia.org/wiki/Principles_and_parameters


What sort of definition of reasoning implies or requires consciousness? I haven’t seen one.


Those from Locke, Hume, Kant, Habermas, among others.


Is that then really a definition that carves reality at the joints? What is it that the AI will actually be hindered in doing, that is described by its inability to meet this definition?

If I think about a process using a certain mechanism, and the AI thinks about a process using a similar mechanism, but also I have a consciousness attached on top and the AI does not, then it seems petty to assign these processes different labels based on a component whose mechanical relevance is not shown. I'm not doubting the impact of conscious, reflective reasoning on human capability, mind! But most of the thinking I do is not that.

Also as a general rule, you should be skeptical of considerations of reason that are based largely on introspection; the process is inherently biased towards consciousness as a load-bearing element, since consciousness is so heavily involved in the examination.


These are very good points! Current theories of reason are obviously assuming human minds. Still, even if one wants to create a new definition that includes AGIs, there has to be some concept of agency, of wanting to achieve something, with the capability being the means to that end. The capability alone isn't what brings us closer to AGI.


Well, in the same way a neural network follows rules humans implemented - with a bit of mathematical optimization to actually fit the problem!


I think he meant GPT-3 does zero human reasoning.


Do we know that humans do reasoning? People talk much much faster than they could work out anything like what people consider to be logical reasoning.

Remember that logic had to be invented by Aristotle. It was a mechanical system meant to approximate how humans make decisions naturally.


Well, of course. It’s not human. Even a superintelligent AGI would likely do zero ”human reasoning” unless it wanted to do authentic emulation of a meatbrain for whatever reason. Like keeping uploaded humans running.



