
This is something I think about often, and I always conclude it isn't currently possible in the way I wish it were.

While we can make a concept graph, what I often wonder is whether it's really possible to make a computer think of a thing, truly have an idea of it in its "head" the way it is for a person.

When you think of an apple, you don't just connect to a text description of it and a picture and a bunch of links to "fruit" and "seed" and "food." You sort of see it and feel it and taste it and know its value. It's rendered in senses, not in text.

I am not confident that it will be possible for a computer to understand something that way for a very long time. I think until we understand how that information is encoded in our own minds, getting a machine to truly understand it the same way will be elusive.

When I was recently considering this, the fundamental difference I came down to was this: a living thing wants things, needs things. So long as a computer does not have any desires, I just don't see how it could ever understand the world the way we do. What would anything matter to you if you didn't eat, drink, sleep, feel, get bored, get curious?

I think those aspects of a living thing drive our understanding of everything else. Without that, it's all just text.

But of course I understand perfectly that I am speaking of a much longer-timeline project, and that a Probase-like component is still a big part of it and can still independently move things forward quite a bit.



What you're looking for is called the symbol grounding problem:

| But as an approach to general intelligence, classical symbolic AI has been disappointing. A major obstacle here is the symbol grounding problem [18, 19]. The symbolic elements of a representation in classical AI – the constants, functions, and predicates – are typically hand-crafted, rather than grounded in data from the real world. Philosophically speaking, this means their semantics are parasitic on meanings in the heads of their designers rather than deriving from a direct connection with the world. Pragmatically, hand-crafted representations cannot capture the rich statistics of real-world perceptual data, cannot support ongoing adaptation to an unknown environment, and are an obvious barrier to full autonomy. By contrast, none of these problems afflict machine learning. Deep neural networks in particular have proven to be remarkably effective for supervised learning from large datasets using backpropagation. [..] The hybrid neural-symbolic reinforcement learning architecture we propose relies on a deep learning solution to the symbol grounding problem.

Source: Marta Garnelo et al: Towards Deep Symbolic Reinforcement Learning https://arxiv.org/pdf/1609.05518.pdf


Well, that, and the computer hasn't had years of experience with apples and Apples as well... years of understanding how they taste, that they get paired with things, are often part of meals with children, are connected with biblical stories, can be thrown, how they fit in cultural contexts (such as the Jewish New Year), etc.

It's not just about perceptual data of an apple but rather having LIVED apples and absorbed their millions of data points. I'm skeptical of how far AI can go from statistics on text alone, NN or otherwise.


>> Pragmatically, hand-crafted representations cannot capture the rich statistics of real-world perceptual data, cannot support ongoing adaptation to an unknown environment, and are an obvious barrier to full autonomy.

Pragmatically, machine learning systems can't do any of those things either. In principle they can, but in practice they need so much data, and training takes up so many resources (not least the ones needed for supervision, i.e. annotations), that creating a truly autonomous system is unfeasible. Which is why we don't have such systems yet, even though we've had machine learning for a good few decades now.

>> Deep neural networks in particular have proven to be remarkably effective for supervised learning from large datasets using backpropagation.

Oh yes, absolutely: in laboratory conditions and in well-circumscribed tasks (image recognition from photographs, say). In the noisy, dirty, hostile real world, not so much.

We still have a long way to go before we get to the holy grail. We're not even at the beast of AAAaaargh yet. And remember what pointy teef that one's got.

(Apologies for speaking in allegories; I mean that we haven't yet tackled the hardest problems, because we've yet to encounter them. We're stuck with the "low-hanging fruit", as, I believe, Andrew Ng has said.)

____________

Edit: But, um- that's a really nice paper. Thanks.


Thank you for linking me to this. I had never heard of it. That is exactly it.


Besides "symbolic grounding" also look up "word vectors". It is an attempt to ground words in the statistical probability of their surrounding words in very large bodies of text.


I also recommend 'Ventus' by Karl Schroeder. It's a fun scifi read, covers some of these concepts and can be downloaded for free: http://www.kschroeder.com/my-books/ventus/my-books/ventus/fr...


FWIW, I don't think computers will need to think of a thing any more than they need to "learn" that 1 + 1 = 2. When you break down how logic circuits work to the point where you are describing a full adder in gates you can see that 1 and 1 "has" to be 2. And when you start tracing out the n-dimensional graph space of concepts you will see that "understanding" that Oct 13, 1881 is a "date in time" is simply because that is the only concept in the graph near the nodal points.
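
For what it's worth, the full-adder point can be made concrete in a few lines; this is just the standard sum/carry gate logic, nothing learned:

  # A full adder as plain gates: 1 + 1 gives sum bit 0 with carry 1,
  # i.e. binary 10, i.e. 2 -- the answer falls out of the wiring.
  def full_adder(a, b, carry_in):
      s = a ^ b ^ carry_in
      carry_out = (a & b) | (carry_in & (a ^ b))
      return s, carry_out

  print(full_adder(1, 1, 0))  # (0, 1)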

It is exceedingly challenging to conceptualize n-dimensional topologies given our three-dimensional (or four-dimensional) upbringing, but when you consider that the definition of a dimension is a direction orthogonal to all other dimensions, you can think of the orthogonal things that might lie along another dimension and connect topologically.

For example, 13 is a number, it's a prime, it can be a day of the month, it can be a street address, etc. You can think of '13' as a dimension which is orthogonal to all of those other dimensions (numbers, primes, dates, addresses, etc.) such that it spears through them. Now, "Oct" also spears through a bunch of alternate dimensions, but the only dimensions that both Oct and 13 exist in are the 'date' dimension and maybe the ASCII dimension (13 can be an octal number). But add the 1881 and the three of them now land pretty squarely in the "dates" plane of existence.
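
Not how any real system does it, but a toy illustration of that intersection idea, with hand-picked concept sets standing in for a real ontology (the sets below are purely illustrative assumptions):

  # Each token "spears through" several candidate concepts; the concepts
  # shared by all tokens are the plausible readings.
  CONCEPTS = {
      "Oct":  {"month", "date-part", "octal-prefix", "abbreviation"},
      "13":   {"number", "prime", "day-of-month", "street-address", "date-part"},
      "1881": {"number", "year", "street-address", "date-part"},
  }

  def plausible_readings(tokens):
      readings = set(CONCEPTS[tokens[0]])
      for t in tokens[1:]:
          readings &= CONCEPTS[t]
      return readings

  print(plausible_readings(["13", "1881"]))         # number / street-address / date-part: still ambiguous
  print(plausible_readings(["Oct", "13", "1881"]))  # only date-part survives: read it as a date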

The trick is searching the n-dimensional solution space in finite time. That is certainly something a quantum computer might achieve more easily than a von Neumann machine, but given that the dimensional space is nominally parallelizable (at the expense of memory, of course), I expect you can get fast enough with enough compute threads.

Another challenge is constructing the concept graph to begin with, but there is lots of great research going into combining ontologies with natural language processing in order to build concept graphs. If I were getting a PhD today I'd probably be working on that particular problem.


What do you think of learning these things by watching films?

We have hundreds of thousands of hours of programming right from the Kindergarten level.


That feels to me like stacking challenges, because image interpretation is a challenge layered on top of concept generation, which is a challenge in itself.

That said, there is the problem of believing your initial concepts. And, like people, if you start with a bunch of bogus concepts it's going to be hard to break free of that and establish concepts more liberally. I think about it as the question of not only establishing the concepts but establishing the validity of the concepts that have been established. In a very sparse concept space your "best match" can be really far off from what someone with a more filled-out concept space would consider valid.


We don't really know what it is for a human to "think" of something. Introspection of what we think it is does not lead us to understand what it is to think of something. I think our needs often limit our understanding of what things are. For instance, our need to satisfy hunger means we mostly consider an apple in terms of its utility for doing that. Science lets us view apples in a much more detailed way: their chemical makeup, their biology, etc. We often don't have all of that in mind when we think of an apple, however. But a computer could potentially have a much more pervasive view of an apple. It may conceptualize things in ways we can't. I think that might be more interesting: that it will understand the world in a way we don't (which may include, as a subset, how humans understand things).

However, we are struggling with the first steps of doing this and are still unsure whether it's even computable.


> We don't really know what it is for a human to "think" of something. Introspection of what we think it is does not lead us to understand what it is to think of something.

This doesn't seem right. Introspection might not give us all the answers, but it's a critical (and probably the single most important) aspect of understanding how we think. Entire branches of philosophy deal specifically with this and have done so for thousands of years.

I personally found Descartes' thoughts particularly interesting in this regard. Also, here's a pretty good overview on introspection in contemporary philosophy: http://plato.stanford.edu/entries/introspection/


I think (heh) introspection allows us to characterise the nature of thinking, but we started making a lot more progress on how our brains work when we went down the neuroscience path. Philosophy hasn't answered much about how we think, but it asks a lot of questions about the nature of thinking: what we can know, what constitutes a mind, how a mind relates to a body, how we can recognise that something has a mind. But it has done very little to answer what it is that allows us to think.

Descartes dealt with rational thought and what we can know absolutely. He wanted a logical progression so we can prove everything from fundamental truths.

Philosophy of mind is the most relevant branch here, and I guess one of the famous problems we can't really resolve yet is the Chinese Room: https://en.wikipedia.org/wiki/Chinese_room


To put it another way, a human learns by experience. When you think of "an apple", you think of various sensory experiences in which an apple has played a role; eating the apple, throwing the apple, etc. At some level, these experiences are finite; an apple corresponds to a physical object, and there are only a few thousand ways that humans and objects interact.

A better consideration is whether computers should be limited to understanding things the way humans understand them. Sensors can characterize apples in uncommon ways; X-rays, microwaves, nanoscale structures, etc. Similarly machinery can interact with apples in ways that humans cannot, such as vaporizing them, disassembling them, or launching them to Mars. Perhaps some combinations of action verbs and nouns are impractical or impossible; that's a physical, experimental property, rather than a property tied to human experience. At the end of the day a computer only needs to know about humans in order to interact with them; its representation of the world is distinct.


I think text is too high-level a concept for this kind of thing; all your examples of senses are really just input memory. Text to a computer is simply input and can mean different things in different contexts. Is the binary string coming out of some temperature sensor really any different from a digital audio stream, or a string of text? It's simply a case of mapping those inputs to a concept of feeling, hearing or reading.

I think the real challenge for this kind of approach is always going to be raw processing power and the size of data sets. Our brains may not be incredibly efficient, but they have so much more to work with than even our largest data centers, and the amount of data that comes to us every living moment is basically infinite compared to the curated sets we feed our current learning machines.

So imitating the way people learn like this is probably the key to getting something to properly "think", I just wonder when the resources available to our computers will catch up to the resources available to our brains.


If you remove the ability for a human to feel emotions, they have a very hard time making decisions. They can still reason about their options, but they just can't decide. I don't have the citation at hand but there are actual case studies.

Although we think that we make decisions rationally, the reality is that we make decisions emotionally. Our rationality is not the master of our emotions--it serves them.

So if you want a computer to think like a person, you need to give a computer emotions. To my knowledge there is very little academic work in this direction. To use my favorite example, no one is trying to build a self-driving car that just doesn't feel like driving that day.

And to return to the point above, we think that we think a certain way. But when we think about our thoughts, we're using the same mind that we're analyzing. It's certainly possible that we are fooling ourselves. Maybe even we don't think about things the way you describe--but we can't tell the difference, because we can't get out of our own minds, or into someone else's mind.


I think the reason that there is little academic work in the direction of making emotional machines is because we don't have a clear avenue of attack for that problem. We understand so little about the brain in general, and emotions seem buried near the bottom of that mystery.

Psychoactive drugs and hormones are so good at altering emotional state that it doesn't seem implausible that emotions might be as simply "implemented" as logical reasoning, or that emotions and human logic are in fact different shades of the exact same biological system. The hope would then be that emotions will emerge automatically once we've developed a system of sufficiently complex thinking.

Even more extreme, some people hold the belief that consciousness itself is a sort of post facto illusion—that we don't truly "think" at all, and everything we perceive is a backwards looking rationalization that arises as an accident of the complex chemistry of the brain. Timed brain scans seem to superficially support this philosophy. If this is the case, then building mammal-like machine intelligence may not be so mysterious in the long run, though this raises some pretty mind-bending ethical and philosophical issues.

That all said, I fundamentally agree with your point. It certainly seems like there is very little work, if any, that's advancing our understanding of how to do anything other than optimize certain tasks. Those tasks are progressively becoming more and more complex, but they're still extremely narrow in scope. From where I sit, it seems like we'll have to solve a whole lot of "pointless" (unprofitable) problems before we come anywhere close to finding general AI. Not the least of these problems is our fundamental lack of understanding of what our own "thinking" even really is.


> The hope would then be that emotions will emerge automatically once we've developed a system of sufficiently complex thinking.

If we look at nature, we see the opposite: almost all animals seem to experience some sort of emotional reaction to stimulus, even if they don't seem capable of complex rational thinking.

> We understand so little about the brain in general, and emotions seem buried near the bottom of that mystery.

I agree: emotions seem more fundamental to thinking than rational symbolic reasoning.


> If we look at nature, we see the opposite: almost all animals seem to experience some sort of emotional reaction to stimulus, even if they don't seem capable of complex rational thinking.

Well, my (personal) bar for "sufficiently complex thinking" is pretty low. I would say any animal we can perceive emotion in has far more complex thinking than the theoretical lower bar. I would take the perspective that emotions are probably present in some animals that are so non-human we don't assume they have consciousness.


The other amazing thing is it takes infants months if not years to grasp certain fundamental realities about the world. They also take in a massive amount of constant data that gets parsed through sensory inputs and lower level instincts before it even registers with emotions.

I would not be surprised if we find the secret is in building up from base instincts and flooding it with sensory data while we "parent" the AI.


Could emotion be modeled as a deviation (amplified or dampened) from an ideal rational response, given the information? Like a short circuit that allows sensation to override the rational processing?
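
As a purely hypothetical sketch of what that "short circuit" might look like: score each action with a rational value estimate, then let an arousal level decide how much an emotional bias overrides it. All the names and numbers below are invented for illustration:

  # Blend a "rational" action value with an emotional bias; arousal in [0, 1]
  # controls how much the emotion amplifies or dampens the rational choice.
  def choose(actions, rational_value, emotion_bias, arousal):
      def score(a):
          return (1 - arousal) * rational_value[a] + arousal * emotion_bias[a]
      return max(actions, key=score)

  actions = ["brake", "swerve", "accelerate"]
  rational_value = {"brake": 0.9, "swerve": 0.6, "accelerate": 0.1}
  emotion_bias   = {"brake": 0.2, "swerve": 0.3, "accelerate": 0.8}  # e.g. panic or thrill

  print(choose(actions, rational_value, emotion_bias, arousal=0.1))  # brake
  print(choose(actions, rational_value, emotion_bias, arousal=0.9))  # accelerate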


Yesterday there was a short item on Dutch radio about a query to Siri: "When will world war 3 happen?". Somehow Siri would give an exact date in the future, but nobody knows why that date was chosen.

The concepts in the question are clear to current systems. 'When' clearly signals a question about time (though the concepts of past and future might get mixed up). 'World war 3' can also be a concept that current systems 'understand'.

Let's say there is a news article that says: "If Trump wins the elections world war 3 will happen". And another article says: "When Trump wins the poll on 2016-11-05 he might win the elections". Siri might combine this into: "World war 3 will happen on 2016-11-05".
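
A toy of that failure mode, with invented rules and facts: a naive chainer treats "if X then Y" plus "X, dated D" as licence to assert "Y on D", dropping the "might" and the poll/election distinction along the way:

  # Naive forward chaining over invented facts; dates just get copied along.
  rules = [
      ("candidate wins poll",     "candidate wins election"),  # the "might" got dropped
      ("candidate wins election", "world war 3 happens"),
  ]
  facts = {"candidate wins poll": "2016-11-05"}

  def naive_chain(facts, rules):
      derived = dict(facts)
      changed = True
      while changed:
          changed = False
          for antecedent, consequent in rules:
              if antecedent in derived and consequent not in derived:
                  derived[consequent] = derived[antecedent]
                  changed = True
      return derived

  print(naive_chain(facts, rules))
  # {'candidate wins poll': '2016-11-05',
  #  'candidate wins election': '2016-11-05',
  #  'world war 3 happens': '2016-11-05'}   <- confidently wrong, no context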

But Siri doesn't know the context in which the question was asked. And I think the only way to get this right is:

  * ask about the context
  * track everything a user does to estimate the context

I think the movie Her [1] does this. The OS is constantly asking him questions so 'she' can learn about his context. And of course the first question the OS asks is brilliant: "How is your relationship with your mother?"

[1] http://www.imdb.com/title/tt1798709/


What you're describing sounds a lot like a statistically trained system.

"until we understand how that information is encoded in our own minds, getting a machine to truly understand it the same way will be elusive."

Here's a (fairly convincing imo) discussion as applied to language:

http://norvig.com/chomsky.html

Further, I think human emotions are pretty transparent -- e.g. why might people lust after high calorie foods?

The timeline is probably far shorter than you are describing here.


> what I often wonder is whether it's really possible to make a computer think of a thing

Trivial constructive proof that the answer is "yes": as far as we know, it is physically possible to measure and then simulate a human brain to an accuracy well below the thermal noise floor at normal brain temperature.

That is, you can always literally just run a human brain on a computer, and unless we're entirely wrong about all of physics, it will do everything a physical human brain would.

> So long as a computer does not have any desires

"Desire" is actually pretty well understood in the frameworks of decision theory and utility theory. You can always make a program "want" something in terms of that thing having a positive value in the program's utility function.

> What would anything matter to you if you didn't eat, drink, sleep, feel, get bored, get curious?

What would anything matter to you if you didn't shit, get pneumonia, and die? All the things you mentioned are just random things that humans happen to do; I'm not sure what they have to do with the concept of having preferences.

> Without that, it's all just text.

The representation doesn't really matter. Having desires is a property of the internal behavior of an agent, not how those behaviors are implemented.


I think about this as well. Is it possible to have a mind, the way we conceive it, without a body? I don't think so.

The very notion of oneself being apart from the world is, IMO, sensorial at first. Knowing the limits of your body is essential to defining oneself. A free-floating consciousness seems unfathomable.

We may need to infuse sensory inputs first before we can have a true AI.


There are some CNNs that will output heatmaps of where the classifier for a label is triggered most strongly in an image. Does that count as being "rendered in senses"?

Also, if you train the NN on purely textual data, there are no senses like you describe to associate it with, since its only senses are symbolic.
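
The heatmaps mentioned above usually come from class-activation or Grad-CAM style methods; a simpler, model-agnostic sketch is occlusion sensitivity, shown below. `classify` is a hypothetical stand-in for a real CNN's "score for this label" output:

  # Slide a grey patch over the image and record how much the label's score
  # drops; big drops mark regions the classifier relied on.
  import numpy as np

  def occlusion_heatmap(image, classify, patch=8, stride=8):
      h, w = image.shape[:2]
      base = classify(image)
      heat = np.zeros(((h - patch) // stride + 1, (w - patch) // stride + 1))
      for i, y in enumerate(range(0, h - patch + 1, stride)):
          for j, x in enumerate(range(0, w - patch + 1, stride)):
              occluded = image.copy()
              occluded[y:y + patch, x:x + patch] = 0.5  # grey out a patch
              heat[i, j] = base - classify(occluded)
      return heat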


What's preventing there from being an AI with a hardcoded desire to regulate values like hunger level, boredom level, etc.? I could imagine a problem-solving AI dedicated to continuously solving those problems.


But how?

hunger = 100;

while (hunger > 0) { seekFood(); }

// Is this what hunger is to a machine, at its basest level? An int and a while loop? Is that really what it means to understand hunger? This and a text description?


I mean hunger is really just a gut reaction created to tell us we need to eat or we'll die eventually.

edit: s/created/developed over time/


while(sugar = 0) seekFood()

And thus the grey goo was created


Hm no, you used = instead of ==, so this will never seek food. ;)


You probably mean that it will always seek food since the assignment evaluates to 'true' when it's successful (which is usually the case).


In what language? In all that I know, assignment evaluates to the value that was assigned (that is, if it evaluates to anything at all). Also in most languages that look like C, 0 evaluates to false. Therefore it will never seek food.


Usually the case as in usually never the case.


Many serious AGI efforts are aware of the need to ground learning in sensory data from (real or virtual) embodiment. DeepMind is the most famous and popular one.


Well, you can have the best optimization algorithm, but you still need to define the various utility functions.

The field of embodied cognition attempts to approach that. https://en.wikipedia.org/wiki/Embodied_cognition

If you make a drone that feels pleasure when refueling and killing people, guess what the drone will do.



