
What you're looking for is called the symbol grounding problem:

| But as an approach to general intelligence, classical symbolic AI has been disappointing. A major obstacle here is the symbol grounding problem [18, 19]. The symbolic elements of a representation in classical AI – the constants, functions, and predicates – are typically hand-crafted, rather than grounded in data from the real world. Philosophically speaking, this means their semantics are parasitic on meanings in the heads of their designers rather than deriving from a direct connection with the world. Pragmatically, hand-crafted representations cannot capture the rich statistics of real-world perceptual data, cannot support ongoing adaptation to an unknown environment, and are an obvious barrier to full autonomy. By contrast, none of these problems afflict machine learning. Deep neural networks in particular have proven to be remarkably effective for supervised learning from large datasets using backpropagation. [..] The hybrid neural-symbolic reinforcement learning architecture we propose relies on a deep learning solution to the symbol grounding problem.

Source: Marta Garnelo et al: Towards Deep Symbolic Reinforcement Learning https://arxiv.org/pdf/1609.05518.pdf



Well, that, and the computer hasn't had years of experience with apples and Apples... years of learning how they taste, what they get paired with, that they show up in children's meals, that they're connected with biblical stories, that they can be thrown, how they fit into cultural contexts (such as the Jewish New Year), etc.

It's not just about perceptual data of an apple, but about having LIVED apples and absorbed millions of data points about them. I'm skeptical about how far AI can go on statistics over text alone, NN or otherwise.


>> Pragmatically, hand-crafted representations cannot capture the rich statistics of real-world perceptual data, cannot support ongoing adaptation to an unknown environment, and are an obvious barrier to full autonomy.

Pragmatically, machine learning systems can't do any of those things either. In principle they can, but in practice they need so much data, and training takes up so many resources (not least the ones needed for supervision, i.e. annotations), that creating a truly autonomous system is infeasible. Which is why we don't have such systems yet, even though we've had machine learning for a good few decades now.

>> Deep neural networks in particular have proven to be remarkably effective for supervised learning from large datasets using backpropagation.

Oh yes, absolutely, in laboratory conditions and on well-circumscribed tasks (image recognition from photographs, say). In the noisy, dirty, hostile real world, not so much.

We still have a long way to go before we get to the holy grail. We're not even at the beast of AAAaaargh yet. And remember what pointy teef that one's got.

(Apologies for speaking in allegories. I mean that we haven't yet tackled the hardest problems, because we've yet to encounter them. We're stuck with the "low-hanging fruit", as, I believe, Andrew Ng has said.)

____________

Edit: But, um, that's a really nice paper. Thanks.


Thank you for linking me to this. I had never heard of it. That is exactly it.


Besides "symbolic grounding" also look up "word vectors". It is an attempt to ground words in the statistical probability of their surrounding words in very large bodies of text.


I also recommend 'Ventus' by Karl Schroeder. It's a fun sci-fi read, covers some of these concepts, and can be downloaded for free: http://www.kschroeder.com/my-books/ventus/my-books/ventus/fr...



