"I am certain that HTM networks applied to applications like vision will not exhibit the problems talked about in this paper. Brains won’t either."
Put up or shut up, I say. If you have a model that works better, demonstrate it. ANNs simply crush every competitor in real-world performance.
Modeling brains is a cool idea but it doesn't have any practical advantages unless it improves learning performance or has medical applications. Neither is true of any ANN competitor so far.
Honestly, I think we're much more likely to eventually understand the brain by first focusing on building working AI systems of our own design and then comparing them to the brain (with the help of our AIs!), rather than reverse engineering the brain first and building systems that mimic it.
He's been doing and funding neuroscience research for over a decade and has built some impressive predictive systems. Leading researchers, such as Andrew Ng (the Director of the Stanford Artificial Intelligence Lab), have stated that Jeff's publications have directly influenced their thinking.
That said, I agree machine intelligence will initially be the equivalent of building machines that fly without flapping, but Jeff's approaches don't forbid this (he often deviates from nature in his models, but tries to stay close).
Do you have any links to examples? I'm not aware of any systems built with HTMs that I'd consider "impressive" relative to what ANNs are doing these days, e.g. learning to play Atari games by watching the screen [1], approaching human performance on object classification in unrestricted images [2], or learning to execute simple python programs by reading the source code [3].
Oh don't get me wrong, the things being done with ANNs are incredibly impressive. Far more so than anything I've seen with HTMs to date. I was just pointing out that Jeff is putting his money where his mouth is. Also, much of the resurgence in interest in AI came about 12 years ago in large part due to Jeff's thinking and writings (which mostly consolidated existing ideas into a cohesive theory), as Andrew Ng and others have stated[1]. Jeff helped kick AI out of a plateau and avoid another AI winter.
For interesting things that Numenta has done, just check their website[2]. Their most popular product is likely Grok[3], which predicts things like server loads. It's nothing that ANNs couldn't do, and ultimately isn't that impressive at face value, but the implementation is interesting and has far-reaching implications (with regard to things like energy efficiency, among other areas).
I think the future holds incredible things for ANNs and we'll continue to see great breakthroughs with them, but HTMs hold a lot of promise too. ANNs have been around quite a bit longer and have orders of magnitude more researchers working with them, so I'd argue it's premature to dismiss HTMs just because a handful of researchers in Redwood City, CA haven't solved every AI problem within 12 years.
It was only 15-20 years ago that ANNs were considered to have peaked and been surpassed by Bayesian networks and support vector machines. How times have changed since then!
Beyond the Hawkins-driven hype, I do think, however, that it is worth attempting to model the brain more closely in ML. It may be that, practically, the whole HTM thing doesn't really work out -- but it will be interesting to see how it fails or succeeds as a research project.
Just like with NNs in the '90s, there is this tendency to write it all off when some other technology does better, but I have a vague hope that at least some of the ideas will be more widely useful and that it isn't a totally wasted effort.
I've noticed that Jeff Hawkins's claims quite frequently lack detail and scientific backing. I read his book "On Intelligence" and have followed his talks, and I have been inspired by them because his explanations are highly intuitive, but when it comes to demonstrating the awesomeness of his approach, the results are less tangible.
I chose deep learning as the topic of my graduation thesis, and lately I have had the opportunity to study the evolution of artificial neural networks. I think deep learning is having its descending period on the hype curve in an interesting way; it is interesting because the prominent figures of the field are the ones delivering it. If my memory serves me well, this is the third article brought to HN within the last month on the topic of "deep learning does not represent how the real brain works and is actually unsuitable for artificial general intelligence".
In one of his seminars, Andrew Ng talks about "the algorithm": he pointed out that the human brain may re-wire itself to handle different tasks; for example, a part of the brain responsible for sight can take over the task of hearing. The idea of ANNs was undisputedly inspired by research investigating how the brain actually works, and it has now set the state of the art in many areas where it has been applied, but to do this we had to resort to "ugly hacks" from the perspective of "the algorithm" -- autoencoders and restricted Boltzmann machines, to name two. It works, and it works very well, but we paid the price of taking a detour from finding "the algorithm".
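(For anyone who hasn't run into those "hacks": here is a rough sketch of what training one of them, a restricted Boltzmann machine with one step of contrastive divergence, looks like. The layer sizes, the fake binary data, and the learning rate are placeholders of my own, not from any particular paper.)

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1 / (1 + np.exp(-x))

# A tiny binary RBM: 64 visible units, 16 hidden units (sizes are arbitrary).
n_vis, n_hid = 64, 16
W = rng.normal(0, 0.01, size=(n_vis, n_hid))
b_vis, b_hid = np.zeros(n_vis), np.zeros(n_hid)

data = (rng.random((500, n_vis)) < 0.2).astype(float)  # stand-in unlabeled data
lr = 0.05

for epoch in range(10):
    # "Up": hidden activations given the data.
    h0_prob = sigmoid(data @ W + b_hid)
    h0 = (rng.random(h0_prob.shape) < h0_prob).astype(float)
    # "Down" and "up" again: a one-step reconstruction (the CD-1 part).
    v1_prob = sigmoid(h0 @ W.T + b_vis)
    h1_prob = sigmoid(v1_prob @ W + b_hid)
    # Nudge weights toward the data statistics and away from the reconstruction's.
    n = len(data)
    W += lr * (data.T @ h0_prob - v1_prob.T @ h1_prob) / n
    b_vis += lr * (data - v1_prob).mean(axis=0)
    b_hid += lr * (h0_prob - h1_prob).mean(axis=0)
```

Note that no labels appear anywhere above; that's the whole point of the trick, and part of why it was sold as closer to how brains learn.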
In my opinion, such banter as in the OP should not be taken as an aggressive stance against deep learning. Just because it is not "the" method doesn't mean it has no value whatsoever, but the article has a point: once the limitations of deep learning are well explored, we will need a new idea to push the field further.
State-of-the-art results no longer require unsupervised pretraining with autoencoders or RBMs. But back when unsupervised pretraining was more popular, top researchers rationalized it as more biologically plausible than standard nets trained with backprop: brains generalize by observing a large amount of data over a lifetime and can then quickly recognize new objects, and since pretrained nets aren't trained for one specific task, the hope was that they would generalize better and be a step closer to general intelligence.
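For concreteness, the recipe being described looks roughly like this (a minimal PyTorch sketch; the layer sizes and optimizer settings are placeholders I picked, not anything from a specific paper): first train an encoder/decoder pair to reconstruct unlabeled inputs, then bolt a classifier onto the pretrained encoder and fine-tune with ordinary backprop.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# --- Stage 1: unsupervised pretraining of one layer as an autoencoder ---
encoder = nn.Sequential(nn.Linear(784, 256), nn.Sigmoid())
decoder = nn.Sequential(nn.Linear(256, 784), nn.Sigmoid())
ae_opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

def pretrain_step(x_unlabeled):
    """Reconstruct the input; no labels are needed for this stage."""
    loss = F.mse_loss(decoder(encoder(x_unlabeled)), x_unlabeled)
    ae_opt.zero_grad(); loss.backward(); ae_opt.step()
    return loss.item()

# --- Stage 2: supervised fine-tuning, reusing the pretrained encoder ---
classifier = nn.Sequential(encoder, nn.Linear(256, 10))
clf_opt = torch.optim.Adam(classifier.parameters(), lr=1e-3)

def finetune_step(x_labeled, labels):
    """Plain backprop on the labeled task, starting from the pretrained weights."""
    loss = F.cross_entropy(classifier(x_labeled), labels)
    clf_opt.zero_grad(); loss.backward(); clf_opt.step()
    return loss.item()
```

You would run pretrain_step over batches of unlabeled data until the reconstruction loss flattens out, then switch to finetune_step on the (typically much smaller) labeled set.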
I met with the head of the Data Science institute at NYU a few years ago (not LeCun). I don't remember the specifics of our conversation, but I remember at one point I mentioned Jeff Hawkins (I had just read his book "On Intelligence") and I remember he responded with some comments that were politely dismissive of Hawkins.
Some of the other comments here seem to share the same sentiment. I really loved his book, but I haven't really followed his work for the past few years. Anyone know what he's been up to in ML/AI lately?
>The networks are not inefficient, they classify an image in 2ms and state of the art nets run real-time on your iPhone. Companies don't use "enormous numbers of servers" (I assume in training time?) to accomplish these tasks, they use a few dozen GPUs.
He's talking about the training time. Google Brain used 16,000 CPU cores and had a training set of 10 million images back in 2012[1]. It is no doubt substantially bigger now.
Actually, 2012 was when GPU training was just taking off. A team from the University of Toronto entered one of the larger competitions and won by a large margin by using GPU training. They used two GTX 580 GPUs over the course of about six days to train their network on over a million images.
I am by no means an AI professional or neurobiologist, but I have read recently that a major problem with neural networks is that they lack glial cells, which, among other things, move synapses around and help with synapse growth.
There are around 100 billion neurons in the brain and 100 trillion synapses. This indicates to me that synapses may be more important to biological neural networks than neurons are, and yet most ANN research has been focused exclusively on neurons. (I once asked my teacher which neurons should be connected to each other and suggested using a GA; that's not a bad idea, but it indicates to me that we have a very poor understanding of how synapses ought to work if we want to make better ANNs.)
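To make that GA idea concrete, here is a rough sketch (plain NumPy; the toy data, population size, and mutation rate are all made up by me) where each individual is a binary mask over a weight vector saying which "synapses" exist, and fitness rewards accuracy while lightly penalizing every connection kept:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 8 inputs, a binary label that only depends on inputs 0 and 3.
X = rng.normal(size=(200, 8))
y = (X[:, 0] + X[:, 3] > 0).astype(float)

def fitness(mask):
    """Briefly train a logistic model restricted to the connections in `mask`,
    then return its accuracy minus a small penalty per connection kept."""
    w = np.zeros(8)
    for _ in range(50):
        p = 1 / (1 + np.exp(-(X @ (w * mask))))    # masked weights = pruned synapses
        w -= 0.1 * mask * ((p - y) @ X) / len(y)   # gradient step on kept weights only
    p = 1 / (1 + np.exp(-(X @ (w * mask))))
    return (np.round(p) == y).mean() - 0.01 * mask.sum()

# The GA: a population of binary connectivity masks, evolved by selection,
# uniform crossover, and bit-flip mutation.
pop = rng.integers(0, 2, size=(20, 8))
for gen in range(30):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]                     # keep the fitter half
    pa = parents[rng.integers(0, 10, size=10)]
    pb = parents[rng.integers(0, 10, size=10)]
    kids = np.where(rng.integers(0, 2, size=pa.shape), pa, pb)  # uniform crossover
    flips = rng.random(kids.shape) < 0.1
    kids = np.where(flips, 1 - kids, kids)                      # mutation
    pop = np.vstack([parents, kids])

best = pop[np.argmax([fitness(m) for m in pop])]
print("evolved connectivity mask:", best)  # ideally keeps connections 0 and 3
```

Real neuroevolution work (NEAT, for example) is far more sophisticated, but the shape of the idea is the same: let search decide the wiring and let learning decide the weights.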
I disagree with the overly pessimistic view of this article, however. To say that our standard neural networks are not nearly as complex as biological ones would be accurate, but that hardly means ANNs are doomed forever; we just need more research into the things that matter.
Hawkins has no credibility in machine learning so I don't think people should really care about what he says. To the extent that anything he does say about the field makes sense, it isn't novel. Unfortunately our society reveres the super wealthy and treats them as experts on everything.
>Hawkins has no credibility in machine learning so I don't think people should really care about what he says.
Says the most respected name in the field of machine learning, Teodolfo.
Srsly. Why the hate, Teo? He merely said exactly what everyone from Michael Jordan to Andrew Ng has said: there's nothing very 'neural' about ANNs.
ANNs work. No doubt. But they are limited in what they can do. Jeff is merely pitching a new approach that is intended to solve some of those limitations.
But since you don't care what he has to say, I doubt you have a very firm understanding of what he has said.