Hacker News

I first learned about the C. elegans neuron-mapping project from this Society of Mind video:

https://www.youtube.com/watch?v=6Px0livk6m8

My immediate interest was in seeing the differences and similarities between the real and the simulated worm. I haven't spent much time searching for the resulting papers, but it's been 7 years since then and I'm not aware of any ground-breaking publications on the subject.

Unless I'm missing something massive, describing this paper as training the worm to "balance a pole at the tip of its tail" is highly misleading.

In this paper the researchers use an external algorithm to tweak the parameters of a part of the worm's neural model until that part can perform a certain task. The neural circuit effectively serves as a controller for a mechanism that has nothing to do with the original worm. The task, the setup, the subset of the model, and the training algorithm are all chosen by the researchers.
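The setup described here (a fixed circuit whose weights an external optimizer adjusts until it balances a pole) can be sketched roughly like this. Everything below is a made-up stand-in, not the paper's model: the physics is crudely simplified, the "circuit" is a single linear unit, and the optimizer is plain random search.

```python
import math
import random

def simulate(weights, steps=200):
    """Run a crude cart-pole simulation driven by a fixed-topology
    linear controller; return how many steps the pole stays up."""
    x, x_dot, theta, theta_dot = 0.0, 0.0, 0.05, 0.0
    g, dt = 9.8, 0.02
    for t in range(steps):
        obs = (x, x_dot, theta, theta_dot)
        # Fixed topology: one linear unit. Only the weights are tuned.
        force = 10.0 * math.tanh(sum(w * o for w, o in zip(weights, obs)))
        # Deliberately simplified dynamics (toy constants, not real physics).
        theta_acc = g * math.sin(theta) - 0.1 * force * math.cos(theta)
        x_acc = 0.1 * force
        x_dot += x_acc * dt
        x += x_dot * dt
        theta_dot += theta_acc * dt
        theta += theta_dot * dt
        if abs(theta) > 0.5 or abs(x) > 2.4:
            return t  # pole fell over or cart left the track
    return steps

def random_search(trials=300, seed=1):
    """The 'external algorithm': perturb the weights at random and
    keep whatever candidate survives longest."""
    rng = random.Random(seed)
    best_w = [0.0] * 4
    best_score = simulate(best_w)
    for _ in range(trials):
        cand = [w + rng.gauss(0, 0.5) for w in best_w]
        score = simulate(cand)
        if score > best_score:
            best_w, best_score = cand, score
    return best_w, best_score
```

The point the comment makes is visible in the code: the circuit itself never learns anything; an outside search loop simply keeps whichever parameter settings happen to score well on a task the experimenter defined.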



From the summary, they’re not altering the topology of the network at all, just the connection strength between neurons.

While this isn’t the way a natural neural network would learn (I’m not sure a nematode can learn), it’s still interesting that you can take a copy of a natural neural network and force it to learn in this way.


I would expect there is an enormous set of random topologies that could be trained to balance a pole. Indeed, part of the elegant 'magic' of neural nets is that the topology is fairly irrelevant... the number of layers, the number of nodes, the manner in which they are connected... pretty much any configuration can get you to mid-90% accuracy on MNIST (and emulating a basic PID algorithm is simpler than MNIST). Of course, I'm referring to basic tasks; clearly topologies matter a great deal for more sophisticated things.
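The PID remark is easy to make concrete: a PID control law is just a linear function of three features (error, its integral, its derivative), so even a single linear unit can recover it exactly. A stdlib-only sketch with made-up gains and random training data:

```python
import random

# Hidden "teacher": a PID control law, linear in its three features.
KP, KI, KD = 2.0, 0.5, 1.0

def pid(e, e_int, e_der):
    return KP * e + KI * e_int + KD * e_der

# Training data: random (error, integral, derivative) feature vectors.
rng = random.Random(42)
data = [(rng.uniform(-1, 1), rng.uniform(-1, 1), rng.uniform(-1, 1))
        for _ in range(200)]

# A single linear unit; train its weights by stochastic gradient descent.
w = [0.0, 0.0, 0.0]
lr = 0.1
for epoch in range(500):
    for feats in data:
        pred = sum(wi * f for wi, f in zip(w, feats))
        err = pred - pid(*feats)
        for i in range(3):
            w[i] -= lr * err * feats[i]
# After training, w converges to (KP, KI, KD): the "network"
# has emulated the PID law exactly, with the most trivial topology possible.
```

Since the target is noise-free and linear in the features, any reasonable topology (including this degenerate one-neuron case) can fit it, which is the comment's point about topology mattering little for simple tasks.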


I think their idea is to use biologically inspired network architectures. Looking at Fig1 of the paper[1], it seems they have drawn the schematic in an overly complicated way...

For example, the FWD and REV motor neurons are totally determined by the AVB and AVA sensory neurons, so they can be left out. I would bet that, once worked out, this reduces to some simple ANN architecture.

[1] https://docs.google.com/viewer?a=v&pid=sites&srcid=ZGVmYXVsd...
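The reduction being suggested is easiest to see for linear units: a node whose output is a fixed linear function of its inputs adds no expressiveness and can be folded into the next layer's weights. A toy illustration (made-up weights, not the paper's circuit):

```python
# Two linear layers compose into one: an intermediate layer of purely
# linear nodes can be collapsed without changing the network's output.

def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def matvec(A, x):
    """Apply a matrix to a vector."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

W1 = [[1.0, 2.0], [0.5, -1.0]]   # inputs -> intermediate nodes
W2 = [[3.0, 1.0]]                # intermediate nodes -> output
x = [0.2, -0.4]

two_layer = matvec(W2, matvec(W1, x))   # network with the extra layer
collapsed = matvec(matmul(W2, W1), x)   # same network, layer folded away
# two_layer == collapsed, so the intermediate nodes were redundant.
```

With nonlinear activations the folding is no longer exact, but deterministic pass-through nodes like the ones the comment describes can still be eliminated from the schematic without changing what the circuit computes.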


>I think their idea is to use biologically inspired network architectures.

It seems that way. This approach could have interesting applications in engineering. But I wish it wasn't pitched to imply that "machine learning and the activity of our brain the same on a fundamental level". (This is a direct quote from the article, except it was posed as a rhetorical question there.)



