Nematode Neural System Uploaded to a Computer and Trained to Balance a Pole (tuwien.ac.at)
142 points by r4um on Feb 10, 2018 | hide | past | favorite | 37 comments


I first learned about the C. elegans neuron-mapping project from this Society of Mind video:

https://www.youtube.com/watch?v=6Px0livk6m8

My immediate interest was in seeing the differences and similarities between the real and the simulated worm. I haven't spent much time searching for the resulting papers, but it's been 7 years since then and I'm not aware of any ground-breaking publications on the subject.

Unless I'm missing something massive, describing this paper as training the worm to "balance a pole at the tip of its tail" is highly misleading.

In this paper, the researchers use an external algorithm to tweak the parameters of a part of the worm's neural model until that part can perform a certain task. The neural circuit effectively serves as a controller for a mechanism that has nothing to do with the original worm. The task, the setup, the subset of the model, and the training algorithms are all chosen by the researchers.


From the summary, they’re not altering the topology of the network at all, just the connection strength between neurons.

While this isn’t the way a natural neural network would learn (I’m not sure a nematode can learn), it’s still interesting that you can take a copy of a natural neural network and force it to learn in this way.


I would expect there is an enormous set of random topologies that could be trained to balance a pole. Indeed, part of the elegant 'magic' of neural nets is that the topology is fairly irrelevant... the number of layers, the number of nodes, the manner in which they are connected... pretty much any configuration can get you into the mid-90s accuracy range on MNIST (and emulating a basic PID algorithm is simpler than MNIST). Of course, I'm referring to basic tasks; clearly topology matters a great deal for more sophisticated things.


I think their idea is to use biologically inspired network architectures. Looking at Fig1 of the paper[1], it seems they have drawn the schematic in an overly complicated way...

For example, the FWD and REV motor neurons are totally determined by the AVB and AVA neurons, so they can be left out. I would bet that, worked out fully, this reduces to some simple ANN architecture.

[1] https://docs.google.com/viewer?a=v&pid=sites&srcid=ZGVmYXVsd...


>I think their idea is to use biologically inspired network architectures.

It seems that way. This approach could have interesting applications in engineering. But I wish it weren't pitched to imply that machine learning and the activity of our brain are "the same on a fundamental level". (That phrase is a direct quote from the article, except it was posed as a rhetorical question there.)


I vaguely recall that experts consider artificial neural networks to be a very gross approximation of biological ones. They often state that one reason we don't have AI today is that we don't really know how the brain and the neurons it is made of work.

Then I wonder: how does OpenWorm deal with that lack of knowledge? Is there any chance that progress in modeling C. elegans could be used to improve machine learning?


ANNs aren't supposed to model biological neurons; they're more inspired by them. We actually know a great deal about how real neurons work, although the emergent effects keep us from learning much about the brain as a whole.

But modeling biological neurons is expensive; they're far more complex than a logistic function. Just modeling a worm is taking them an entire cluster, and they appear to be using a pretty simple model (the Hodgkin–Huxley equations way back from 1952). I don't think we'll ever simulate real neurons to detect cat photos. Instead, modeling entire brains might allow us to see how learning happens, which could be simplified into new algorithms for machine learning.
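For a sense of why this is expensive: the Hodgkin–Huxley model mentioned above can be sketched in a few dozen lines of Python. This is only an illustration with textbook squid-axon parameters, not the cluster-scale worm simulation the comment refers to — note that even one neuron requires integrating four coupled differential equations, versus one multiply-add and a squash for an ANN unit.

```python
import math

# Classic Hodgkin–Huxley parameters (squid giant axon, textbook values)
C_M = 1.0                      # membrane capacitance, uF/cm^2
G_NA, E_NA = 120.0, 50.0       # sodium conductance (mS/cm^2) and reversal (mV)
G_K,  E_K  = 36.0, -77.0      # potassium
G_L,  E_L  = 0.3, -54.387     # leak

def rates(v):
    """Voltage-dependent rate constants for the m, h, n gating variables."""
    am = 0.1 * (v + 40.0) / (1.0 - math.exp(-(v + 40.0) / 10.0))
    bm = 4.0 * math.exp(-(v + 65.0) / 18.0)
    ah = 0.07 * math.exp(-(v + 65.0) / 20.0)
    bh = 1.0 / (1.0 + math.exp(-(v + 35.0) / 10.0))
    an = 0.01 * (v + 55.0) / (1.0 - math.exp(-(v + 55.0) / 10.0))
    bn = 0.125 * math.exp(-(v + 65.0) / 80.0)
    return am, bm, ah, bh, an, bn

def simulate(i_ext=10.0, t_max=50.0, dt=0.01):
    """Forward-Euler integration; returns the membrane-voltage trace in mV."""
    v, m, h, n = -65.0, 0.0529, 0.596, 0.3177   # resting steady state
    trace = []
    for _ in range(int(t_max / dt)):
        am, bm, ah, bh, an, bn = rates(v)
        i_na = G_NA * m**3 * h * (v - E_NA)
        i_k  = G_K  * n**4     * (v - E_K)
        i_l  = G_L             * (v - E_L)
        v += dt * (i_ext - i_na - i_k - i_l) / C_M
        m += dt * (am * (1.0 - m) - bm * m)
        h += dt * (ah * (1.0 - h) - bh * h)
        n += dt * (an * (1.0 - n) - bn * n)
        trace.append(v)
    return trace

trace = simulate()
# Count spikes as upward crossings of 0 mV: a sustained input current
# of 10 uA/cm^2 drives the model into repetitive firing
spikes = sum(1 for a, b in zip(trace, trace[1:]) if a < 0.0 <= b)
```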


We believe that the underlying implementation (biological neuron vs logistic function) doesn’t matter as long as it has the fundamental properties from which intelligence can emerge. Like how you can build computers from gears, relays, vacuum tubes or silicon - as long as you have a transistor-like nonlinearity to work with, you’re good.


From what I understand, the complication is that each neuron behaves like a complex neural network by itself, so to get the same overall behavior you would need many more fundamental units.


It could be the other way around: evolution invented an imprecise and roundabout way of doing gradient descent.


There's a lot of work on artificial gene regulatory networks suggesting that the network topology, not the implementation details, is responsible for function. I think we learn more by starting with a simplified model and then figuring out why we fall short of our expectations.


It'll become interesting when we can teach the worm before "uploading" it, and the resulting NN already knows how to balance that pole without any further training. As is, the article sounds underwhelming.


The worm isn't "uploaded". Scientists painstakingly mapped out the physical structure of its neurons using some genetic engineering and lasers. This structure does not change from worm to worm.


I'm clear on that. It didn't prevent the authors from using the term anyway, and in their title no less, just to make sure not to confuse anyone.

EDIT: the author(s) of the press release used the word "upload". That term is nowhere to be found in the original paper, as far as I can tell.


Wait until we can 3D print a C. Elegans. A new kind of matrix printer.


Maybe the Great Filter (in the context of the Fermi paradox) is this: people printing their own organisms? (Only half-joking.)


More likely people uploading themselves into a virtual world, securing the planetary data center, and dilating time so that everyone feels they live forever in the manner of their choosing.


There’s something sad about a civilization closing its eyes and turning inward on itself instead of exploring the vast universe that it has been presented with.


“Conquering the galaxy is what bacteria with spaceships would do—knowing no better, having no choice.”

From Diaspora, by Greg Egan :)


The universe may be vast, but it's pretty empty.


What makes you think you can simulate an atom faster than an atom does?


The theory is you wouldn't need to simulate every atom any more than you do in a video game.


We're in a simulation.


A very well written article, simple to understand and short!


Neural network balances pole while programmers mumble something vague about C. Elegans. Non-story.


This uses a more accurate neuron model than your everyday neural networks.


Nevertheless, CartPole is a trivial RL task that requires only a few neurons to solve, much less hundreds. Since they're training it in silico, I'm not sure what this demonstrates other than that you can cudgel a vaguely spiking-like neural network into learning a simple task. It's not a C. elegans behavior.


CartPole solved with one neuron, no weights.

https://gym.openai.com/evaluations/eval_A7rFUDisQiOsADyvqYhV...
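The trick behind that evaluation is a bang-bang policy on the sign of an unweighted sum of the pole's angle and angular velocity. A self-contained sketch of the idea (my own re-implementation of the commonly published cart-pole physics, not OpenAI's code):

```python
import math

# Standard cart-pole constants as commonly published for the benchmark
GRAVITY, M_CART, M_POLE = 9.8, 1.0, 0.1
TOTAL_M = M_CART + M_POLE
LENGTH = 0.5                      # half the pole length
POLEMASS_LENGTH = M_POLE * LENGTH
FORCE_MAG, DT = 10.0, 0.02

def step(state, action):
    """One Euler step of the cart-pole dynamics; action is 0 (left) or 1 (right)."""
    x, x_dot, theta, theta_dot = state
    force = FORCE_MAG if action == 1 else -FORCE_MAG
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    temp = (force + POLEMASS_LENGTH * theta_dot**2 * sin_t) / TOTAL_M
    theta_acc = (GRAVITY * sin_t - cos_t * temp) / (
        LENGTH * (4.0 / 3.0 - M_POLE * cos_t**2 / TOTAL_M))
    x_acc = temp - POLEMASS_LENGTH * theta_acc * cos_t / TOTAL_M
    return (x + DT * x_dot, x_dot + DT * x_acc,
            theta + DT * theta_dot, theta_dot + DT * theta_acc)

def policy(state):
    """The 'one neuron, no weights' controller: push toward the pole's lean."""
    _, _, theta, theta_dot = state
    return 1 if theta + theta_dot > 0 else 0

state = (0.0, 0.0, 0.05, 0.0)     # pole starts tilted 0.05 rad
angles = []
for _ in range(200):              # one full 200-step episode
    state = step(state, policy(state))
    angles.append(abs(state[2]))
```

The controller chatters left and right around the upright position, keeping the pole well inside the 12-degree (about 0.21 rad) failure threshold.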


I rest my case. :)


This is very close to a P controller (like PID, but without the integral and derivative terms).


It's a more accurate neuron model of some pretty weird neurons. In nearly all organisms, the vast majority of neurons fire all-or-nothing action potentials ("spikes"). C. elegans neurons do not.


What a confusing title.

Tl;dr: an NN based on a real-life nematode learned a simple thing.


It's because "Worm Uploaded to a Computer" primes the reader to expect something about malware, but then "Trained to Balance a Pole" completely violates that expectation.


This is why I always read the HN comments first. Thank you.


Me too, I tend to decide whether to read an article based on the comments. I find I hardly ever read an article.


Thanks, we've updated the headline to clarify.


"Uploaded" no, copied into a perfect or near-perfect simulation.

A perfect copy of an organism is still not the organism, but a copy. Just like your twin brother is not you. You won't live forever in a computer.



