Did a PhD a few years back looking at the application of machine learning to neurosignal decoding (well, I started one anyway; dropped out after a year, for reasons the rest of this comment will explain).
Turns out researchers in this field are terrible at actually turning brainwaves into actionable data, but brilliant at using statistical trickery + general misunderstanding about AI to bullshit their way into getting grants.
Yes, there are a few genuinely exciting trials - like the work Shenoy did with the penetrating array and the robot arm - but the field as a whole is overwhelmingly bullshit.
This research in particular is worse than bullshit - it's borderline fraud. There is no "reconstruction" taking place. They're using ML to map the response from a test stimulus to some interpolated response from 50 training samples, arbitrarily assigning new sample images to these points in space and then passing it off as mind reading.
Same dumbfuck trick as the guy last year who pretended to be able to read which word people were thinking of from EEG, but with more layers of AI magic and crisp-looking images.
I mean, seriously, look at those images. They've somehow managed to recreate them with more clarity than the human subject's actual memory/imagination possibly could.
If there are any VCs or engineers here, please don't waste your time or money on this shit. If you do, I'll be sure to leave a rude comment on your HN article. :P
Ayyyyy fellow neurosignal hater, I wasted the first year of my PhD (out of 3) working on electroencephalography (EEG) signals, I'm forever bitter.
You see all these great papers on medical applications, but no one tells you it's a high-dimensional signal that contains one bit, or maybe a byte, of data at best. So you can do binary classification, or one-of-many classification (poorly).
And that's all you see in all studies:
Claim: "Learn to predict behaviour from EEGs"
Actual: "We learn to tell whether the patient is moving in any way or completely still with 60% accuracy."
Claim: "Learn to reconstruct images from EEGs"
Actual: "1-in-n classification of previously seen images, then we put a billion parameters on top to go from the binary vector to the mean of the training images, with no actual information used from the original signal." (Sketched in code below.)
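To make that concrete, here's a toy numpy sketch of the pipeline being described: classify the brain response against the training set, then hand back (an average of) the training images. All shapes and data here are made up for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    # Hypothetical setup: 50 training brain responses, one per training image.
    train_responses = rng.standard_normal((50, 1024))    # 50 x feature dim
    train_images    = rng.standard_normal((50, 64, 64))  # the 50 seen images

    def pseudo_reconstruction(test_response):
        # Step 1: 1-in-50 classification -- nearest training response wins.
        dists = np.linalg.norm(train_responses - test_response, axis=1)
        k = np.argmin(dists)
        # Step 2: the "reconstruction" is just the matched training image
        # (or an interpolation of a few of them). No pixel-level information
        # from the test brain signal survives this step.
        return train_images[k]

That second step is where the crisp images come from: a generator trained on the image classes, not on anything the subject's brain produced.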
There should be the ability to grant-hunt: prove your opponent's work is bullshit and get the money assigned to your project. Make hunting down false research claims and grant hacks profitable for actual researchers and experts. For example, prove the irrelevance of a piece of research or of previous work and get a percentage of the grant.
This is a really good idea. At my previous job I was handed a scientific paper to implement for an image-classification algorithm, hand-picked by the team. Within a month the entire paper turned out to be a fabrication; that's 160 man-hours gone to waste. They still have the documents and the code as definitive proof, but it's worthless.
Do you agree that the 'consumer grade' devices that claim mind control, and seem to work, actually respond to (facial) muscle movements - rendering them useless in locked-in patients?
There are definitely external arrays that can work in locked-in patients, but even when the stars align you'll usually be looking at something like 10-20 bits of data per minute.
For most people it'll be much less than that, and for some it won't work at all.
Generally speaking, these arrays are greatly outperformed by eye tracking, or even a nurse holding an A4 piece of paper and responding to blinks/grunts.
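For a sense of scale, the standard Wolpaw information-transfer-rate formula shows how quickly accuracy eats into that budget. A back-of-envelope sketch with made-up but plausible numbers:

    import math

    def wolpaw_itr(n_targets, accuracy, selections_per_min):
        """Wolpaw ITR in bits/min for an n-target selector."""
        n, p = n_targets, accuracy
        bits = (math.log2(n) + p * math.log2(p)
                + (1 - p) * math.log2((1 - p) / (n - 1)))
        return bits * selections_per_min

    # e.g. a 6-target speller at 80% accuracy, 10 selections per minute:
    print(wolpaw_itr(6, 0.80, 10))  # ~14 bits/min

At ~5 bits per character, that's a couple of characters a minute, which is why the nurse with the A4 sheet wins.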
This in and of itself wouldn't be a problem, as the tech is in its infancy, but the real issue is that researchers are focused on bullshitting the public and selling science fiction rather than improving the existing technology.
Nobody understands how or why it works, and instead of learning about the brain we're just inventing new ML trickery.
I don't have enough experience with those to answer I'm afraid, I guess it'd depend on the number of electrodes and their placement.
The big problem with those recordings is the skull, which acts as a low-pass filter over the signal that reaches your electrodes - more so than the equipment (within reason). So it could go both ways.
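To illustrate the low-pass point, here's a toy scipy sketch; the cutoff and sampling rate are illustrative numbers, not physiological constants:

    import numpy as np
    from scipy.signal import butter, sosfiltfilt

    fs = 1000                      # Hz, hypothetical sampling rate
    t = np.arange(0, 1, 1 / fs)
    # Toy "cortical" signal: a slow rhythm plus high-frequency detail.
    cortex = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)
    # The skull/scalp behaving roughly like a low-pass filter:
    sos = butter(4, 50, btype="low", fs=fs, output="sos")
    at_scalp = sosfiltfilt(sos, cortex)
    # The 120 Hz detail is mostly gone by the time it reaches the electrode,
    # no matter how good the amplifier behind it is.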
Yeah, I thought the same after seeing this. It's kind of a fun use case for diffusion models in this context, but as a scientific paper it seems oversold. It's certainly the kind of clickbaity content designed to attract lots of retweets.
I only skimmed the paper, but from what I understood, it is essentially a diffusion model pre-trained on a handful of classes. The brain information is then largely used to "pick which class to generate a random image from".
The paper itself even picked the "better" examples. The supplemental materials show many more results, and many of them are just that, a randomly generated image of the same object class the person was seeing (or, the closest object class available in the training data).
"Reconstruct" seems a pretty bad word choice. I think the results are presented in a way vastly overselling what they actually do. But that's a worrisome trend in most of AI research recently, unfortunately.
(I have a PhD in a field of Applied Machine Learning, and I work in Computer Vision at a university.)
VCs' due diligence is not "does it actually work" but "can it be pawned off to public coffers" (updating Medicare "best practice" guidelines, inserting this tech into some government procedure, maybe in criminal justice, etc.), so they might still be in the running, especially with bureaucrat-hypnotizing, press-magnet "tech" like this. The money printer can keep up illusions of scientific progress for longer than we think.
> They're using ML to map the response from a test stimulus to some interpolated response from 50 training samples, arbitrarily assigning new sample images to these points in space and then passing it off as mind reading.
Could you clarify this? For example, if they trained the model on me visualizing a bear, a fish, and a bird, and then the neural net still outputs "horse" when I visualize a horse, I'd be impressed by this. And if they're going directly to images, I'll be more impressed if they get "black horse" and "brown horse" correctly. That's true even if the idea of reconstructing a specific image is kind of bullshit.
Is this what's happening? Or is their work just turning previously seen inputs into basically identical outputs?
> if they trained the model on me visualizing a bear, a fish, and a bird, and then the neural net still outputs "horse" when I visualize a horse
Well, the failure-cases figure says it does not work if "training and testing datasets do not overlap". So it'd just find the closest trained class and then generate a new image from that class (i.e., in your example, if the bear looks more similar to a horse than the fish or bird do, it'll generate a random bear).
> We assume the failure cases are related to two reasons. On one hand, the GOD training set and testing set have no overlapping classes. That is to say, the model could learn the geometric information from the training but cannot infer unseen classes in the testing set.
Now, if all the GT results had been fails, it might be reasonable to conclude that it doesn't work if the sets don't overlap. However, there are only 6 that they graded as fails. (A few more look iffy to me.) If I'm reading their statement correctly, there was no overlap between the two sets:
> This dataset consists of 1250 natural images from 200 distinct classes from ImageNet, where 1200 images are used for training. The remaining 50 images from classes not present in the training data are used for testing
And if I'm understanding this correctly, that makes the results look sort of impressive. I mean, at the very least, the model is getting the right class from the testing set most of the time, even though that class wasn't in the training set. That's ... not ... nothing?
On the other hand, it seems they cherry-picked the best of five subjects for the results they show in the supplementary, which is ridiculous.
> Subject 3 has a significantly higher SNR than the others. A higher SNR leads to better performance in our experiments, which has also been shown in various literature.
Assuming these results hold, I wonder what the legal implications would be. In a criminal prosecution or investigation, can you force someone to just think about the question while stealing the image straight out of their skull? Even worse, what happens when it's only 85% accurate but 100% convincing to juries?
In the US at least, I think 4th amendment protections obviously apply (though with this bench, don't count your precedents before they hatch). Elsewhere in the world, even fairly liberal freedom loving jurisdictions, the question is still up in the air imo. That should give everyone pause.
The information age began with a single question the answer to which is still unfolding: .-- .... .- - / .... .- ... / --. --- -.. / .-- .-. --- ..- --. .... - ..--..
Thanks for this comment, I also wasn't aware of the statistics (nor of whether I'm a full aphant / aphantasiac / aphantastic [1], or within the 3.9% with dim/vague mental-image-forming capacities). I also wasn't aware that this is a somewhat new discovery, with research starting ~2015.
> ...in response to reading a frightening story and then viewing fear-inducing images found that participants with aphantasia, but not the general population, experienced a flat-line physiological response during the reading experiment, but found no difference in physiological responses between the groups when participants viewed fear-inducing images (wikipedia)
It seems that aphantasia is much more common among programmers. It may have something to do with the fact that autism and mathematical ability are correlated, or the further fact that autism and exceptional drawing ability are correlated too. But this is too complicated a tangle to draw simple conclusions from.
That's an interesting question, because as far as I understand we still don't know if the issue is that people don't create the image in their minds, or that the imagined "image" is not perceived.
That being said, even with those protections perfectly applied (as most laypeople, not jurists/LE, interpret them), there is an involuntary aspect to what may be being envisaged, which would render something like this extraordinarily invasive.
I'm 100% certain this will be abused as early and often as possible.
I'm invoking the first ever telegraph message both to highlight the parallels of having a civilization changing technology whose implications are not yet grasped, and to highlight the telegraph as the start of encoding information digitally thus beginning the information age. The religious component of the quote is a historical artifact, not a literal endorsement of religion.
Given. I'm just using the opportunity to draw attention to the fact "I wish I could see what's going on in your head" is a uniquely human desire, not the least of which stems from the desire to control.
It is telling that the figure who is theorized to have that insight "loves us by fiat", and is argued either not to exist, or not to be attainable in that argued perfection by humans.
The end result hopefully being to inspire in the reader an unambiguous understanding that whatever becomes of this technology, it'll be humanity's problem foisted on itself.
We don't have a great track record in handling such things.
I would be more worried about it being used to reveal information than being used in court as evidence. If the trial is anywhere near fair, it will be trivial to argue that you can never prove whether it was a real memory.
> can you force someone to just think about the question while stealing the image straight out of their skull?
Can the person being mind read, if aware of the technology, force their mind to only output unrelated content (e.g. shrek fighting zombies) or vividly imagine the accuser committing heinous crimes?
...but if it's no longer restricted to the visual cortex and they can extract the kind of horrific imagery as in the movie, I don't really want to see it.
I really like this film, especially with my photography hat on. A lot of the imagery taken from the mind of the serial killer is actually based on surrealist art, and some of the cinematography is superb, e.g. the sequences filmed in the Namibian desert
Tarsem Singh's next movie The Fall is visually similar (though in another genre), so watch it too if you didn't already: https://www.imdb.com/title/tt0460791/
From the aesthetic/cinematography side of things, it did stick with me for a long time; I haven't re-watched it since the early noughties and I still remember lots of scenes. It is just hard to take in that some people might experience similar internal imagery, and that the very slight possibility exists that they also act upon it.
I saw that episode and I hated it precisely because it didn't really explore the idea. There was this profound, interesting, thought provoking premise which it completely relegated to the background in favor of an unchallenging police procedural.
> Firstly, we learn an effective self-supervised representation of fMRI data using mask modeling in a large latent space inspired by the sparse coding of information in the primary visual cortex. Then by augmenting a latent diffusion model with double-conditioning, we show that MinD-Vis can reconstruct highly plausible images with semantically matching details from brain recordings using very few paired annotations.
For those of you who are overly concerned/excited about this, keep in mind that fMRI scans are somewhat looked down upon in the neuroscience research community for the dubious misinterpretations drawn from their findings (see the dead salmon fMRI study).
It looks like PET imaging is what many are focusing their research on for better insight / "dynamic analysis" of the brain.
I'm sorry, but that's utter nonsense: fMRI is a perfectly legit technique (and it's not exactly interchangeable with PET).
It is certainly true that there have been dubious and over-interpreted fMRI studies. As a technique, fMRI sits at a really awkward nexus where it is easy to do something but much harder to do it well. Nevertheless, it can be done well. It just involves thinking hard about a lot of thorny problems, ranging from the signal itself (fMRI measures blood oxygenation, which is linked - indirectly - to neural activity), to how it's measured, to how to analyze large volumes of data with unusual spatial and temporal correlation structures, plus the substantive research question on top!
The only people who systematically look down on fMRI are insecure first year grad students who think their work is more "science-y" because it involves lasers or whatever rather than asking people about feelings. I guarantee you that many other fields (cough single-cell seq cough) have similar statistical problems; it's just fMRI hit them first and hit them hard: salmon-like mistakes are rare these days.
As for PET, the ability to track things other than blood oxygenation is awesome, but the resolution is much worse and the radioactivity also limits its use. There is no magic tool; it's combinations that let us understand stuff.
I was stating what I perceived to be the general consensus among neuroscientists on fMRI (which, by the way, makes the 'nonsense' label come off as at least unnecessary, if not rude). And since PET imaging seems to appear in every study where various rates are measured while subjects perform specific mental tasks (glucose metabolism during RAPM testing was one, IIRC), or even in studies on personality traits, I erroneously concluded that fMRI had been 'grown out of', so to speak.
The “look down on it” bit hit a nerve because it is sort of true: there’s an archetype of bro-y “I do _real_ science” first year grad students who do think that. They (usually) grow out of it eventually, but usually not before demoralizing a classmate or two. I don’t think this is the general consensus among most neuroscientists though (I am one, but do neurophysiology rather than fMRI). Jack Gallant’s group, which I linked below, has produced a bunch of very smart people. There are some fMRI folks in the National Academy of Sciences, at the NIH, etc. It’s the real deal.
I’m curious where you’re seeing newer glucose PET stuff. We have a radiochemistry group downstairs and I thought most of the excitement was now custom tracers for specific receptors.
The signal measured by fMRI is definitely related to neural activity, but the relationship is indirect and sometimes subtly complicated.
Neurons compute and communicate electrochemically by controlling how ions (charged particles) flow across their membranes. At "rest", the inside of a neuron is slightly negative (-70 mV) relative to its environment. However, its surface is studded with channels that let different types of ions into/out of the cell, as well as "receptors" that can open or close these channels. Some are opened by specific chemicals, which can bring the cell's membrane potential closer to zero (e.g., by admitting sodium ions). Once it gets above about -55 mV, other channels that are sensitive to voltage take over, and produce a wave of activity that shoots down the axon and eventually causes the release of chemicals that affect other neurons, muscles, etc. This is what actually does the work in the brain, but it's not what fMRI measures.
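As a cartoon of those dynamics, here's a toy leaky integrate-and-fire neuron in Python; the constants are illustrative round numbers, not measured values:

    # Toy leaky integrate-and-fire neuron (all constants illustrative).
    dt, tau = 0.1, 10.0                               # ms
    v_rest, v_thresh, v_reset = -70.0, -55.0, -75.0   # mV
    v, spike_times = v_rest, []
    for step in range(2000):                          # 200 ms of simulated time
        i_in = 2.0 if 500 <= step < 1500 else 0.0     # injected drive (a.u.)
        v += dt / tau * (v_rest - v) + dt * i_in      # leak toward rest + input
        if v >= v_thresh:                             # voltage-gated takeover
            spike_times.append(step * dt)             # "spike" shoots down axon
            v = v_reset                               # then the cell resets
    print(len(spike_times), "spikes")                 # a handful, during input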
Instead, fMRI exploits the fact that this process is energetically expensive: neurons need oxygen and sugar (glucose) to do all that work, and so they need to be plumbed into the vascular system so those can be delivered. The signal fMRI measures is called the BOLD signal, for Blood Oxygenation Level Dependent. The magnetic properties of a hemoglobin molecule carrying oxygen are a bit different from a deoxygenated one's, and thus they can be told apart with some clever physics and signal processing.
Based on what I said above, you might expect that fMRI thus measures the reduction in oxygenation, but it actually turns out to be a lot more complicated than that. Levels do dip when nearby neurons are active, but only briefly. After that, they shoot up, above baseline levels: more blood--and more oxygen--are delivered to the formerly-active area.
This is mostly what fMRI measures. The neurovascular coupling seems to vary between brain states and maybe even brain areas; it's affected by different health conditions and drugs too. It's also limited by the structure of the vascular system. People occasionally find that veins appear to be heavily involved in cognitive tasks, sometimes even more than the brain itself, but that's just because they're delivering the blood.
Despite all this, the mechanisms are getting better understood and you can certainly interpret a BOLD change as suggesting that something is happening in that particular region, even if it's not clear exactly how it's implemented.
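The standard way to connect the two stories quantitatively is to convolve a neural event train with a canonical hemodynamic response function (HRF). A rough numpy/scipy sketch, using the conventional double-gamma shape (the parameters are textbook defaults, nothing from this thread's paper):

    import numpy as np
    from scipy.stats import gamma

    tr = 1.0                                   # seconds per fMRI sample
    t = np.arange(0, 32, tr)
    # Canonical double-gamma HRF: peak ~5 s after the event, then undershoot.
    hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0
    hrf /= hrf.sum()

    events = np.zeros(200)                     # 200 s of "neural activity"
    events[[20, 80, 140]] = 1.0                # three brief bursts
    bold = np.convolve(events, hrf)[:200]      # the sluggish signal fMRI sees

The slow, delayed bumps in `bold` are what the scanner picks up: a blurred, seconds-late echo of millisecond-scale events.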
What is the meaning of this language in the "5. Results" section of the paper:
> Our main results are based on GOD which has no overlapping classes in the training and testing set. The training and testing were performed on the same subject, as individual differences remain a barrier when decoding at the group level [2,16,21,32,34]. To compare with the literature, we report results from Subject 3 here and leave other subjects in the Appendix.
What training and testing was performed on the same subject? Are they reading people's minds or just overfitting a model on purpose? This paper seems to suggest an unbelievably implausible result, with absolutely stunning, society-shattering implications, so I'm assuming it's either wrong or misleading, given how casually it's tossed onto arXiv...
They could be overfitting somehow, but I don't see how you get that from the quoted paragraph. Of course they need to use the same subject (i.e. human brain) for training and testing. Otherwise one would need to assume that different people's brains represent visual concepts in the same way.
This is super cool, but it seems pretty far from "mind reading".
It's not reading what the person's brain is interpreting the image as, but rather what they're currently seeing. I wonder how different these signals are.
It's a pretty big difference, since the only signal they're getting is what the eyes are currently seeing. Essentially they're using a $1 million fMRI to poorly do the job that a $5 camera could do.
This is still super cool, but not quite our wildest scifi dreams.
Disclaimer: what I'm about to say is just a vague recollection of content that had already been pre-digested into layperson science news. There's probably a ton of loss in this signal.
From what I understand, memory cells literally reproduce the original brain signal just as if it were coming from outside stimulus. Like playing the brain waves off a tape recorder.
>From what I understand, memory cells literally reproduce the original brain signal just as if it were coming from outside stimulus. Like playing the brain waves off a tape recorder.
I can't recall pictures of events that happened yesterday with anything resembling what it was like to be there and experience seeing the events in person. I can construct images of what things might've looked like, but the creative process of recalling that memory feels awfully different than experiencing events in real time.
There is certainly some evidence for "replay", but it's generally thought to be involved in "writing" it into longer-term storage rather than the storage mechanism per se.
OTOH, there are some cool results with Hopfield-like networks where applying part of a pattern (somehow) cues the brain (or network) to produce the rest of it.
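That pattern-completion behaviour is easy to demo in a classic Hopfield network; a minimal numpy sketch (sizes and patterns are arbitrary):

    import numpy as np

    rng = np.random.default_rng(1)
    patterns = rng.choice([-1, 1], size=(3, 100))   # three stored "memories"
    W = (patterns.T @ patterns) / 100.0             # Hebbian outer products
    np.fill_diagonal(W, 0)                          # no self-connections

    cue = patterns[0].copy()
    cue[50:] = rng.choice([-1, 1], size=50)         # corrupt half the pattern
    state = cue
    for _ in range(10):                             # let the dynamics settle
        state = np.sign(W @ state)
        state[state == 0] = 1
    print((state == patterns[0]).mean())            # typically 1.0: recalled

Feed in half a memory and the network usually falls into the attractor holding the rest of it - the cartoon version of cued recall.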
This is not as advanced as some of you might think.
You have to understand how diffusion models work to "reconstruct" images.
You also have to understand that the model has to be trained to match pictures/brain scan for each individual.
But this kind of approach can certainly be useful, especially with people with motor impairment after a stroke or accidental brain damage.
For some reason I'm having trouble getting the full study. Anyway, I'm no AI specialist, so I don't know if reading it would answer my question, but: is this generalizable? E.g., can they reconstruct an arbitrary visual stimulus, or just one that appeared in the training data?
If this is a case where they showed a patient an image of a giraffe 1000 times and mapped their brain, generated output, then backpropagated, etc., and now they can recreate a decent image of a giraffe when shown one of the images from the training data: that's not that impressive. I mean, it's cool and interesting, but it doesn't sound remotely groundbreaking, because it's not actually mapping arbitrary inputs to arbitrary outputs.
But if they can reproduce arbitrary visual stimuli this is pretty damn incredible, to me at least.
Yes - notice how much variance there is between the images. They are the same types of buildings, or similarly sized animals, but the details and shapes can be wildly different.
It's pretty clear what's happening is fMRI meaning -> Stable Diffusion image generation. Now this _is_ cool, but it's important to understand that all that's happening is that the natural-language input to Stable Diffusion has been replaced with fMRI input. I.e., the image is still generated by Stable Diffusion, not by you; you are just inputting meaning by visualising a separate image, and that meaning is abstract enough to be coarsely learned through an fMRI. Ultimately, though, the image is not produced by your brain, which will probably be more obvious to any of the test subjects.
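Schematically, the swap looks like this - a toy torch sketch where every component is a stand-in (the real paper uses a trained U-Net and a learned fMRI encoder; nothing here is their actual API):

    import torch
    import torch.nn as nn

    embed_dim, n_voxels = 768, 4096                # hypothetical sizes
    text_encoder = nn.Linear(512, embed_dim)       # stand-in for CLIP text
    fmri_encoder = nn.Linear(n_voxels, embed_dim)  # learned per subject

    def sample(denoiser, cond, steps=50):
        # Generic conditional diffusion loop: the sampler only sees a
        # conditioning vector and doesn't care where it came from.
        x = torch.randn(1, 3, 64, 64)
        for t in reversed(range(steps)):
            x = x - denoiser(x, t, cond)           # placeholder update rule
        return x

    denoiser = lambda x, t, cond: 0.01 * x         # stand-in for a U-Net
    img_from_text  = sample(denoiser, text_encoder(torch.randn(1, 512)))
    img_from_brain = sample(denoiser, fmri_encoder(torch.randn(1, n_voxels)))

Same generator, different conditioning source - which is why the output looks like Stable Diffusion art rather than anything retinal.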
Yes, how do we know that visual rather than verbal information is even being processed? I can get a decent image from Stable Diffusion or DALL-E from a pure text prompt, and if that's added to a very basic scribble and we do img2img, a much better result can be produced - but it might have little to do with what the person is actually seeing.
It's a good point, but actually a fairly simple one to test: try the same model on two different test subjects with no overlapping languages (preferably with very different languages that don't share many constructs).
> You also have to understand that the model has to be trained to match pictures/brain scan for each individual.
this was the impression I had and I think some of the commenters here should note this, assuming I am understanding correctly:
this technique only works on a training set that comes from the same person. That is, you can't take a model you trained on one person and apply it to someone else; their fMRI patterns would be completely different, since it's just lining up pre-canned photos with various blotches and patterns seen in the scan.
That's what it looked like to me, anyway, if I'm getting it wrong please clarify.
Yeah - clearly they have some dimensionality-reduced classifier for what's in the underlying MRI data, but using Stable Diffusion to project it just seems odd.
"The MRI of your brain tells us you're looking at a building with a tower of some sort, so here's an artist's impression of a building with a tower"
1) A model has to be trained separately on each person's brain signals -- there is no universal model
2) The set of trained images is small (e.g. 50), so about six bits of information (log2 50 ≈ 5.6) are enough to specify which image the person is thinking of... it is not really the AI imagining the image from scratch, but an interpolated set of those 50 images diffused together.
If my understanding is correct, and the information retrieved really is only around six bits, the use case for this is very limited. Underwhelming.
Some of these results seem to be picking up on general abstract concepts more so than actual visual cues. For example, one sample has brightly colored wooden park benches as input, and some generic brown wood furniture as output. Perhaps a strange artifact of training biases, but a little disturbing at a glance. Extracting current visual data out of one's brain is one thing, but being able to potentially identify abstract thoughts is a whole different level of invasiveness.
So there's this philosopher Daniel Dennett. In broad strokes, he's generally liked by philosophically minded scientists and disliked by the philosophy community. His position is that we will slowly chip away at the magic of consciousness as we get better and better tools to scientifically explore the brain. Consciousness is not some magical force that enters us but just a giant bag of tricks that our brain plays. Furthermore, the "hard problem of consciousness", i.e. "what it is like" to be something, is not immune to the scientific process.
I've always been deferential to his viewpoint, but this is the first time his arguments have felt justified.
It's surprising, but in hindsight it makes sense to me that when you look at a specific representation of an object, your brain internally kind of "draws a picture" of an abstract representation of that kind of object. You can probably think of the location of 100 trees, but it's pretty ridiculous to think that your brain actually has pictures of 100 trees in 100 different drawers. You look at a tree and your brain draws a kindergarten picture of a tree. Here lie the Platonic forms?
> The consciousness is not some magical force that enters us but just a giant bag of tricks that our brain plays.
That's kind of a straw man of what Dennett's opponents argue. No philosopher seriously says that consciousness is a magical force entering the brain. Rather, they make arguments that materialism (physicalism) fails to account for consciousness, whatever that entails for metaphysics or epistemology. Some are naturalists, but not physicalists. Some are idealists. Some, like David Chalmers, propose a kind of property dualism. Others might be neutral monists, or think we're cognitively closed to the solution. Or panpsychists, where everything has a bit of consciousness.
Materialism is a metaphysical assumption. We don't know the truth of what reality fundamentally is (it could be a simulation for all we know). We also don't know how to bridge the subjective/objective divide, as Thomas Nagel has argued. Another way of framing the argument is that the objective world is an abstraction across many subjective experiences, where consciousness is abstracted in favor of mathematical forms. That works well for describing/predicting patterns in nature we observe, including the brain, but then when you turn it around to explain the mind, it runs into difficulties, since the abstracted-away subjectivity has been left out.
There's no need to muddy the waters with the word "magic". We don't know everything, and some things we might never know. We do know that we have experiences of color, sound, taste, smells, feels, dreams, inner dialog and what not. That's not magic, it's just how we experience being in the world, whatever the world and our experiences are.
I love philosophy for the fun discussion, but it bothers me when folks claim there is more than physics to something while we know so little about everything. Yes, it could be true, but coming from someone so unskilled in the sciences, there's no real basis for making those claims.
It is like someone making conjectures about the properties of prime numbers while knowing almost no mathematics.
No different than claiming everything can be explained by physics. The basic argument against that, for consciousness, is that you don't get sensation (colors, tastes, feels) from physical properties, because they lack those sensations. "Physical" meaning our science of how the world works - which is abstract and mathematical - and not the world itself, to avoid metaphysical assumptions/circularity.
> consciousness is that you don't get sensation (colors, tastes, feels) from physical properties
There is no unexplained magic here. Are you saying the human experience of the world breaks cause and effect? Color is our interpretation of the frequency of light; taste, the chemical composition; and if by feel you mean touch, our skin can resolve nm-scale bumps.
To say something is unexplainable is a cop-out, especially for the folks who say "there has to be more" without showing why; that's unscientific.
While I am certain there is nothing (too) special about consciousness, it is still a mixture of a large number of inputs (nerves, senses) and complex processing machinery dealing with constant chemical processes, some of which respond slowly to chemical changes (like the hormone TSH, which takes a few weeks to "stabilize" given stable FT3/FT4 levels). Finally, add all of that up over an entire life to that point.
Basically, we'll make some statistical conclusions, but we won't ever get fully repeatable, or even measurable, conditions to fully understand it. Perhaps if we go full Nazi again and experiment on live, unwilling participants.
Lest people forget, even with the rest of medicine and much simpler organs we have directly poked and prodded, we are still far away from fully understanding them, and the best of our knowledge borders on "in 90% of cases".
If we put aside questions of how to collect the data, does this not suggest that we would be able to "read minds" (intentionally left vague) with an fMRI?
I think it will be similar to how diffusion-based image generation currently relies on what the model is trained on.
This might work well for average folks, but for anyone sufficiently out of the norm / with expertise/mental hardware outside of the bounds of the trained model, the results will be subpar / useless.
In my slightly uneducated opinion, this will only be possible once there's a solid understanding of how the human brain works - which, if you think about it (ha!), has much more exciting implications than mind reading; it could even be a step toward, or the resolution of, AGI.
Besides, for a large part of the 8 billion, I would suppose their Google search history about personal struggles is pretty damn close to mind reading.
...In a world in which the trends are ghastly, even in the territories with the most advanced and oldest cultures. I witnessed the latest attacks on Civilization and Dignity just in the last 24 hours (and they were also about intrusion).
> 141 – the number of countries in which Amnesty has reported on torture or other ill-treatment in the past five years. In some countries it’s a rarity, in others it’s widespread
We hope that with «here we are» you did not mean "you personally do not see it in the street".
--
¹In which you will, by the way, immediately notice on the maps that India, 1.4 billion people (currently the most populated country, or close to it), has not signed the Convention against Torture. In local legislation torture is punished as violence, but «the offense attracts no particular relevance if the crime is committed by a police officer» (see http://www.humanrights.asia/tortures/torture-in-india/ ). Recent examples: 2020, Covid lockdowns:
> By now, everyone has heard of the tragic deaths of P. Jayaraj and J. Benicks, a father-son duo in a small town in Thoothukudi. Jayaraj, 58, was arrested by the police following an altercation with them on keeping his son's mobile phone shop open in violation of lockdown rules. After Benicks was also taken into custody, the two were mercilessly thrashed to death. // Being found guilty of the 'offence' of keeping a shop open during the lockdown [...]
> The Tamil Nadu Police has acquired notoriety over the decades for employing torturous methods for law enforcement. During my tenure as Chief Justice of the Madras High Court, several cases in this regard were brought to the court. But this issue is not restricted to Tamil Nadu alone. Torture is, in fact, an integral part of police culture all over the country. Indeed, it would not be amiss to argue that this culture in India today is reminiscent of the brutality of the colonial police forces that we are so keen to forget [...] The data on torture show that it is not only an integral part of India’s policing culture; in some investigations (such as terror cases), it is treated as the centrepiece
I'd really like to see the mouse (or other nonhuman animal) version of this experiment.
In the human experiment we can see how the input and output represent the same categories of object. I think it would be really interesting to see to what extent a mouse's perceptions can be mapped onto human categories such as "fire truck" or "baseball game."
I don't think being able to scan this is completely new. Perhaps this specific technique is, but I've heard of this being done at least once in the past decade.
Now we just need a way to encrypt our brain waves, lol. I'm reminded of the Haruki Murakami book Hard-Boiled Wonderland and the End of the World, and the Keanu Reeves film Johnny Mnemonic.
Jack Gallant's group at UC Berkeley has done a lot of interesting fMRI decoding. The approach in their earlier papers (e.g., Kay et al., 2008 [0]) is neat because it uses some knowledge of how early visual areas represent information, rather than just flinging a decoder at it. They later extended this to movies (Nishimoto et al., 2011 [1]), and have subsequently moved a bit away from sensory information to "semantic" decoding. Jack has a TED talk about this [2] and Kendrick Kay has written some News and Views/Perspective pieces [e.g., 3] that might be interesting too, as they link to other groups' work.
I like their approach because it ties into neuroscience rather than just throwing ML at the wall and seeing what sticks, but there's also a huge cottage industry of "Blackbox ML" + MRI to do decoding. My sense is that this has moved slightly out of the limelight and into the stage where people incrementally improve it with newer ML techniques like GANs [4] and diffusion (this link), but it might just be me.
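For flavor, the Kay et al. style "encoding model" approach can be sketched in a few lines: fit a per-voxel linear readout of image features, then identify a new image by which predicted response pattern best matches the measured one. Everything below is simulated stand-in data, not their actual features or pipeline:

    import numpy as np

    rng = np.random.default_rng(0)
    n_imgs, n_feats, n_voxels = 120, 300, 50
    X = rng.standard_normal((n_imgs, n_feats))       # e.g. Gabor-ish features
    W_true = rng.standard_normal((n_feats, n_voxels))
    Y = X @ W_true + 0.5 * rng.standard_normal((n_imgs, n_voxels))  # "voxels"

    # Fit ridge-regression encoding weights on the first 100 images...
    lam = 10.0
    A = X[:100].T @ X[:100] + lam * np.eye(n_feats)
    W_hat = np.linalg.solve(A, X[:100].T @ Y[:100])

    # ...then identify each held-out image by nearest predicted response.
    pred = X[100:] @ W_hat                           # predicted voxel patterns
    hits = [np.argmin(((Y[100 + i] - pred) ** 2).sum(axis=1)) == i
            for i in range(20)]
    print(np.mean(hits))                             # identification accuracy

The key point is the direction of the model: predict brain from image (encoding), then invert by search, rather than training a black box to go straight from brain to pixels.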
For sleep specifically, you might be interested in Horikawa et al. (2013) [5], who could predict words associated with dream contents (e.g., "car"), but didn't try to reconstruct them. There's also a very short perspective on it [6].
[0] Kay, K., Naselaris, T., Prenger, R. et al. Identifying natural images from human brain activity. Nature 452, 352–355 (2008). https://doi.org/10.1038/nature06713
[1] Nishimoto, S., Vu, A. T., Naselaris, T., Benjamini, Y., Yu, B., & Gallant, J. L. (2011). Reconstructing visual experiences from brain activity evoked by natural movies. Current biology : CB, 21(19), 1641–1646. https://doi.org/10.1016/j.cub.2011.08.031
[4] Seeliger, K., Güçlü, U., Ambrogioni, L., Güçlütürk, Y., & van Gerven, M. A. J. (2018). Generative adversarial networks for reconstructing natural images from brain activity. NeuroImage, 181, 775–785. https://doi.org/10.1016/j.neuroimage.2018.07.043
[5] Horikawa, T., Tamaki, M., Miyawaki, Y., & Kamitani, Y. (2013). Neural decoding of visual imagery during sleep. Science (New York, N.Y.), 340(6132), 639–642. https://doi.org/10.1126/science.1234330
All the people who celebrate Progress like this for Progress' sake never seem to be able to answer questions like "what will prevent the exploitation of this technology for authoritarian surveillance / capitalist profit / progressive thought policing."
Your entire industry is built on ignoring that question.
This fatalistic attitude leads to the sad conclusion that nobody should ever express new ideas for fear they might be misused. Or maybe you can express new ideas only after you have conclusively proven that they can’t be misused. So… never.
Having an idea and suppressing it for fear of it being misused also doesn’t prevent anyone else from having the same idea. So what have you actually accomplished besides depriving yourself of your own ideas? Delaying the inevitable?