The Computational Theory of Mind (2015) (stanford.edu)
148 points by DanielleMolloy on Dec 18, 2019 | 90 comments


It seems like basic materialism to claim that the brain could be simulated by a large enough Turing machine. But that claim would not necessarily imply that a Turing machine is a useful model of the brain. A Turing machine isn't necessarily even used as the working model of a computer, though it conceivably could be.

Edit: My comment is basically asking what saying "the mind is a Turing machine" really means. It seems like the main implication someone would take away from that statement is that it's reasonably practical to describe mental processes with a computer, but the question is formulated as a binary is/is-not. The mind "is" a Turing machine given ordinary physics, which I believe says everything can be approximated by a large enough Turing machine. But this sense of "is" doesn't imply the Turing-machine model of the mind is useful.


Nobody claims that Turing machines in particular are well-suited to the task. What we claim is something more open-ended: Turing-complete languages can simulate each other given a universal encoding (a program we typically call an "emulator") and the simulation is only polynomially slower and larger.

To the details: We don't want just one tape on the Turing machine. It's fine but slow, like using unary Church numerals instead of binary Cantor numerals. We usually assume at least two tapes. Similarly, we often ignore tapes and use a random-access memory instead.
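To make the emulation claim above concrete, here is a minimal sketch (my own toy example, not from the article or any particular formalism) of a single-tape Turing machine emulator in Python; any Turing-complete language can host something like this, and adding more tapes or random-access memory only changes the constant or polynomial overhead. The function and machine names are invented for illustration.

    # Minimal sketch of a single-tape Turing machine emulator (illustrative only).
    # The transition table maps (state, symbol) -> (new_state, write_symbol, move).
    def run_tm(transitions, tape, state="start", accept="halt", max_steps=10_000):
        cells = dict(enumerate(tape))   # sparse tape; blank cells default to "_"
        head = 0
        for _ in range(max_steps):
            if state == accept:
                return "".join(cells[i] for i in sorted(cells))
            symbol = cells.get(head, "_")
            state, write, move = transitions[(state, symbol)]
            cells[head] = write
            head += 1 if move == "R" else -1
        raise RuntimeError("step budget exhausted")

    # Example machine: flip every bit, then halt at the first blank.
    flip = {
        ("start", "0"): ("start", "1", "R"),
        ("start", "1"): ("start", "0", "R"),
        ("start", "_"): ("halt",  "_", "R"),
    }
    print(run_tm(flip, "10110"))  # -> 01001_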

Edit: I guess that this is how we're talking today? From the article:

> It is common to summarize CCTM through the slogan “the mind is a Turing machine”. This slogan is also somewhat misleading, because no one regards Turing’s precise formalism as a plausible model of mental activity. The formalism seems too restrictive in several ways.

The article then lists senses, finite memory, concurrency, and determinism as four ways in which the brain might differ from idealized Turing machines.


Finite memory just means a limited subset of what a Turing machine can do.

Concurrency doesn't make Turing computable processes capable of anything beyond what they can already do. At best it buys them speed, maybe some expressiveness for the programmer, and bugs.

Nondeterminism is useful for some algorithms but it also doesn't make it possible for Turing computable processes to go beyond Turing computability. See for example https://arxiv.org/pdf/cs/0401019.pdf.
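A toy way to see the nondeterminism point (my own sketch, not from the linked paper): a deterministic program can simulate a nondeterministic machine simply by searching every branch of its computation tree, paying in time rather than in computability. The helper names below are invented for illustration.

    from collections import deque

    def nd_accepts(start, step, accepting, max_depth=20):
        """Breadth-first search over all nondeterministic branches.
        step(state) returns a *set* of successor states (the nondeterminism)."""
        frontier = deque([(start, 0)])
        seen = {start}
        while frontier:
            state, depth = frontier.popleft()
            if accepting(state):
                return True
            if depth < max_depth:
                for nxt in step(state):
                    if nxt not in seen:
                        seen.add(nxt)
                        frontier.append((nxt, depth + 1))
        return False

    # Toy machine: from n, nondeterministically go to n+2 or n*3; accept at 21.
    print(nd_accepts(1, lambda n: {n + 2, n * 3}, lambda n: n == 21))  # True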

The most interesting item in that list would be the senses of organisms, as some of those might indeed involve processes that are not computable.

Also, some aspects of the brain's activity may not be computable. See Roger Penrose's books for details, such as p. 377 of Shadows of the Mind (a section about possible noncomputability in some physical processes).


> Nobody claims that Turing machines in particular are well-suited to the task. What we claim is something more open-ended: Turing-complete languages can simulate each other given a universal encoding (a program we typically call an "emulator") and the simulation is only polynomially slower and larger.

Well, that's the Church-Turing Thesis, which most people accept but isn't being discussed by me or the OP - the topic being "theories of the mind", I think.


I'm still reading through the paper/article, but a Turing machine would be the substrate in this case, not the model. I think the question is whether we can model the brain's way of computing in a Turing machine, and then it will follow we can also model the brain. Turing machines are a model of computation, and if the brain's computational model is Turing complete, it will follow that we can simulate a brain inside a Turing machine.


> if the brain's computational model is Turing complete, it will follow that we can simulate a brain inside a Turing machine

I don't believe this is correct. If the brain's model/the brain itself is Turing-complete, that doesn't mean a Turing machine can simulate it; it means that the brain's model can simulate any Turing machine qua Turing machine. This does not imply the reverse, that any Turing machine can simulate it.


I think a Turing Machine is an infinite state automaton with the capability that you can remotely access state information. [1]

Maybe someone can clarify whether this is correct.

If you phrase it that way, then it seems more intuitive that the brain could be described as a Turing machine. The definition of a Turing machine is deceptively simple and causes us to question what we perceive as complex. The transition function (and the tape) actually make magic happen if you compare it to any "dumb" mechanical or electrical device.


This is correct w.r.t the formalization of Minsky.

Further, there is a conflation of model and implementation by the parent commenter. Minsky propounds in his automaton theory that a machine may be implemented in any way; it is only its formal history of signals and responses to states which determines the machine. In this sense a biological machine made of organic bases can be considered the same as one made of gears, or one made of semiconducting transistors, assuming their states are described by the same histories.

Moreover, in the same automaton theory, any environment which interacts with a machine must itself be a machine, by symmetry. The classification of each is arbitrary, a matter of which states you're interested in.

The author of the article remains correct: we must be able to model at least one faculty of the mind as Turing complete, because we ourselves can compute. This is useful because we can then project the guarantees of a Universal Turing Machine onto that specific faculty of the mind which is Turing complete. That in turn lets us explore those guarantees of the mind, the ones it shares with the Universal Turing Machine: since that faculty has the properties of a UTM, it can be treated as a UTM, as opposed to having no guarantees at all.



It might seem like basic materialism, but a lot of philosophers suddenly lose their simple and clear-cut ontological stance as soon as the mind gets involved.


>> Edit: My comment is basically asking what saying "the mind is a Turing machine" really means.

The article clarifies that "the mind is a Turing machine" is not what the Classic Computational Theory of Mind claims. See section 3 "The classical computational theory of mind":

https://plato.stanford.edu/entries/computational-mind/#ClaCo...

To summarise, the article says: "It is common to describe CCTM as embodying “the computer metaphor”. This description is doubly misleading." (first because nobody claims that the mind is a computer and second because the comparison to Turing machines is not a metaphor).

The article concludes that: "CCTM claims that mental activity is “Turing-style computation” [snipped for relevance]".

In short, nobody really claims that the mind is a computer, a digital computer, a Turing machine, a metaphorical Turing machine, etc etc. The idea, if I'm not misrepresenting it myself, is that the mind can think like Turing machines can compute.


> Edit: My comment is basically asking what saying "the mind is a Turing machine" really means.

It means that if we internalise this idea we can use all the language we have developed to say precise things about computers/software/hardware to also say precise things about our minds.

After all “computer” was a human job description...

All descriptions (theories) are, fundamentally, symbolic/linguistic.


Are you absolutely convinced that you can build a sentient mind out of clockwork parts?

Because that's what your interpretation of basic materialism implies.


Clockwork parts? No. Parts that select from a combination of inputs? Yes.

Computability theory demands repeated combination/selection and memory. It doesn’t otherwise care how this is accomplished.


In principle? No question.

In practice? No chance.


How would a clockwork model of the mind explain the phenomenon of consciousness (as opposed to computation)?


The idea is that consciousness is what a planning algorithm feels like from the inside. A planning algorithm requires representations of the entities relevant to planning, e.g. various objects in the world, relevant laws, internal states such as emotions, motivations, etc. It is these representations from an internal perspective that have something it is like to have them. But the fact that this planning algorithm is constructed out of mechanical components is irrelevant.

Computation is just an abstract description of the behavior of some classes of systems. But it doesn't exhaust the set of possible descriptions for a system. Some subset of computational systems might also have a description in terms of consciousness.


Your apparent consciousness is the illusion of control over your subconscious mind.


Your comment makes more sense if we replace “useful” with “practical”


>> However, the mechanisms that connectionists usually propose for implementing memory are not plausible. Existing proposals are mainly variants upon a single idea: a recurrent neural network that allows reverberating activity to travel around a loop (Elman 1990). There are many reasons why the reverberatory loop model is hopeless as a theory of long-term memory. For example, noise in the nervous system ensures that signals would rapidly degrade in a few minutes. Implementationist connectionists have thus far offered no plausible model of read/write memory. [4.3 Systematicity and Productivity]

I wonder: is this information outdated?

A Neural Turing Machine (first described in 2014 by Alex Graves) is a recurrent neural network architecture with an external memory store. Reads and writes from and to the memory are controlled by an attention mechanism. A newer version is the Differentiable Neural Computer (first described in 2016, also by Graves).
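For a concrete flavour of what "reads controlled by an attention mechanism" means, here is a rough numpy sketch of a content-based read from an external memory matrix, in the spirit of (but much simpler than) the formulation in those papers; the names, shapes and sharpness parameter are my own, not Graves et al.'s.

    import numpy as np

    def content_read(memory, key, sharpness=1.0):
        """Content-based addressing: a soft read weighted by cosine similarity.
        memory: (N, W) matrix of N memory slots of width W
        key:    (W,) query vector emitted by the controller network"""
        sims = memory @ key / (
            np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8
        )
        weights = np.exp(sharpness * sims)
        weights /= weights.sum()          # softmax over memory slots
        return weights @ memory, weights  # blended read, not a hard lookup

    memory = np.random.randn(8, 4)        # 8 slots, width 4
    key = memory[3] + 0.05 * np.random.randn(4)
    read, w = content_read(memory, key, sharpness=5.0)
    print(w.round(2))                     # most of the mass typically lands on slot 3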

The setup is not fundamentally different from the Elman networks or Long Short-Term Memory networks, other than the mechanism by which "memory" is manipulated and by which storage, retrieval or discarding of "memories" is decided, although even those mechanisms are very similar (for instance, in LSTMs, you could say that training a network to decide when to "recall" a weight value is essentially similar to the "attention" mechanism).

Is there a significant difference between an LSTM-based neural architecture with a "reverberatory" memory and one with an external storage, both controlled by similar mechanisms?

I would say- yes.


In a different lifetime I got to take a few classes with Jerry Fodor, and although this publication references him extensively, one of his most succinct arguments for the computational theory of mind is only alluded to: the lack of alternatives.

Fodor was a snappy writer and talker. I urge you to view his videos if they can be found on YouTube. Unfortunately he passed away recently.

The argument goes something like this:

1. The computational theory of mind is the only remotely plausible theory of mind we have

2. A remotely plausible theory is better than none at all


The problem with that argument is that "the computational theory of mind" isn't a "theory of mind" -- so I don't think we have any.

A TOM needs to explain "mental life"; at best the CTOM provides a model of a very narrow sort of cognition (inference over propositions).

There's a gigantic (and in my view, deeply implausible) leap from "hey this kinda works for modelling inference in animals" to "hey this is how The Mind! works".

Not only do all CTOM models fail for actual inference in animals where we can be reasonably sure inference is taking place (due to the frame problem), they clearly fail for non-inferential processes and states (e.g., emotions/environmental-action/...).

These non-inferential processes are regarded by CTOMists as "black boxes" that just "plug into" the "Real Mind" (ie., inference over propositions).

I don't think Fodor's argument holds here: it is tantamount to saying, "hey we've explained light with waves, why don't we just explain everything with waves!" -- the cost to that approach should be obvious.

The problem this field has is that it's being led by computer scientists, not neurobiologists. You ask a computer scientist what the right model of anything is, and they'd reply with a logic.

We do not, however, model causal reality with logic. Temperature isn't computed from a 'logic of molecule motion'; it is modelled via a causal model which relates causal variables to one another.


You’re right in the sense that the CCTM (classical computational theory of mind) isn’t a complete theory. There are lot of problems. Studying it is essentially memorizing all the problems and the problems with various proposed solutions.

As a theory, not only is it incomplete, it is only remotely plausible. He concedes that up front!

It’s like the old joke about capitalism: the worst system, except for all the others.

I would expect a response to give a better alternative, not to lay out admittedly deep problems with it.


That's an argument for a research paradigm; it's not an argument for its truth.

If there are fatal problems with Theory-A, Theory-A isn't true. No matter how much Theory-A might help with some other problem you have.

The CCTM is the research paradigm for modern AI, parts of cognitive science, etc. -- and insofar as it provides a clear set of assumptions to arrive at useful models, so be it.

Alexa can turn the lights on, for sure. She may even be able to reason a little (if-then-else, etc.). I doubt she will ever know what a "light" is, or what she is doing when she turns it "on".

That would require Alexa to have lived a human life, and to have lived in a deep and complex social/physical environment. There is no "logic" which can specify such things in a limited set of propositions: the effect of the world on animals is not merely to add propositions to their "set of beliefs".

Rather, animals are first traumatised by the world: their emotions, physical memory, instincts, etc. are all unmindfully coerced by their environments. Only with a peculiar sort of frontal lobe are those things expressible as propositions -- but they aren't propositions, as evidenced by the infinite number of them required to capture the effects.

What we need before understanding inference, is to understand on what inference operates: the mental life created by the effect of the world on the whole mind of the animal.


> I doubt she will ever know what a "light" is, or what she is doing when she turns it "on".

You mean Alexa may never know what we mean by "light" or "turning it on". Neither would an intelligent alien that doesn't rely on sight. That doesn't entail that such a creature isn't intelligent, or doesn't have a mental life, or that its operations don't operate on a model consisting of a set of propositions.

> There is no "logic" which can specify such things in a limited set of propositions: the effect of the world on animals is not merely to add propositions to their "set of beliefs".

That's conjecture, although I think the way you've framed it is misleading. Instincts are also "beliefs" in this model, and the operation of a mind can have multiple layers with inconsistent sets of "beliefs" that sometimes drive seemingly inconsistent behaviour.


> an intelligent alien that doesn't rely on sight

You don't have to go that far. A person blind from birth is example enough.


I disagree that a problem with theory-A means theory-A is not "true." Our favorite theories, say relativity or QM, have plenty of problems, but we still work on them.

But your argument with Alexa is in my view in the wrong "direction." Alexa doesn't know what it means to be a light, and perhaps a computer "never" will.

But the real question is, how do humans "know what a light is", or what do you mean when you say a human knows what a light is.

My intuition is similar to yours, that our living in a "deep and complex" environment has something to do with it, but what?

The deep and complex environment might explain how we learn what a light is, but what is it? To put it in Fodor's terms, what is the representation?

When you or I "think" (what is thinking?) of the light (what does it mean to think of the light?), what is going on in our heads?

I suspect whatever theory of representation you come up with will look something like a computational theory. The notion or concept of the light will be "stored" and have "relationships" and so forth.

edit: readability


> I disagree that a problem with theory-A means theory-A is not "true." Our favorite theories: say relativity or QM have plenty of problems but we still work on them.

I think you're actually agreeing here with mjburgess' point that therefore theory-A should be taken as the basis for a research paradigm.


He is fantastic, I've only been able to find a couple of videos of him on Youtube. If you have any links at all, I would really really appreciate the share, even just plain old audio recordings would be awesome. Thanks!


I also rely on youtube. If I knew then what I know now I would have been more diligent in recording what he said.


Yeah, a great loss to Cog Sci, I love his humour too


Please let me apologize for the lack of understanding, maybe someone can add to this: If biological intelligence works like a Turing machine, is it even possible to know what or how certain brain areas compute without simulating them bottom-up?

Question is influenced by this idea originally from the 80s: https://en.m.wikipedia.org/wiki/Computational_irreducibility


>biological intelligence works like a Turing machine

We know it doesn't. There's no metaphorical linear tape in the brain; it's a network of neurons. But a Turing machine can simulate a network of neurons (see Machine Learning), just as a brain can model a Turing machine (see Programmer). There are currently things a brain can think about that a Turing machine cannot, but the question is whether that will continue to be true despite the steady advance of (computer) science.


Machine learning doesn't "simulate a network of neurons". I think by "machine learning" you mean "artificial neural networks" but (artificial) neural networks also do not simulate a network of (I assume you mean:) biological neurons.

The article above has a good summary of the problems with the idea of neural networks as simulations of biological neural networks:

These appeals to biology are problematic, because most connectionist networks are actually not so biologically plausible (Bechtel and Abrahamsen 2002: 341–343; Bermúdez 2010: 237–239; Clark 2014: 87–89; Harnish 2002: 359–362). For example, real neurons are much more heterogeneous than the interchangeable nodes that figure in typical connectionist networks. It is far from clear how, if at all, properties of the interchangeable nodes map onto properties of real neurons. Especially problematic from a biological perspective is the backpropagation algorithm. The algorithm requires that weights between nodes can vary between excitatory and inhibitory, yet actual synapses cannot so vary (Crick and Asanuma 1986). Moreover, the algorithm assumes target outputs supplied exogenously by modelers who know the desired answer. In that sense, learning is supervised. Very little learning in actual biological systems involves anything resembling supervised training.

https://plato.stanford.edu/entries/computational-mind/#ArgFo...
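For readers less familiar with the algorithm the quote is criticising, a bare-bones supervised gradient-descent update looks like the sketch below (my own illustration, not from the article): the targets are supplied from outside by the modeller and nothing constrains a weight's sign, which are exactly the two features the article calls biologically implausible.

    import numpy as np

    rng = np.random.default_rng(0)

    # One linear layer trained by gradient descent on a supervised task.
    X = rng.standard_normal((32, 4))               # inputs
    targets = X @ np.array([1.0, -2.0, 0.5, 3.0])  # "right answers", supplied exogenously

    W = rng.standard_normal(4) * 0.1
    for _ in range(200):
        error = X @ W - targets           # needs the desired output to compute
        W -= 0.05 * X.T @ error / len(X)  # gradient step; sign of W is unconstrained

    print(W.round(2))  # weights drift positive or negative as needed, roughly [1, -2, 0.5, 3]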


> There are currently things a brain can think about that a Turing machine cannot...

Such as? Are these things forbidden by theory, or just simply beyond current engineering practice?


Roger Penrose has written extensively on this subject in his book "Shadows of the Mind".


> things a brain can think about that a Turing machine cannot

You say this as though we understand what "thinking" is. We do not.


I don't think the idea that the brain can do anything a sufficiently fast computer can't deserves any credence. Those who argue this point always descend into hand wavy nonsense.


If by computer, you mean a binary computer using transistors, there's no reason to think you can make it do anything a brain can do. Alternatively, if computer means any hypothetical hardware, it may very well end up looking like a brain, in which case you might just call it an artificial brain rather than a computer.


Why couldn't a fast enough traditional computer simulate the brain you imagine?


It depends what you mean by "do anything".

If that includes "have a conscious experience," then you're the one who's going to have to descend into hand wavy nonsense to explain how that's possible. Unless you've solved the hard problem of consciousness.


It's not clear that a computer storing input data and then modifying future computations based on that data and storing the results of that computation and using it to modify further computations is at all remarkable to start with.

If there is nothing inherently remarkable beyond scale then no hand waving needed.


The description you gave doesn't address consciousness.

The remarkable aspect is that of conscious awareness. You (presumably) have an experience of the world, in a way that a computer does not. Paraphrasing Nagel, "there is something it is like to be you."

Most people don't think that this is true of an executing computer program, for example - it executes whatever its instructions are, and even a self-modifying program, as you described, doesn't change that.

There is no known way to write a computer program which has conscious awareness, and no plausible reason that scale should affect this. If you scale up a computer program, or a computational neural network, there's no reason to believe that it wouldn't just be a very big machine, blindly executing its instructions with no conscious awareness.

The proposed explanations that do exist are all nothing but handwaving at this point, hence my original comment. The burden of explanation here is on those who claim that the brain is nothing but a computing device, since our current models of computing devices can't explain consciousness.


And how about you make the case for silicon becoming self aware or having subjective experience without descending into hand wavy nonsense?

Primates, which are very similar to us, do not have a comparable subjective experience. We know that because we can communicate with them and they don't have that much to say.

There are also brains in other animals that share a lot of similarities and are larger than ours.

Yet no beings (that we're aware of) have a subjective experience as rich as humans or can self-referentially communicate about that experience.

But turn up the clock speed on my gaming rig far enough and I get to debate the meaning of existence with it?

You have to ignore so much that is obvious to come to a reductive, materialistic conclusion like that.

[Edit]

Not sure if it's worth adding this but I do think we can build a computer that passes the Turing test. I also believe (sooner rather than later) we'll have a Siri-like AI that will provide enough companionship that a relationship can be formed with it.

We could even teach that AI to discuss subjective experience in a believable way.


I'm not saying that your particular software running on your machine would produce a subjective experience. Quake at 3 billion frames per second doesn't equal self awareness.

I'm saying that your hypothetical gaming rig can simulate any type of classical computation, including running a simulation of your brain, and thus would be sufficient given a programmer.


>> The first argument emphasizes learning (Bechtel and Abrahamsen 2002: 51). A vast range of cognitive phenomena involve learning from experience. Many connectionist models are explicitly designed to model learning, through backpropagation or some other algorithm that modifies the weights between nodes. By contrast, connectionists often complain that there are no good classical models of learning [4.2 Arguments for connectionism].

There is no special need for a specific model of "learning" in a classical setting. Given an inference procedure, such as induction, abduction or deduction, that can derive new facts and rules in a logical language from observations and a pre-existing theory (i.e. a pre-existing set of facts and rules), all it takes to "learn" is to store the newly derived facts and rules in a database.
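As a toy illustration of that point (my own example, not from the literature cited below), "learning" in this sense is just applying a deduction rule to a fact base and storing whatever new facts follow:

    # Apply a deduction rule to a fact base and keep whatever new facts follow.
    facts = {("parent", "alice", "bob"), ("parent", "bob", "carol")}

    def derive(facts):
        """One step of deduction: parent(X, Y) and parent(Y, Z) => grandparent(X, Z)."""
        new = set()
        for (p1, x, y1) in facts:
            for (p2, y2, z) in facts:
                if p1 == p2 == "parent" and y1 == y2:
                    new.add(("grandparent", x, z))
        return new - facts

    while True:                       # "learning": keep storing derived facts
        new_facts = derive(facts)
        if not new_facts:
            break
        facts |= new_facts

    print(facts)  # now also contains ("grandparent", "alice", "carol")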

I mean "learning" in the sense of Mitchell's definition of _machine_ learning, as (informally) the ability of a system to improve its performance from experience. In this sense, a system that starts with a database of logical facts and rules and adds new facts and rules derived from new observations is "learning".

You can find many examples of learning in a classical, logic setting in the early ('70s and '80s) machine learning literature, particularly with propositional logic learners such as decision list and decision tree learners, the most famous of which are J. Ross Quinlan's ID3 and C4.5 decision tree learners. The field of Inductive Logic Programming studies learning in First-Order Logic languages, especially logic programming languages such as Prolog and Answer Set Programming, and includes early systems such as Shapiro's Model Inference System, Quinlan's FOIL (First-Order Inductive Learner, essentially a relational version of ID3), Muggleton's Progol and Srinivasan's Aleph (based on inverse entailment), and more recently ASP learners such as ASPAL (Mark Law), or Statistical Relational Learning techniques, e.g. by De Raedt, Kersting, Getoor, Taskar and others; etc etc.

Bottom line: there is a huge body of work on learning in a classical, logic setting. There is no serious objection to the claim that there _are_ good classical models of learning. Such models are all over the place in machine learning. In fact, they tend to be the most carefully characterised models of machine learning.


For anyone here who thinks consciousness and all thinking aren't an emergent property of "computation" in the mind, what do you think would fail to happen if you simulated it in silicon, trying to emulate it exactly?


How can such a theory be falsified?


Simple: Find something that the brain does that could not, in principle, be emulated by a Turing machine or equivalent. So far we don't know of any such thing (since quantum mechanics is computable and everything including the brain is ultimately quantum mechanics).


If we restrict brain activity to computation, it becomes more difficult to find exceptions to the argument. I think this is because computation is fundamentally quite alien to human minds, insofar as it is not an elementary, inescapable mental state. Many philosophers of mind (though not all, particularly not those who adhere to reductive physicalism and eliminative materialism) would characterise phenomenal consciousness and intentionality as these unavoidable mental properties.

It is worth mentioning that there are at least good reasons to reject nearly all of these theories (including the physicalist and materialist theses). Many aspects of each theory turn out to imply highly unintuitive consequences. But I would recommend you read for yourself, since the mind-body problem is immense and very interesting at every step. The SEP articles on both of these[1][2] are quite good.

Elsewhere in this thread some have pointed out that the ontological assumptions of philosophers fade away as we approach the mind in our inquiry. This is at least partly because the mind stretches our understanding of knowledge and matter themselves, and to an even greater degree, our intuitions thereof.

[1] https://plato.stanford.edu/entries/intentionality/

[2] https://plato.stanford.edu/entries/consciousness/; see especially §4.


True random number generation can't be done by a deterministic computer, and it appears that human brains can do it, though the evidence is not conclusive yet and the precise mechanism is unclear:

- https://www.ncbi.nlm.nih.gov/pubmed/15922090
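As a side note on what "deterministic" means here (my own illustration, unrelated to the linked study): a seeded pseudorandom generator reproduces exactly the same sequence on every run, whereas os.urandom draws on the operating system's entropy pool, which may in turn mix in hardware noise sources.

    import os
    import random

    # A deterministic PRNG: same seed, same "random" sequence, every time.
    a = random.Random(42)
    b = random.Random(42)
    print([a.randint(0, 9) for _ in range(5)])
    print([b.randint(0, 9) for _ in range(5)])  # identical to the line above

    # os.urandom reads the OS entropy pool (possibly fed by hardware noise);
    # its bytes differ from run to run and cannot be reproduced from a seed.
    print(os.urandom(4).hex())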


I thought humans were pretty widely considered to be terrible random number generators. We tend to produce sequences that are too uniform, not streaky enough, and contain patterns.

The first related article linked from your link was "Humans cannot consciously generate random numbers sequences: Polemic study." (https://www.ncbi.nlm.nih.gov/pubmed/17888582)


Turing machines don't have to be deterministic. You can have non-deterministic Turing machines and probabilistic Turing machines.


Turing machines, without qualifiers, usually mean deterministic ones. Extending them to probabilistic ones is quite ok, but non-deterministic ones are a completely different beast.


"True random number" is a set of statistical tests, which means they can be deterministically fooled. Humans have a lot more inputs than your typical computer, so it's not surprising that their outputs can seem random.


Why are you limiting it to deterministic computers? There are electronic circuits that can provide genuine random numbers.


From the linked article's context:

> 3. The classical computational theory of mind

> The label classical computational theory of mind (which we will abbreviate as CCTM) is now fairly standard.

> Turing computation is deterministic: total computational state determines subsequent computational state.


If there is nothing that is uncomputable, then doesn't that mean the hypothesis is unfalsifiable?


It's worth noting that the theory that a falsifiable hypothesis constitutes scientific investigation (as proposed by Popper) isn't really the gold standard of the philosophy of science any more - especially since Popper's formulation is known to be pretty shoddy.


That sounds like a recipe for making up stuff and calling it 'science'.


Not at all, just that the pure theory of falsificationism as specified by Popper both excludes valid science and includes pseudoscience (for instance, astrology is falsifiable). Thus, falsifiability is not both necessary and sufficient[0].

[0] https://plato.stanford.edu/entries/pseudo-science/


The core tenet of science is testing of hypotheses. Any hypothesis that's not falsifiable is by definition not scientifically testable.

Astrology itself is not scientific but, if you use a version of it that's falsifiable, you absolutely can do a valid scientific study on it by making astrological predictions and then testing to see whether they come true. I think it's pretty widely agreed that we've already done this and the (valid scientific) result was negative.


>Any hypothesis that's not falsifiable is by definition not scientifically testable.

This is beside the point. The point of contention is whether or not falsifiability is what makes science what it is, or should be. Genuine science (such as exploratory papers) very often does not start by specifying a falsifiable hypothesis. Bad science, such as astrology, does often propose falsifiable hypotheses. Therefore, astrology can be falsifiable. Therefore, according to Popperian demarcation, astrology counts as science, or it's scientific (useful to remember that Popper counted Darwinian evolution as non-science).

Falsifiability isn't enough for something to be science; it's not necessary and sufficient - because otherwise astrology is science, and exploratory research, popular in many scientific fields, isn't science. The fact that astrology's claims have been falsified does not discount it as science, since a great number of genuine scientific papers also successfully falsify their hypotheses - finding a null result is an example of falsifying a hypothesis.


> Genuine science (such as exploratory papers) very often does not start by specifying a falsifiable hypothesis.

There is plenty of useful work which doesn't specify a falsifiable hypothesis, but it's not science until it does so.

> Therefore, astrology can be falsifiable. Therefore, according to Popperian demarcation, astrology counts as science, or it's scientific

No. Again, being falsifiable means that astrology can be a subject of scientific study. It doesn't make it science in and of itself.

Science is work that follows the scientific process: Choose a question to answer, formulate a hypothesis, make testable predictions based on the hypothesis, test the predictions, analyze and report the results. We can come up with a new term (maybe 'pondering'?) for trying to answer questions without testing hypotheses, but by definition it won't be science.


>There is plenty of useful work which doesn't specify a falsifiable hypothesis, but it's not science until it does so.

That's quite a bold statement which is not supported by current work in the philosophy of science. Would you be willing to claim that most papers submitted to Nature don't count as science?

>No. Again, being falsifiable means that astrology can be a subject of scientific study. It doesn't make it science in and of itself.

The claim was that falsifiability is necessary and sufficient to count as science - so really we're in agreement. Making falsifiable claims is not necessary and sufficient demarcation of science and pseudo-science. You need something more than falsifiability to distinguish science from pseudo-science. The question is: what is that thing?

>but by definition it won't be science.

By whose definition? You're sending mixed messages - why is physics a science, rather than merely capable of being a subject of scientific study? We can make claims in physics that are just as falsifiable as the ones in astrology.

It's also unwise to paint an idealistic vision of science (falsificationism) in contrast to how it's actually practiced; from SEP:

>Popper’s focus on falsifications of theories led to a concentration on the rather rare instances when a whole theory is at stake. According to Kuhn, the way in which science works on such occasions cannot be used to characterize the entire scientific enterprise. Instead it is in “normal science”, the science that takes place between the unusual moments of scientific revolutions, that we find the characteristics by which science can be distinguished from other activities.


So if it is not possible for the computable mind theory to be false, what is its scientific value?


I don't know about this specific case; I was only taking issue with the idea of pure falsificationism to distinguish science from pseudo-science. There are other demarcations (listed in the SEP article linked a few comments ago) which may also list the computational theory of mind as pseudo-scientific, but not just because it's unfalsifiable (if it really is).


I just use falsifiability as a necessary condition, not a sufficient condition.


> So far we don't know of any such thing

That's not entirely correct. We don't know that a Turing machine can have conscious experience. There's no substantiated "in principle" explanation for that.


The definition of "a conscious experience" is so vague and circular that I'm not sure the question of "whether an X can have conscious experience" (where X is computer, dog, fish, human infant...) is even all that meaningful.

Either way we certainly don't have any indication that it is impossible in principle for a Turing machine to perform the same kind of calculation that gives rise to conscious experience in humans, or that this calculation would for some (supernatural?) reason not have the same outcome if performed by something other than a human brain.


> I'm not sure the question of "whether an X can have conscious experience" (where X is computer, dog, fish, human infant...) is even all that meaningful.

I do know that I have a conscious experience, so that's quite meaningful - to me. I cannot check that any other being has similar feelings, so the doubtful question would be whether "an X other than me can have conscious experience"; but people with good manners make the polite assumption that it's also true for other similar beings.


You may have the illusion of a "conscious experience" that is in fact a story told to yourself about how a thing you call "you" is in charge of your thoughts.


That doesn't really work. What is "yourself" in that statement, other than a conscious entity?

How would you write a computer program that "tells that story to itself," such that it actually has an experience of the world, as opposed to just being a machine executing a program without any conscious awareness?

Edit: also, whether we're in charge of our thoughts is a separate question from whether we possess consciousness. Even if we're not in charge of our thoughts, we still have a conscious experience of them.


I've never heard a complete and convincing explanation for what "yourself" could be, but meditating on the extreme unintuitiveness of self-reference and recursion (a la Douglas Hofstadter's I am a Strange Loop) increases my expectation that a computational explanation is coming, eventually.


I think that's pretty wishful thinking. It's not like we don't have a lot of experience with self-reference and recursion in computational systems. In fact this site is named after that. I don't think the Y combinator is conscious.


I also hear sounds, see colours, feel pain. These are qualia; they don't exist outside minds, and are not thoughts but experiences.


What is an illusion without consciousness?


The conscious experience might be there but at the same time it could be an entirely deterministic thing. Maybe you and I having this exchange was determined in the instant of the big bang.


Being deterministic doesn't make it any less conscious. We're talking perception here, not free will.

In fact, there's pretty good evidence that what we call consciousness is a post-facto rationalization of the subconscious brain processes that determine an automatic answer of your brain to stimulus (not that it makes them deterministic, but certainly they're not "rational" in the classic sense).


> What is an illusion without consciousness?

A perception that, divorced from all other facts, entails a false conclusion.

So your perception of conscious subjectivity could indeed be an illusion.


"Your perception of conscious subjectivity" implies consciousness.

Put another way, how would you program a computer to have a perception of conscious subjectivity, as opposed to just blindly and unconsciously executing its instructions?


> "Your perception of conscious subjectivity" implies consciousness.

No it doesn't! Assuming by "consciousness", you mean a phenomenon that's not reducible to unconscious particle interactions, which is typically what is meant in philosophical discussions of this topic.

We have some mechanistic theories for consciousness [1]. It basically amounts to the same sort of illusion that your single core CPU uses to achieve the illusion of parallelism, ie. context switching between internal and external mental models produces the illusion of consciousness.

[1] https://www.frontiersin.org/articles/10.3389/fpsyg.2015.0050...


> Assuming by "consciousness", you mean a phenomenon that's not reducible to unconscious particle interactions

I'd say that's an unfounded assumption, which doesn't come up in the argument you're responding to - even if it's somewhat 'popular' elsewhere.

The argument made is that consciousness is (or includes) a form of perception; not that this perception is independent of mechanistic components. With this definition, your assertion that 'conscious subjectivity is an illusion' is inconsistent, as an illusion is a complex form of perception that requires a consciousness to perceive it.

Following your CPU example, there is parallelism from the point of view of the program being executed, even if it's simulated from a single-core mechanical basis (threads and context-switching).


> I'd say that's an unfounded assumption, which doesn't come up in the argument you're responding to - even if it's somewhat 'popular' elsewhere.

It's not really. Consciousness quite literally does not exist in mechanistic/eliminativist conceptions of consciousness like the link I provided, just like cars don't really exist because they aren't in the ontology of physics. My clarification of "assumption" is simply because many people don't know this.

> Following your CPU example, there is parallelism from the point of view of the program being executed, even if it's simulated from a single-core mechanical basis (threads and context-switching).

No, there is concurrency but not parallelism.


> just like cars don't really exist because they aren't in the ontology of physics

If I understand you correctly, that's a pretty harsh criterion for existence, isn't it? Even though a car is just a composite of metal atoms under a precise configuration and not a metaphysical entity in itself, you can still use it to drive you home. I suppose that makes me a utilitarian.

> No, there is concurrency but not parallelism.

You're right, my bad. I've forgotten my precision from my college days. Still, that's good enough for the program, just like my consciousness is good enough for me, even if it's entirely mechanistic and doesn't exist in the same way that cars don't exist.


Why is it polite to assume non-conscious entities are conscious?


How do you know they are non-conscious?

If it looks like a duck, swims like a duck, and quacks like a duck...

For example, I'm assuming you're conscious, because you posted a reply that was on-topic and coherent with the conversation above it.


Yes, but we don't know how to check that an entity has a conscious experience, so we can not falsify that a Turing machine has one, either. That's the whole reason for the Turing Test, btw.


Yes, we can't falsify that a Turing machine has conscious experience, but we have no reason or explanation to suggest that it does have one.

To come back to the original claim:

> Simple: Find something that the brain does that could not, in principle, be emulated by a Turing machine or equivalent. So far we don't know of any such thing

We may not "know" of such a thing with certainty, but we have a strong candidate in consciousness.

There are two possibilities here:

One is that Turing machines are conscious (and we're monsters for what we do with them), in which case we still have an unexplained panpsychic phenomenon which we would need new science to understand.

The other is that Turing machines are not conscious, in which case there's an unexplained phenomenon in how an object like the brain can give rise to consciousness. In that case, the question of whether a Turing machine could in principle emulate consciousness depends on what the cause of consciousness is. It's certainly possible, and doesn't even seem particularly unlikely, that we find that Turing machines cannot do this, and that something other than "computation" is needed.


> Yes, we can't falsify that a Turing machine has conscious experience, but we have no reason or explanation to suggest that it does have one.

I have no reason or explanation that suggests that you have a consciousness either. You could be a very elaborate chatbot that posts coherent replies on online forums. Also, I do know whether I'm a chatbot myself or not, but you can't tell about me just from the replies written here.

> We may not "know" of such a thing with certainty, but we have a strong candidate in consciousness.

The problem with that is, you don't have a test for consciousness. There's a strong candidate in MRI brain scans (at least for humans), but you can't really be sure.



