You’re right in the sense that the CCTM (classical computational theory of mind) isn’t a complete theory. There are a lot of problems; studying it is essentially memorizing all the problems, and the problems with the various proposed solutions.
As a theory, not only is it incomplete, it is at best only remotely plausible. He concedes that up front!
It’s like the old joke about capitalism: the worst system, except for all the others.
I would expect a response to give a better alternative, not to lay out admittedly deep problems with it.
That's an argument for a research paradigm; it's not an argument for its truth.
If there are fatal problems with Theory-A, Theory-A isn't true, no matter how much Theory-A might help with some other problem you have.
The CCTM is the research paradigm for modern AI, parts of cognitive science, etc. -- and insofar as it provides a clear set of assumptions to arrive at useful models, so be it.
Alexa can turn the lights on, for sure. She may even be able to reason a little (if-then-else, etc.). I doubt she will ever know what a "light" is, or what she is doing when she turns it "on".
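To be concrete about how little that concedes, here's a toy sketch (Python, entirely hypothetical -- not Alexa's actual pipeline) of what if-then-else "reasoning" amounts to. Every name in it, including "light", is just an uninterpreted token:

```python
# A toy rule-follower in the Alexa mold: purely syntactic if-then-else.
# "light" is an uninterpreted token here; nothing in the program grounds
# it in illumination, vision, or a lived environment. (Hypothetical code,
# for illustration only.)

RULES = {
    ("turn on", "light"): "switch_gpio_high",
    ("turn off", "light"): "switch_gpio_low",
}

def handle(utterance: str) -> str:
    for (verb, noun), action in RULES.items():
        if verb in utterance and noun in utterance:
            return action
    return "no_op"

print(handle("please turn on the light"))  # -> switch_gpio_high
```

Nothing in that lookup is "knowing" what a light is.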
That would require Alexa to have lived a human life, and to have lived in a deep and complex social/physical environment. There is no "logic" which can specify such things in a limited set of propositions: the effect of the world on animals is not merely to add propositions to their "set of beliefs".
Rather, animals are first traumatised by the world: their emotions, physical memory, instincts, etc. are all unmindfully coerced by their environments. Only with a peculiar sort of frontal lobe are those things expressible as propositions -- but they aren't propositions, as evidenced by the infinite number of them that would be required to capture the effects.
What we need, before understanding inference, is to understand what inference operates on: the mental life created by the effect of the world on the whole mind of the animal.
> I doubt she will ever know what a "light" is, or what she is doing when she turns it "on".
You mean Alexa may never know what we mean by "light" or "turning it on". Neither would an intelligent alien that doesn't rely on sight. That doesn't entail that such a creature isn't intelligent, or doesn't have a mental life, or that its operations don't operate on a model consisting of a set of propositions.
> There is no "logic" which can specify such things in a limited set of propositions: the effect of the world on animals is not merely to add propositions to their "set of beliefs".
That's conjecture, although I think the way you've framed it is misleading. Instincts are also "beliefs" in this model, and the operation of a mind can have multiple layers with inconsistent sets of "beliefs" that sometimes drive seemingly inconsistent behaviour.
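To illustrate what I mean by layers (a minimal sketch -- the layer names and propositions are invented, and this is a cartoon of the model, not a claim about neural implementation):

```python
# Sketch: a mind as layered "belief" sets on the CCTM framing.
# Different layers can hold mutually inconsistent propositions, so which
# layer answers a query determines the (seemingly inconsistent) behaviour.

from typing import Optional

LAYERS = [
    ("reflective", {"snakes_here_are_harmless": True}),
    ("instinct",   {"snakes_here_are_harmless": False}),  # inconsistent!
]

def query(proposition: str, prefer: str) -> Optional[bool]:
    """Consult the preferred layer first, then fall back to the rest."""
    ordered = sorted(LAYERS, key=lambda layer: layer[0] != prefer)
    for _, beliefs in ordered:
        if proposition in beliefs:
            return beliefs[proposition]
    return None

# Deliberation says "harmless"; the startled jump is driven by instinct.
print(query("snakes_here_are_harmless", prefer="reflective"))  # True
print(query("snakes_here_are_harmless", prefer="instinct"))    # False
```

On this framing, an instinctive flinch and a calm verbal report are both "belief"-driven; they just read off different layers.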
I disagree that a problem with Theory-A means Theory-A is not "true." Our favorite theories, say relativity or QM, have plenty of problems, but we still work on them.
But your Alexa argument runs, in my view, in the wrong "direction." Alexa doesn't know what it means to be a light, and perhaps a computer "never" will.
But the real question is: how do humans "know what a light is"? What do you mean when you say a human knows what a light is?
My intuition is similar to yours, that our living in a "deep and complex" environment has something to do with it, but what?
The deep and complex environment might explain how we learn what a light is, but what is it that we end up with? To put it in Fodor's terms, what is the representation?
When you or I "think" (what is thinking?) of the light (what does it mean to think of the light?), what is going on in our heads?
I suspect whatever theory of representation you come up with will look something like a computational theory. The notion or concept of the light will be "stored" and have "relationships" and so forth.
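For instance (a hedged sketch -- the node names and relation labels are made up; it shows the shape such a theory tends to take, not how brains actually store anything):

```python
# A concept "stored" with "relationships": the light as a node in a tiny
# semantic network. Illustrative only.

from collections import defaultdict

relations: dict[str, list[tuple[str, str]]] = defaultdict(list)

def relate(subject: str, relation: str, obj: str) -> None:
    relations[subject].append((relation, obj))

relate("light", "is_a", "artifact")
relate("light", "emits", "illumination")
relate("light", "controlled_by", "switch")

# "Thinking of the light", on this story, is traversing its relations:
for relation, obj in relations["light"]:
    print(f"light --{relation}--> {obj}")
```

Whatever the details, the vocabulary of storage, lookup, and relations keeps reappearing, and that's a computational vocabulary.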
> I disagree that a problem with Theory-A means Theory-A is not "true." Our favorite theories, say relativity or QM, have plenty of problems, but we still work on them.
I think you're actually agreeing here with mjburgess' point that Theory-A should therefore be taken as the basis for a research paradigm.