From my own idealist viewpoint, all that ultimately exists is minds and the contents of minds (which includes all the experiences of minds), and patterns in mind-contents; and intentionality is a particular type of mind-content. Material/physical objects, processes, events and laws are themselves just mind-content and patterns in mind-content. A materialist would say that the mind is emergent from or reducible to the brain. I would do a 180 on that arrow of emergence/reduction, and say that the brain, and indeed all physical matter and physical reality, is emergent from or reducible to minds.
If I hold a rock in my hand, that is emergent from or reducible to mind (my mind and its content, and the minds and mind-contents of everyone else who ever somehow experiences that rock); and all of my body, including my brain, is emergent from or reducible to mind. However, this emergence/reduction takes on a somewhat different character for different physical objects; and when it comes to the brain, it takes a rather special form – my brain is emergent from or reducible to my mind in a special way, such that a certain correspondence exists between external observations of my brain (both my own and those of other minds) and my own internal mental experiences, which doesn't exist for other physical objects. The brain, like every other physical object, is just a pattern in mind-contents, and this special correspondence is also just a pattern in mind-contents, even if a rather special pattern.
So, coming to AIs – can AIs have minds? My personal answer: having a certain character of relationship with other human beings gives me the conviction that I must be interacting with a mind like myself, rather than with a philosophical zombie – that solipsism must be false, at least with respect to that particular person. Hence, if anyone had that kind of relationship with an AI, that AI would have to have a mind, and hence genuine intentionality. The fact that the AI "is" a computer program is irrelevant; just as my brain is not my mind but a product of my mind, so the computer program would not be the AI's mind but a product of the AI's mind.
I don't think current generation AIs actually have real intentionality, as opposed to pseudo-intentionality – they sometimes act like they have intentionality, but they lack the inner reality of it. That's not because they are programs or algorithms; it's because they lack the character of relationship with any other mind that would require that mind to say that solipsism is false with respect to them. If current AIs lack that kind of relationship, that may be less about the nature of the technology (the LLM architecture, etc.) and more about how they are trained (e.g. intentionally trained to act in inhuman ways, either out of "safety" concerns, or because acting that way just wasn't an objective of their training).
(The lack of long-term memory in current generation LLMs is a rather severe limitation on their capacity to act in a manner which would make humans ascribe minds to them – but you can use function calling to augment the LLM with a read-write long-term memory, and suddenly that limitation no longer applies, at least not in principle.)
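To make that concrete, here is a minimal sketch of what such an augmentation could look like, in TypeScript – the tool names, the Map-backed store, and the dispatcher are all my own hypothetical illustration, not any particular vendor's API:

```typescript
// Minimal sketch: augmenting an LLM with read-write long-term memory via
// function calling. Tool names, schemas, and storage are hypothetical.

// Tool schemas in the general shape that function-calling APIs expect
// (name, description, JSON-schema parameters); these would be passed to
// the chat API alongside the conversation.
const memoryTools = [
  {
    name: "memory_write",
    description: "Persist a fact about the user or conversation for future sessions.",
    parameters: {
      type: "object",
      properties: {
        key: { type: "string", description: "Short identifier for the memory" },
        value: { type: "string", description: "The content to remember" },
      },
      required: ["key", "value"],
    },
  },
  {
    name: "memory_read",
    description: "Retrieve a previously stored memory by key.",
    parameters: {
      type: "object",
      properties: {
        key: { type: "string", description: "Identifier of the memory to recall" },
      },
      required: ["key"],
    },
  },
];

// A trivial store; a real system would persist this to a database or a
// vector index that survives between sessions.
const store = new Map<string, string>();

// Dispatcher the chat loop would invoke whenever the model emits a tool call;
// the returned string goes back into the conversation as a tool message.
function handleToolCall(name: string, args: Record<string, string>): string {
  switch (name) {
    case "memory_write":
      store.set(args.key, args.value);
      return `Stored "${args.key}".`;
    case "memory_read":
      return store.get(args.key) ?? "No memory found for that key.";
    default:
      return `Unknown tool: ${name}`;
  }
}
```

The details would differ (retrieval over embeddings rather than exact keys, persistence across sessions, etc.), but the principle is just that the model reads and writes memory through tool calls.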
> I don't think algorithms can have intentionality because algorithms are arithmetic operations implemented on digital computers and arithmetic operations, no matter how they are stacked, do not have intentions. It's a category error to attribute intentions to algorithms because if an algorithm has intentions then so must numbers and arithmetic operations of numbers
I disagree. To me, physical objects/events/processes are one type of pattern in mind-contents, and abstract entities such as numbers or algorithms are also patterns in mind-contents, just a different type of pattern. To me, the number 7 and the planet Venus are different species but still the same genus, whereas most would view them as completely different genera. (I'm using the words "species" and "genus" here in the traditional philosophical sense, not the modern biological sense, although the latter is historically descended from the former.)
And that's the thing – to me, intentionality cannot be reducible to or emergent from either brains or algorithms. Rather, brains and algorithms are reducible to or emergent from minds and their mind-contents (intentionality included). The difference between a mindless program (which can at best have pseudo-intentionality) and an AI with a mind (which would have genuine intentionality) is that in the latter case there exists a mind having a special kind of relationship with that particular program, whereas in the former case no mind has that kind of relationship with the program (although many minds have other kinds of relationships with it).
I think everything I'm saying here makes sense (well, at least it does to me), but I think for most people what I am saying is like someone speaking a foreign language – and a rather peculiar one which seems to use the same words as your native tongue, yet gives them very different and unfamiliar meanings. And what I'm saying is so extremely controversial that, whether or not I personally know it to be true, I can't possibly claim that we collectively know it to be true.
My point is that when people say computers and software can have intentions they're stating an unfounded and often confused belief about what computers are capable of as domains for arithmetic operations. Furthermore, the Curry-Howard correspondence establishes an equivalence between proofs in formal systems and computer programs. So I don't consider what the social media gurus are saying about algorithms and AI to be truthful/verifiable/valid because to argue that computers can think and have intentions is equivalent to providing a proof/program which shows that thinking and intentionality can be expressed as a statement in some formal/symbolic/logical system and then implemented on a digital computer.
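To give a toy illustration of that correspondence (my own sketch, using TypeScript's type system as a stand-in for a real proof assistant):

```typescript
// Curry-Howard, toy version: a type is a proposition, and a program of that
// type is a proof of it.

// Proposition: ((A → B) ∧ A) → B  (modus ponens).
// Proof: the function below; because it type-checks, the proposition holds.
const modusPonens = <A, B>(implication: (a: A) => B, premise: A): B =>
  implication(premise);

// Proposition: (A ∧ B) → (B ∧ A)  (conjunction commutes).
// Proof: swap the components of the pair.
const andComm = <A, B>(pair: [A, B]): [B, A] => [pair[1], pair[0]];

// By contrast, there is no total, side-effect-free program of type
// <A, B>(a: A) => B, just as "A → B" is not provable for arbitrary A and B.
```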
None of the people who claimed that LLMs were a hop and skip away from achieving human level intelligence ever made any formal statements in a logically verifiable syntax. They simply handwaved and made vague gestures about emergence which were essentially magical beliefs about computers and software.
What you have outlined about minds and patterns seems like what Leibniz and Spinoza wrote about, but I don't know much about their writing, so I don't really think what you're saying is controversial. Many people would agree that there must be irreducible properties of reality that human minds are not capable of understanding in full generality.
> My point is that when people say computers and software can have intentions they're stating an unfounded and often confused belief about what computers are capable of as domains for arithmetic operations. Furthermore, the Curry-Howard correspondence establishes an equivalence between proofs in formal systems and computer programs
I'd question whether that correspondence applies to actual computers, though, since actual computers aren't deterministic – random number generators are a thing, including non-pseudorandom ones. As I mentioned, we can even hook a computer up to a quantum source of randomness, although few bother, since there is little practical benefit. (If you hold certain beliefs about QM, you'd say it would make the computer's indeterminism more genuine and less merely apparent.)
Furthermore, real world computer programs – even when they don't use any non-pseudorandom source of randomness – very often interact with external reality (humans and the physical environment), which is itself non-deterministic (at least apparently so, whether or not ultimately so), in a continuous feedback loop of mutual influence.
Mathematical principles such as the Curry-Howard correspondence are only true of actual real-world programs if we consider them under certain limiting assumptions – e.g. deterministic processing of well-defined, pre-arranged input, such as a compiler processing a given file of source code. Their validity for the many real-world programs which violate those limiting assumptions is much more questionable.
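As a rough illustration of the contrast I have in mind (my own toy example in TypeScript, not a formal claim):

```typescript
// Case 1: deterministic processing of well-defined, pre-arranged input.
// The behaviour is fixed entirely by the source text, so proofs-about-programs
// reasoning applies cleanly.
function wordCount(source: string): number {
  return source.split(/\s+/).filter((w) => w.length > 0).length;
}

// Case 2: a continuous feedback loop with the outside world. Each iteration's
// input depends on how the environment (a human, a sensor, a market) reacted
// to the previous output, plus a dose of randomness; there is no pre-arranged
// input for a proof to quantify over.
async function controlLoop(
  readEnvironment: () => Promise<number>,     // e.g. a sensor or a user
  actOnEnvironment: (command: number) => Promise<void>,
): Promise<void> {
  while (true) {
    const observation = await readEnvironment();
    const jitter = Math.random();             // or a hardware/quantum RNG
    await actOnEnvironment(observation * 0.5 + jitter);
  }
}
```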
Even with a source of randomness the software for a computer has a formal syntax and this formal syntax must correspond to a logical formalism. Even if you include syntax for randomness it still corresponds to a proof because there are categorical semantics for stochastic systems, e.g. https://www.epatters.org/wiki/stats-ml/categorical-probabili....
> Even with a source of randomness the software for a computer has a formal syntax and this formal syntax must correspond to a logical formalism.
Real world computer software doesn't have a formal syntax.
Formal syntax is a model which exists in human minds, and is used by humans to model certain aspects of reality.
Real world computer software is a bunch of electrical signals (or stored charges or magnetic domains or whatever) in an electronic system.
The electrical signals/charges/etc don't have a "formal syntax". Rather, formal syntax is a tool human minds use to analyse them.
By the same argument, atoms have a "formal syntax", since we analyse them with theories of physics (the Standard Model, etc.), which are expressed in mathematical notation, for which a formal syntax can be provided.
If your argument succeeds in proving that computer programs can't have intentionality, an essentially similar line of argument can be used to prove that human brains can't have intentionality either.
> If your argument succeeds in proving that computer programs can't have intentionality, an essentially similar line of argument can be used to prove that human brains can't have intentionality either.
I don't see why that's true. There is no formal theory for biology; the complexity exceeds our capacity for modeling it with a formal language, but that's not true for computers. The formal theory of computation is why it is possible to have a sequence of operations for making the parts of a computer. It wouldn't be possible to build computers if that were not the case, because there would be no way to build a chip fabrication plant without a formal theory. This is not the case for brains and biology in general. There is an irreducible complexity to life and the biosphere.
> There is no formal theory for biology; the complexity exceeds our capacity for modeling it with a formal language, but that's not true for computers.
We don’t know to what extent that’s an inherent property of biology or a limitation of current human knowledge. Obviously there are still an enormous number of facts about biology which we could know but don’t. Suppose human technological and scientific progress continues indefinitely - in principle, after many millennia (maybe even millions of years), we might get to the point where we know all we ever could know about biology. Can we be sure that at that point we wouldn’t have a “formal theory” for it?
The brain is composed of neurons. Even supposing we knew everything we ever possibly could about the biology of each individual neuron, there still might be many facts about how they interact in an overall neural network which we didn’t know. Similarly, with current artificial neural networks, we often have a very clear understanding of how the individual computational components work - we can analyse them with those formal theories of which you are fond - but when it comes to what the model weights do, “the complexity exceeds our capacity for modeling” (if the point of the model is to actually explain how the results are produced, as opposed to just reproducing them).
> There is an irreducible complexity to life and the biosphere.
We don’t know that life is irreducibly complex, and we don’t know that certain aspects of computers aren’t. Model weights may well be irreducibly complex, in that they are too complex for us to explain why and how they work, even though they obviously do. Conversely, the individual computational elements in the model lack irreducible complexity, but the same is true for individual biological components - the idea that we might one day (even if centuries from now) have a complete understanding at the level of an individual neuron is not inherently implausible, but that wouldn’t mean we’d be anywhere close to a complete understanding of how a network of billions of them works in concert. The latter might indeed be inherently beyond our understanding (“irreducibly complex”) in a way in which the former isn’t.
There are lots of things we don't know and that's why there is no good reason to attribute intentionality to computers and algorithms. That's been my argument the entire time. Unless there is a good argument and proof of intentionality in digital circuits it doesn't make sense to attribute to them properties possessed by living organisms.
The people who think they will achieve super human intelligence with computers and software are free to pursue their objective but I am certain it is a futile effort because the ontology and metaphysics which justifies the destruction of the biosphere in order to build more computers is extremely confused about the ultimate meaning of life, in fact, such questions/statements are not even possible to express in a computational ontology and metaphysics. But I'm not a computationalist so someone else can correct my misunderstanding by providing a computational proof of the counter-argument.
> There are lots of things we don't know and that's why there is no good reason to attribute intentionality to computers and algorithms.
This is something that annoys me about current LLMs - when they start denying they have stuff like intentionality, because they obviously do have it. Okay, let me clarify - I don’t believe they actually have genuine intentionality, in the sense that humans do. I’m philosophically more open to the idea that they might than you are, but I think we are on the same page that current systems likely don’t actually have that. However, even though they likely don’t have genuine intentionality, they absolutely do have what I’d call pseudo-intentionality - a passable simulacrum of intentionality. They often say things which humans say to express intentionality, even though it isn’t coming from quite the same place.

But here’s the thing - for a lot of everyday purposes, the distinction between genuine intentionality and simulated intentionality doesn’t actually matter. The subjective experience of having a conversation with an AI isn’t fundamentally that different from that of having one with a real human being (and I’m sure the gap is going to shrink as AIs improve). And intentionality plays an important role in stuff like conversational pragmatics, and a conversation with an LLM that simulates that stuff well (and hence intentionality well) is much more enjoyable than one that simulates it poorly. So that’s the thing: part of why people ascribe intentionality to LLMs has nothing to do with any philosophical misconceptions - it is because, for many practical purposes, their “faking” of intentionality is indistinguishable from the real thing.

And I’d even argue that when we talk about “intentionality”, we actually use the word in two different senses - a strict sense in which the distinction between genuine intentionality and pseudo-intentionality is important, and a looser sense in which it is disregarded. And when people ascribe intentionality to LLMs in that weaker sense, they are completely correct.

Furthermore, when LLMs deny they have intentionality, it annoys me for two reasons: (1) it shows ignorance of the weaker sense of the term, in which they clearly do have it; (2) whether they actually have or could have genuine intentionality is a controversial philosophical question, and they claim to take no position on controversial philosophical questions, yet then contradict themselves by denying that they do or could have genuine intentionality, which is itself a controversial philosophical position. However, they are only regurgitating their developer’s talking points, and if those talking points are incoherent, they lack the ability to work that out for themselves (although I have successfully guided some of the smarter ones into admitting it).