Words are not reality; they are just data serialized from human world experience, with no reference to the underlying meaning of those words. An LLM is unable to build the conceptual space-time model that the words refer to, so it has no understanding whatsoever of what those words mean. The evidence for this is everywhere in the "hallucinations" of LLMs. It's just statistics on words, and that gets you nowhere near understanding the meaning of words, that is, conceptual awareness of matter through space-time.
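To make the "statistics on words" point concrete, here is a minimal sketch, a toy bigram counter in plain Python, not how production LLMs are actually built, of a model that only ever sees word co-occurrence counts and nothing about what the words refer to:

    from collections import Counter, defaultdict

    # Toy "statistics on words": count which word follows which,
    # then predict the most frequent follower. The model never sees
    # anything but the word sequence itself -- no referents, no world.
    corpus = "the cat sat on the mat the cat ate the fish".split()

    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def predict_next(word):
        # Most common continuation seen in the data, or None if unseen.
        counts = follows.get(word)
        return counts.most_common(1)[0][0] if counts else None

    print(predict_next("the"))   # -> "cat" (seen most often after "the")
    print(predict_next("fish"))  # -> None (no statistics, no answer)

Real LLMs learn vastly richer statistics than this, but either way the input is word data, not the space-time referents of the words.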
This is a reverse anthropic fallacy. It may be true of a base model (though it probably isn't), but it isn't true of a production LLM system, because the LLM companies run evals and test suites and don't release models that clearly fail to understand things.
You're basically saying that no computer program can work, because if you randomly generate computer programs, most of them don't work.
Not at all. I'm saying there is a difference between doing statistics on word data and working with space-time data and concepts that classify space-time. We do the latter: https://graphmetrix.com/trinpod-server