Your third paragraph describes language pretty well (although I'd quibble with formal grammars only being 'vague' in their coverage - I think they do a pretty good job although I agree they can never be perfect). And I appreciate the achievement of LLMs in being able to take in language prompts and return useful responses. So it's an achievement that is useful certainly in providing a 'natural language' querying and information collating tool. (I agree here with your second paragraph.)
But it remains a tool and a derivative one. You will see people in these recent HN threads making grandiose claims about LLMs 'reasoning' and 'innovating' and 'trying new things' (I replied negatively under a comment just like this in this thread). LLMs can't and will never be able to do these things because, as I've already said, they are completely derivative. They may, by collating information and presenting it to the user, provoke new insights in the human user's mind. But they won't be forming any new insights themselves, because they are machines and machines are not alive, they are not intelligent, and they cannot think or reason (even if a machine model can 'learn').
> LLMs can't and will never be able to do these things because, as I've already said, they are completely derivative.
I agree, they are completely derivative. And so are you and I. We have copied everything we know, either from other humans or from whatever we have learned from our simple senses.
I'm not asking you to bet that LLMs will actually do any of those things; I suppose there's no guarantee that anything will improve to a certain point. But I am cautioning you not to bet heavily against it. After witnessing what this generation of LLMs is capable of, I no longer believe there's anything fundamentally different about human brains. To me, it's like asking whether an x86-64 PC will ever be able to emulate a PS5. Maybe not today, but I don't see any reason why a typical PC in 10 or 15 years would have trouble.
Well... complaining about people online or in the media making grandiose claims is like fighting the wind.
I totally see your point about the inherent "derivativeness" of LLMs. This is true.
But note how hard "being alive" or "being intelligent" or "being able to think" are to define. I'd go with the "duck test" approach: if it is not possible to distinguish a simulation from the original, then it doesn't make sense to draw a line between them.
Anyways, yes, LLMs are boring. I'm just not sure we humans aren't boring as well.