Anyone who dismisses your assertion isn't very curious. What I'm more interested in is what its limits are, and whether it can perform novel reasoning. It probably needs novel reasoning efficient enough to update itself with new information before it can become a general reasoning intelligence capable of solving unknown problems.

Right now these models operate purely in the domain of words: they solve problems with words. They don't seem to have very complex semantic maps. Instead, they approximate semantic maps with statistical brute force by generating words, using a model of the past to generate them. When something matches the word map, it's easy. When something isn't reducible, or didn't have a good word match, the only thing the model can do is experimentally generate words until the output seems to match the problem. But that is brute force. It's good that they can solve known problems that fit known problem shapes, but their language dependency makes this very fragile. Without semantic meaning, a model has no easy way to evaluate whether it is hallucinating.