I must use AI differently than most, because I find it stimulates deep thinking (not necessarily productive thinking). I don't ask for answers; I ask for constraints and invariants, and I test them dialectically. The power of LLMs lies in finding deep associations of pattern that the human mind can then validate. In my opinion, LLMs are best used not as an oracle of truth or an assistant but as a fast lookup tool over a collective mental latent space. If you have a concept or a specification, you can use an LLM to find paths for developing it that you might not have been aware of. You get out what you put in, and critical thinking is always key. I believe the secret power of LLMs lies not so much in the transformer model as in the meaning inherent in language. With the right language you can shape the output to reveal structure you might not have realized otherwise. We are seeing this power even now in LLMs proving Erdős problems or problems in group theory. Yes, the machine may struggle to count the 'r's in "strawberry," but it can discern abstract relations.
An interesting visual exercise for seeing the latent information structure in language is to pixelize a large corpus as a bitmap: translate the characters to binary, then run various transforms on the result. What emerges is not a picture of random noise but a fractal-like chaos of "worms" or "waves." This is what LLMs are navigating in their high-dimensional latent space. Words are not just arbitrary symbols but objects on a connected graph.
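The pixelization step can be sketched in a few lines. This is a minimal illustration, not a definitive recipe: the toy corpus, the 8-bit encoding, and the grid width of 64 are all arbitrary assumptions, and any transforms (FFTs, filters) would come after this stage.

```python
# Illustrative sketch: render a text corpus as a bitmap of character bits.
# The corpus string and grid width below are arbitrary assumptions.
corpus = "the quick brown fox jumps over the lazy dog " * 40

# Translate each character to its 8-bit binary representation.
bits = "".join(format(ord(c) & 0xFF, "08b") for c in corpus)

# Pixelize: fold the flat bit string into rows of a fixed width.
width = 64
rows = [bits[i:i + width] for i in range(0, len(bits) - width + 1, width)]

# Render as ASCII "pixels"; repeating band-like structure is visible
# even before applying any further transform.
for row in rows[:16]:
    print(row.replace("0", " ").replace("1", "#"))
```

Because English text clusters in a narrow band of byte values, the high bits form regular stripes rather than uniform noise, which is the seed of the "worm" and "wave" patterns the transforms bring out.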