What makes you think autocomplete on steroids with no internal reasoning capability or representation of knowledge would be at all useful for inferring intent? Or have you been fooled into thinking an LLM is something like Eurisko/Cyc?
Personally I'm with the parent poster - I use LLMs all the time to help me infer intent in new codebases I don't understand yet, and empirically they seem to grasp it pretty well. Useful, especially when you don't have good documentation on hand.
Clear variable names and comments aren't a requirement at all.
It sounds to me like you have a philosophical problem with LLMs, which is something I don't think we can debate in good faith. I can just share my experience, which is that they are excellent tools for this kind of thing. Obvious caveats apply - only a fool would rely entirely on an LLM without giving it any thought of their own.
I don’t have a philosophical problem with LLMs. I have a problem with treating LLMs as something other than what they are: predictive text generators. There’s no understanding beneath the generation, just compression that arises as a by-product of training. Thus I wouldn’t trust them for anything except churning out plausibly-structured text.
Because ideally good code should make the intent obvious from the names and comments, so inferring a full description should really just be an autocomplete task.
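For example (a toy sketch I just made up, not from any real project), with names and comments like these there's almost nothing left to infer:

    from datetime import date, timedelta

    def business_days_until(deadline: date, today: date | None = None) -> int:
        """Count weekdays remaining before the deadline (excludes the deadline itself)."""
        today = today or date.today()
        remaining = 0
        current = today
        while current < deadline:
            # Monday=0 .. Friday=4 count as business days
            if current.weekday() < 5:
                remaining += 1
            current += timedelta(days=1)
        return remaining

    print(business_days_until(date(2025, 12, 25), today=date(2025, 12, 19)))  # -> 4

Describing what that does is basically completing the docstring.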
The topic is using an LLM to learn a codebase one doesn’t understand. Does that sound like a codebase that has names and comments from which a full description could be inferred?