We understand exactly how it works. It just works in such a way that we can't predict the outcome, which makes it a poor fit for many applications. Not being able to explain why it gives a particular answer doesn't mean it's not understood how it works.
We know how LLMs work. Parameters. Training data. A random number generator in the sampling step. That stuff.
We don't know why it outputs exactly what it outputs, because the sampling RNG is unpredictable by design, and we know that. So the outputs surprise us, but the surprise itself is unsurprising.
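To make that concrete, here's a toy sketch of the sampling step I'm talking about (hypothetical logits, not a real model): the forward pass is just deterministic arithmetic that produces a score per token, and the "unpredictable" part is the random draw at the end. Fix the seed and even that goes away.

```python
import numpy as np

# Hypothetical scores for a 4-token vocabulary (a real model would compute
# these deterministically from its parameters and the input).
logits = np.array([2.0, 1.0, 0.5, -1.0])
temperature = 0.8

# Softmax with temperature: turn scores into a probability distribution.
probs = np.exp(logits / temperature)
probs /= probs.sum()

# Unseeded RNG: the chosen token index can differ from run to run.
rng = np.random.default_rng()
print(rng.choice(len(probs), p=probs))

# Seeded RNG: the exact same token comes out every run.
rng = np.random.default_rng(seed=42)
print(rng.choice(len(probs), p=probs))
```

The point of the sketch is that the randomness lives in one well-understood place, the final draw, not in some mysterious part of the mechanism.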