Hi folks.
This is the third part of an ongoing theory I’ve been developing over the last few years, called the Infinite Choice Barrier (ICB). The core idea is simple:
General intelligence—especially AGI—is structurally impossible under certain epistemic conditions.
Not morally, not practically. Mathematically.
The argument splits across three barriers:
1. Computability (Gödel, Turing, Rice): You can’t decide what your system can’t see.
2. Entropy (Shannon): Beyond a certain point, signal breaks down structurally.
3. Complexity (Kolmogorov, Chaitin): Most real-world problems are fundamentally incompressible.
This paper focuses on (3): Kolmogorov Complexity.
It argues that most of what humans care about is not just hard to model, but formally unmodellable—because the shortest description of a problem is the problem.
In other words: you can’t generalize from what can’t be compressed.
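To make that concrete, here is a minimal sketch (my illustration, not from the paper) of the standard counting argument behind incompressibility: there are far fewer short descriptions than strings, so almost all strings have no shorter description than themselves.

    # Counting argument for Kolmogorov incompressibility (illustration only).
    # There are 2**n binary strings of length n, but only 2**n - 1 descriptions
    # shorter than n bits (all bit strings of length 0..n-1). So at least one
    # n-bit string has no shorter description at all, and at most roughly
    # 1 in 2**k strings can be compressed by k or more bits.
    n, k = 20, 10
    strings_of_length_n = 2 ** n
    descriptions_shorter_than_n = 2 ** n - 1       # sum of 2**i for i < n
    compressible_by_k_bits = 2 ** (n - k) - 1      # descriptions shorter than n - k bits

    print("strings of length n:          ", strings_of_length_n)
    print("descriptions shorter than n:  ", descriptions_shorter_than_n)
    print("fraction compressible by >= k:", compressible_by_k_bits / strings_of_length_n)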
⸻
Here’s the abstract:
There is a common misconception that artificial general intelligence (AGI) will emerge through scale, memory, or recursive optimization. This paper argues the opposite: that as systems scale, they approach the structural limit of generalization itself.
Using Kolmogorov complexity, we show that many real-world problems—particularly those involving social meaning, context divergence, and semantic volatility—are formally incompressible and thus unlearnable by any finite algorithm.
This is not a performance issue. It’s a mathematical wall. And it doesn’t care how many tokens you’ve got.
The paper isn’t light, but it’s precise.
If you’re into limits, structures, and why most intelligence happens outside of optimization, it might be worth your time.
https://philpapers.org/archive/SCHAII-18.pdf
Happy to hear your thoughts.
Unless you believe in magic, the human brain proves that human-level general intelligence is possible in our physical universe, running on a system based on the laws of said physical universe. Given that, there's no particular reason to think that "what the brain does", or a reasonably close approximation of it, can't be done on another "system based on the laws of our physical universe."
Also, Marcus Hutter already proved that AIXI[1] is a universal intelligence, whose only shortcoming is that it requires infinite compute. But the goal of the AGI project is not "universal intelligence" but simply intelligence that approximates our own. So I'd count AIXI as another bit of suggestive evidence that AGI is possible.
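For reference, AIXI's action rule (roughly as stated on the Wikipedia page below) is an expectimax over all possible futures, with the environment weighted over every program q on a universal machine U that reproduces the history so far; that sum over all programs is where the infinite compute requirement comes from:

    a_t := \arg\max_{a_t} \sum_{o_t r_t} \ldots \max_{a_m} \sum_{o_m r_m}
           [r_t + \ldots + r_m] \sum_{q : U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\mathrm{length}(q)}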
Using Kolmogorov complexity, we show that many real-world problems—particularly those involving social meaning, context divergence, and semantic volatility—are formally incompressible and thus unlearnable by any finite algorithm.
So you're saying the human brain can do something infinite then?
Still, happy to give the paper a read... eventually. Unfortunately the "papers to read" pile just keeps getting taller and taller. :-(
[1]: https://en.wikipedia.org/wiki/AIXI