I think for my purposes defining "continuous" as "unmeasurably discrete" produces the same results.
I.e., there is an irreducible geometrical continuity in the sense that no discontinuity can ever appear. The state density is maximal.
Via this route we reproduce the same point: computationalism/simulationism is then just the thesis that computers qua measurably discrete systems can realise dense, unmeasurably discrete systems.
This can be shown to be impossible with much the same argument: spatial and temporal geometrical properties obtain in virtue of dense discreteness, and fail to obtain at measurable levels.
The key property of continuity is its irreducibility to measurably discrete systems. That irreducibility isn't, however, limited to continuity.
Wolfram makes this point about the failures of reductionism in a perfectly discrete context, i.e., that no CA can compute a CA whose complexity is greater than it can summarise.
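The Wolfram point can be made concrete with a toy sketch (my illustration, not the argument as he states it): for elementary CA Rule 30 no closed-form shortcut is known, so obtaining the state at step n appears to require actually running all n steps.

```python
# Toy illustration of computational irreducibility: elementary CA
# Rule 30. No known "summary" predicts step n cheaper than iterating.

def rule30_step(cells):
    """Advance one step of Rule 30: new = left XOR (center OR right).
    Boundaries are treated as fixed zeros."""
    padded = [0] + list(cells) + [0]
    return [padded[i - 1] ^ (padded[i] | padded[i + 1])
            for i in range(1, len(padded) - 1)]

def evolve(cells, steps):
    """The only known way to reach step n: iterate all n steps."""
    for _ in range(steps):
        cells = rule30_step(cells)
    return cells

# A single live cell grows into an aperiodic, pseudo-random pattern.
row = [0] * 7
row[3] = 1
print(evolve(row, 1))  # [0, 0, 1, 1, 1, 0, 0]
```

Whether this toy fact generalises to the reductionism claim is of course exactly what's in dispute; the code only shows what "no cheaper summary" means in a discrete setting.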
I prefer to press a continuous angle: our best theories of all of reality are continuous and geometrical. That energy levels are discrete in bounded quantum systems has almost nothing to do with the indispensability of continuous mathematics in every known physical system -- including that very bounded wavefunction.
I agree that we can’t reject the hypothesis that reality is continuous (and, even if spacetime turns out to be discrete, it seems hard to imagine that the amplitudes of the wavefunction(s) have finitely many possible complex values, though I suppose we can’t rule it out)
I disagree that this necessarily implies any difficulty for the possibility of brain scanning and AI.
Just as a band-limited signal sampled above its Nyquist rate (twice the highest frequency present) can be perfectly recovered (there's still discretization of the amplitudes, but I hear that can be handled too), I don't see why arbitrarily high frequency (in space and time) should be necessary in order to model the behavior of a brain to the point of long-term indistinguishability.
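The sampling-theorem point can be sketched numerically. This is a toy with assumed frequencies; the reconstruction is near-perfect rather than exact only because the sample window is finite, not infinite:

```python
import numpy as np

fs = 100.0                    # sampling rate (Hz), chosen for illustration
n = np.arange(int(fs * 2.0))  # 2 seconds of samples
t_n = n / fs

def x(t):
    # Band-limited test signal: highest component 17 Hz < fs/2 = 50 Hz
    return np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 17 * t)

samples = x(t_n)

def reconstruct(t):
    # Whittaker-Shannon sinc interpolation from the discrete samples
    # (np.sinc is the normalized sinc: sin(pi*x)/(pi*x))
    return float(np.sum(samples * np.sinc(fs * t - n)))

# Evaluate at an off-grid time: the error is tiny relative to the
# signal amplitude, despite the signal never being sampled there.
t0 = 0.7137
print(abs(reconstruct(t0) - x(t0)))  # small (far below the amplitude ~1.5)
```

The point being illustrated: finite sampling rate does not by itself lose information about a band-limited process, which is the premise of the brain-modelling claim above.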
(That being said, I don’t particularly expect whole brain emulation to ever be achieved, I just don’t see “spacetime is continuous (or well approximated as continuous)” as being a strong argument for it being impossible.)
I’m not sure what you mean by computationalism.
If you mean the idea that the way the world works is computable in the abstract sense (not requiring any practical bounds on the computational resources needed), then a world that is discrete and finite, merely with extremely fine grains, poses no issue for computation in that abstract sense (just make the imaginary computer even bigger).
If you mean like, an accurate simulation of the past of the world being run within the world, yeah that doesn’t work.
Simulation doesn't mean exact simulation. Beyond a certain decimal place, the value of a continuous property is noise and can be disregarded for the purposes of simulation. I also believe that 1) is false.
The issue is that the word "computer" means not "device we have made" but "universal Turing machine".
I.e., a computer is any system which realises a function from the naturals to the naturals.
Physics has barely any use, if any at all, for those functions. That is a very important point.
Computer scientists (i.e., discrete mathematicians) are not the people who are even able to describe, engineer and build whatever is needed for an AI -- if, as I claim, continuous dynamical properties are needed.
(As, for example, they are needed by pretty much every system.)
Computer scientists do study computers that use real numbers, for example it's known that such computers can solve all problems in #P in polynomial time. Many other areas of computer science also use plenty of real analysis.
Whatever the length of the smallest discrete unit of spacetime -- call it L -- continuous spatial and temporal properties are those which have a state density O(1/L^2).
This may seem weird, and indeed, it's far less weird if you just say "continuous".
But here's an intuition: spatio-temporal continuity is "scale-free" in the sense that stuff happening at the sub-proton scale is affected by stuff happening at the galactic scale.
Thus reality has to be able to "zoom" from the sub-proton to the galactic.
In the case of organic plasticity, I do think that macroscopic effects which are whole-body distributed (including, e.g., thoughts) have to drive protein expression at the sub-sub-cellular level.
Consider simulating that with a low-state discrete computer: it is many orders of magnitude more data than a planet-sized computer could store, and would take many more years than the lifetime of the universe (consider the number of molecules to store, and their interaction effects from whole-body down).
Running operations at anything in the nanoseconds makes this simulation impossible. The state density simply does need to be much closer to O(1/L^2).
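To make the scale claim concrete, here is a rough order-of-magnitude sketch; every figure below is an assumption chosen for illustration, not a measurement:

```python
# All numbers are order-of-magnitude assumptions for illustration only.
atoms_in_body = 7e27      # commonly cited rough estimate for a human body
bytes_per_atom = 32       # assumed: position, momentum, species, bonding

snapshot_bytes = atoms_in_body * bytes_per_atom   # ~2e29 bytes per snapshot

exabyte = 1e18            # roughly a very large present-day datacenter
print(snapshot_bytes / exabyte)  # one snapshot is ~1e11 exabytes

# Time side: molecular dynamics typically needs ~femtosecond steps.
fs_step = 1e-15
steps_per_simulated_second = 1 / fs_step          # 1e15 steps
ops_per_step = atoms_in_body                      # at least one op per atom
ops_per_simulated_second = steps_per_simulated_second * ops_per_step
# ~7e42 ops per simulated second: even at 1e18 ops/s, that is ~7e24
# wall-clock seconds, vastly longer than the age of the universe (~4e17 s).
```

None of this settles whether molecular fidelity is actually required for a brain, which is the premise doing the work in the argument above; the sketch only shows what that premise costs if granted.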