The Posthuman Condition

November 14, 2008

In the essay “The Posthuman Condition,” Kip Werking quotes Oxford philosopher Nick Bostrom:

at least one of the following propositions is true: (1) the human species is very likely to go extinct before reaching a “posthuman” stage; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof); (3) we are almost certainly living in a computer simulation. It follows that the belief that there is a significant chance that we will one day become posthumans who run ancestor-simulations is false, unless we are currently living in a simulation.

This idea is a lot of fun.  I accept the truth of the disjunction, but reject the (implicit) suggestion that we are likely simulations run by, in essence, our posthuman descendants.  My money is on (2), but to cover all bases I would place daily doubles on (2) and (3), on (2) and (1), and a triple safety bet on (1), (2), and (3).  I think (1) is false, but I believe I can defend (2), and so if (3) is true it is not our posthuman descendants who are administering the simulation.  Therefore (1) might as well be true: some programs just self-destruct.

I like this idea of multiple levels of simulation very much.  My problem with (2) is that it does not allow for the necessary hierarchies of complexity among the levels of simulation.  Call the administrators of this rat maze we call the universe L^1, where we are L^0.  Suppose L^1’s universe is not deterministic, in the sense that its nature forever remains a mystery to them.  Then, by what we understand about chaotic behavior, it seems unlikely that any simulation (leading to L^0 and beyond) will parallel the evolution of L^1.  Perhaps such a scenario still qualifies as an “ancestor-simulation,” however different the initial conditions and rules of evolution.  But the time scale separating L^1 from the ancestors who may have resembled L^0 is vast.  It seems impossible that an approximate system would parallel L^1’s universe closely enough to qualify as “their own evolutionary history.”
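
To see why chaos blocks a parallel evolution, here is a minimal sketch (my own illustration, not anything from Werking or Bostrom) using the logistic map, a standard toy chaotic system: two histories whose initial conditions differ by one part in a billion disagree completely within a few dozen steps, so a simulation seeded with merely approximate initial data cannot be expected to track the original.

```python
# Sensitive dependence on initial conditions in the logistic map x -> r*x*(1-x).
# Two histories that start a billionth apart diverge to order-one differences,
# which is the obstacle to an approximate simulation "paralleling" the original.

def logistic_trajectory(x0, r=4.0, steps=60):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

original = logistic_trajectory(0.3)
simulated = logistic_trajectory(0.3 + 1e-9)  # almost-perfect initial data

for t in (0, 10, 20, 30, 40, 50):
    print(f"step {t:2d}: |difference| = {abs(original[t] - simulated[t]):.9f}")
```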

On the other hand, suppose L^1 lives in a deterministic universe, meaning a set of rules can be found from which the precise nature of the universe is determined.*  Suppose further that this determinism is discrete in space and time and finite in its rules of evolution.  With such strong hypotheses, surely miniature, accurate simulations of the universe are possible, and in fact simulations within simulations.  Yet even here we have a problem of resources.  The universe cannot be embedded as a proper subset of itself, let alone run as a simulation at twice or ten times the speed of the ambient original.  If it could be embedded as a proper subset of itself, then an infinite regression would be necessary, which presupposes self-similarity and precludes a discrete universe.  Maybe this is okay.  Maybe L^1 will get sufficient information by looking at some proper subset of the universe.  Still, with all the quantum computing L^1 may have at its disposal, it cannot compress the universe, since it would have to compress the behavior at the quantum level too, et cetera.  So in this case even a computer simulating a region no bigger than our solar system would have to be several times that size, and a lot of tricks would be needed to seal off the outside universe (such as visual simulation of distant galaxies).  The engineering that would allow such a structure is nearly unfathomable, but even permitting such a computer, what hope is there for the existence of L^2 or L^{-1}?  So the fantastic idea of many levels of simulation relies on the levels being qualitatively distinct.
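
The accounting behind this resource objection can be made explicit with a toy model (my own back-of-the-envelope sketch; the budget and overhead numbers are invented for illustration): if a faithful simulation costs at least one unit of host state per unit of simulated state, then each level can only be a shrinking fraction of the level above it, never a full-size copy.

```python
# Toy state accounting for nested simulations.  If a host devotes a fraction
# `budget` < 1 of its own state to the simulation, and representing one unit of
# simulated state costs `overhead` >= 1 units of host state, then each level
# can be at most budget/overhead times the size of its host.  A faithful
# full-size copy would need budget/overhead >= 1, which a proper subset of the
# host can never supply; the levels must shrink, i.e., be qualitatively distinct.

def nested_sizes(top_size=1.0, budget=0.1, overhead=1.0, levels=5):
    """Relative state content of each simulation level, starting from the host."""
    sizes = [top_size]
    for _ in range(levels):
        sizes.append(sizes[-1] * budget / overhead)
    return sizes

labels = ["L^1 (host)", "L^0 (us)", "L^-1", "L^-2", "L^-3", "L^-4"]
for label, size in zip(labels, nested_sizes()):
    print(f"{label}: relative state content = {size:.0e}")
```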

Of course, we assumed the universe is discrete in order to be generous to the possibility of running faithful simulations, and in the end that very assumption was used to argue that such simulations can hardly be faithful.  So let’s suppose that the universe is not discrete.  Then, for example, it may be that the natural laws repeat themselves in self-similar ways, all the way down.  In that case it may be possible to embed a faithful model of the universe as a proper subset of itself, but there will always be the problem of construction and of setting initial conditions.  How does one construct and program a computer that faithfully simulates a universe with an infinite regression of physical states and laws?  Only very roughly, and that with exceptionally fine tools.  In conclusion, I cannot argue against the possibility of universes within universes, and simulations within simulations, but in these cases the different levels of simulation are qualitatively distinct, and therefore it should not be possible for a posthuman species to run simulations of earlier stages of its own species with any accuracy.
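
One way to picture the “only very roughly” (a toy of my own devising, with an invented self-similar rule standing in for physics): a quantity defined by an infinite regression of scales can only be computed down to a cutoff depth on any finite machine, so the simulated state always carries a truncation error, shrinking only as the tools get finer.

```python
# A self-similar universe has structure at every scale, "all the way down."
# Any physical computer realizes only finitely many of those scales, so its
# model of the state is an approximation fixed by the cutoff depth.  Here the
# "law" at each scale contributes half the value of the scale below (an
# invented stand-in for self-similar physics), and we compare cutoffs.

def state_with_cutoff(depth):
    """State at the top scale, ignoring everything below `depth` levels."""
    if depth == 0:
        return 0.0              # truncation: pretend no deeper structure exists
    return 1.0 + 0.5 * state_with_cutoff(depth - 1)

exact = 2.0                     # fixed point of x = 1 + x/2, the full regression
for depth in (1, 4, 8, 16):
    approx = state_with_cutoff(depth)
    print(f"cutoff {depth:2d}: state = {approx:.6f}, error = {exact - approx:.6f}")
```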

*[We might call this weak determinism, as it does not necessarily follow that states can be predicted before they occur.  As for defining determinism by the claim that there is but one future, I don’t believe that definition is well-defined, since in any universe, to a hypothetical oracle (e.g., a future us), there is tautologically but one future.]