

Physics

Note: this section can be skipped by readers who are interested in the theoretical framework only, not in its potential implications for the real world.

Virtual realities used for pilot training or video games are rapidly becoming more and more convincing, as computers get roughly 1000 times faster per dollar each decade -- a consequence of Moore's law, first formulated in 1965. At the current pace, however, the Bremermann limit [2] of roughly $10^{51}$ operations per second on not more than $10^{32}$ bits, derived for the ``ultimate laptop'' [16] with 1 kg of mass and 1 liter of volume, will not be reachable within this century, so there is no obvious reason why Moore's law should break down any time soon. A simple extrapolation has thus led many to predict that within a few decades computers will match brains in terms of raw computing power, and that soon there will be reasonably complex virtual worlds inhabited by reasonably complex virtual beings.
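
As a rough check of the time scale (assuming, purely for illustration, on the order of $10^{12}$ operations per second for a present-day machine): a factor of 1000 per decade amounts to three orders of magnitude per decade, so closing the gap to the Bremermann limit would take roughly $(51-12)/3 = 13$ decades, that is, about 130 years -- indeed beyond the current century.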

In the past decades numerous science fiction authors have anticipated this trend in novels about simulated humans living on sufficiently fast digital machines, e.g., [7]. But even serious and reputable computer pioneers have suggested that the universe essentially is just a computer. In particular, the ``inventor of the computer'' Konrad Zuse not only created the world's first binary machines in the 1930s, the first working programmable computer in 1941, and the first higher-level programming language around 1945, but also introduced the concept of Computing Space (Rechnender Raum), suggesting that all physical events are just results of calculations on a grid of numerous communicating processors [28]. Even earlier, Gottfried Wilhelm von Leibniz (who not only co-invented calculus but also built the first mechanical multiplier in 1670) caused a stir by claiming that everything is computable (compare C. Schmidhuber's concept of the mathscape [18]).

So it does not seem entirely ludicrous to study the consequences of the idea that we are really living in a ``simulation,'' one that is real enough to make many of its ``inhabitants'' smile at the mere thought of being computed. In the absence of evidence to the contrary, let us assume for a moment that the physical world around us is indeed generated by a computational process, and that any possible sequence of observations is therefore computable in the limit [22]. For example, let $x$ be an infinite sequence of finite bitstrings $x^1,x^2,\ldots$ representing the history of some discrete universe, where $x^k$ represents the state of the universe at discrete time step $k$, and $x^1$ the ``Big Bang'' [20]. Suppose there is a finite algorithm $A$ that computes $x^{k+1}$ ($k \geq 1$) from $x^{k}$ and additional information $noise^k$ (this may require numerous computational steps of $A$, that is, ``local'' time of the universe may run comparatively slowly). Assume further that $noise^k$ is not truly random but calculated by invoking a finite pseudorandom generator subroutine. Then $x$ has a finite constructive description and is computable in the limit.
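
Purely to make this setup concrete, here is a minimal toy sketch in Python (the particular update rule, the linear congruential PRG, and all identifiers below are illustrative assumptions, not the construction used in this paper):

    # Toy sketch: a discrete "universe" whose next state x^{k+1} is a
    # deterministic function A of the current state x^k and pseudorandom
    # noise^k produced by a finite PRG subroutine.  All concrete choices
    # here are arbitrary illustrations.

    def prg(seed):
        """Finite pseudorandom generator subroutine (a 64-bit linear congruential step)."""
        return (6364136223846793005 * seed + 1442695040888963407) % 2**64

    def step_A(state, noise):
        """Finite algorithm A: computes x^{k+1} from x^k and noise^k by
        XORing each bit with one noise bit and with its left neighbour."""
        n = len(state)
        return [state[i] ^ ((noise >> (i % 64)) & 1) ^ state[(i - 1) % n]
                for i in range(n)]

    def universe(x1, seed, steps):
        """Returns the finite history prefix x^1, ..., x^steps."""
        history, state, noise_seed = [x1], x1, seed
        for _ in range(steps - 1):
            noise_seed = prg(noise_seed)       # noise^k: pseudorandom, not truly random
            state = step_A(state, noise_seed)  # x^{k+1} = A(x^k, noise^k)
            history.append(state)
        return history

    if __name__ == "__main__":
        big_bang = [1] + [0] * 15              # x^1, the "Big Bang" state
        for k, xk in enumerate(universe(big_bang, seed=42, steps=5), start=1):
            print("x^%d = %s" % (k, "".join(map(str, xk))))

Given $x^1$ and the seed, this short program reproduces any finite prefix of the history, which is the sense in which $x$ has a finite constructive description.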

Contrary to a widespread misconception, quantum physics and Heisenberg's uncertainty principle do not rule out such pseudorandomness behind the apparently random or noisy physical observations -- compare reference [26] by 't Hooft (Physics Nobel Prize 1999).

If our computability assumption holds, then in general we cannot know which machine is used to compute the data. But it seems plausible to assume that this machine suffers from a computational resource problem, that is, that the a priori probability of investing resources in any particular computation tends to decrease with growing computational cost.

To gauge the plausibility of this, consider that most data generated on your own computer are computable within a few microseconds, some take a few seconds, a few take hours, very few take days, and so on. Similarly, most files on your machine are small, a few are large, very few are very large. Obviously, anybody wishing to become a ``God-like Great Programmer'' by programming and simulating universes [20] will have a strong built-in bias towards easily computable ones. This suggests the notion of a ``Frugal Creator'' (Leonid Levin, personal communication, 2001).

The reader will have noticed that this line of thought leads straight to the Speed Prior $S$ discussed in the previous sections. It may even lend some additional motivation to $S$.

S-based Predictions. Now we are ready for an extreme application. Assuming that the entire history of our universe is sampled from $S$ or a less dominant prior reflecting suboptimal computation of the history, we can immediately predict:

1. Our universe will not get many times older than it is now [22] -- the probability that its history will extend beyond the one computable in the current phase of FAST (that is, that it will be prolonged into the next phase) is at most 50%; infinite futures have measure zero (see the sketch following this list).

2. Any apparent randomness in any physical observation must be due to some yet unknown but fast pseudo-random generator PRG [22] which we should try to discover.

2a. A re-examination of beta decay patterns may reveal that a very simple, fast, but maybe not quite trivial PRG is responsible for the apparently random decays of neutrons into protons, electrons and antineutrinos.

2b. Whenever there are several possible continuations of our universe corresponding to different Schrödinger wave function collapses -- compare Everett's widely accepted many worlds hypothesis [5] -- we should be more likely to end up in one computable by a short and fast algorithm. A re-examination of split experiment data involving entangled states, such as the observations of the spins of initially close but soon distant particles with correlated spins, might reveal unexpected, nonobvious, nonlocal algorithmic regularity due to a fast PRG.

3. Large scale quantum computation [1] will not work well, essentially because it would require exponentially growing computational resources in interfering ``parallel universes'' [5].
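
A quick heuristic sketch of where the numbers in prediction 1 come from, based only on the geometric phase weights $2^{-i}$ in the definition of $S$ from the previous sections (an informal argument, not a proof): the combined weight of all phases after the current phase $i$ satisfies $\sum_{j>i} 2^{-j} = 2^{-i}$, while $\lim_{j\to\infty} \sum_{k \geq j} 2^{-k} = \lim_{j\to\infty} 2^{-j+1} = 0$. The first identity says that the later phases together never outweigh the current phase, bounding the conditional probability of a prolongation into the next phase by 50%; the second says that continuations requiring arbitrarily late phases, in particular infinite futures, receive measure zero.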

Prediction 2 is verifiable but not necessarily falsifiable within a fixed time interval given in advance. Still, perhaps the main reason for the current absence of empirical evidence in this vein is that nobody has systematically looked for it yet.

The broader context. The concept of dominance is useful for assessing prediction quality. Let $x$ denote the history of a universe. If $x$ is sampled from an enumerable or recursive prior, then $M$-based prediction will work well, since $M$ dominates all enumerable priors. If $x$ is sampled from an even more dominant cumulatively enumerable measure (CEM) [22], then we may use the fact that there is a universal CEM $\mu^E$ that dominates $M$ and all other CEMs [22,23]. Using Hutter's loss bounds [9] we obtain a good $\mu^E$-based predictor. Certain even more dominant priors [22,23] also allow for nonrecursive optimal predictions computable in the limit. The price to pay for the recursive computability of $S$-based inference is the loss of dominance with respect to $M$, $\mu^E$, etc.
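
For readers who skipped the formal sections, the notion of dominance used here can be stated informally as follows (see [22,23] for the precise semimeasure versions): a prior $\nu$ dominates a prior $\mu$ if $\exists\, c>0 \; \forall x: \; \nu(x) \geq c\, \mu(x)$, that is, up to a constant factor $\nu$ assigns at least as much probability to every history prefix as $\mu$ does. This is what makes $\nu$-based predictors competitive when the data are in fact sampled from $\mu$.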

The computability assumptions embodied by the various priors mentioned above add predictive power to the anthropic principle (AP) [3], which essentially just says that the conditional probability of finding oneself in a universe compatible with one's existence will always remain 1 -- by itself, the AP does not allow for any additional nontrivial predictions.

