

Speed Prior-Based Predictions for Our Universe

"In the beginning was the code."

FIRST SENTENCE OF THE GREAT PROGRAMMER'S BIBLE

Physicists, economists, and other inductive scientists make predictions based on observations. Astonishingly, however, few physicists are aware of the theory of optimal inductive inference [62,28]. In fact, when talking about the very nature of their inductive business, many physicists cite rather vague concepts such as Popper's falsifiability [39] instead of referring to quantitative results.

All widely accepted physical theories, however, are accepted not because they are falsifiable (they are not), nor because they match the data (many alternative theories also match the data), but because they are simple in a certain sense. For example, the theory of gravitation is induced from locally observable training examples such as falling apples and the movements of distant light sources, presumably stars. The theory predicts that apples on distant planets in other galaxies will fall as well. Currently nobody is able to verify or falsify this, but everybody believes it, because this generalization step makes the theory simpler than alternative theories with separate laws for apples on other planets. The same holds for superstring theory [18] and Everett's many worlds theory [12], which at present are likewise neither verifiable nor falsifiable, yet offer comparatively simple explanations of numerous observations. In particular, most of Everett's postulated many worlds will remain unobservable forever, but the assumption of their existence simplifies the theory, thus making it more beautiful and acceptable.

In Sections 3 and 4 we assumed that the probabilities of next events, given previous events, are (limit-)computable. Here we make a stronger assumption by adopting Zuse's thesis [75,76], namely, that the very universe is actually being computed deterministically, e.g., on a cellular automaton (CA) [68,70]. Quantum physics, quantum computation [3,10,38], Heisenberg's uncertainty principle and Bell's inequality [2] provide no physical evidence against this possibility; see, e.g., [66].

But then what is our universe's precise algorithm? The following method [48] does compute it:

Systematically create and execute all programs for a universal computer, such as a Turing machine or a CA: the first program is run for one instruction every second step on average, the next for one instruction every second of the remaining steps on average, and so on; thus the $k$-th program receives on average a $2^{-k}$ fraction of all instruction steps.
This method in a certain sense implements the simplest theory of everything: all computable universes, including ours and ourselves as observers, are computed by the very short program that generates and executes all possible programs [48]. In nested fashion, some of these programs will execute processes that again compute all possible universes, etc. [48]. Of course, observers in "higher-level" universes may be completely unaware of observers or universes computed by nested processes, and vice versa. For example, it seems hard to track and interpret the computations performed by a cup of tea.
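
In code, a minimal sketch of this interleaving might look as follows (assuming a hypothetical Program class whose step() method executes one instruction of the k-th program on some universal machine; any Turing machine or CA simulator would do):

    class Program:
        """Stand-in for the k-th program of some universal machine;
        step() executes one instruction."""
        def __init__(self, index):
            self.index = index
            self.state = 0

        def step(self):
            # Dummy instruction; a real machine would update a tape or CA grid.
            self.state += 1

    def dovetail(total_steps):
        """Run program 1 on every 2nd step, program 2 on every 2nd of the
        remaining steps, and so on: program k gets a 2**-k share of all steps."""
        programs = {}
        for step in range(1, total_steps + 1):
            # The position of the lowest set bit of `step` selects the program:
            # bit k is the lowest set bit for exactly a 2**-k fraction of steps.
            k = (step & -step).bit_length()
            if k not in programs:
                programs[k] = Program(k)
            programs[k].step()
        return programs

The bit trick merely implements the stated schedule exactly: steps 1, 3, 5, ... go to program 1, steps 2, 6, 10, ... to program 2, steps 4, 12, 20, ... to program 3, and so on.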

The simple method above is more efficient than it may seem at first glance. A bit of thought shows that it even has the optimal order of complexity: it outputs our universe's history as quickly as this history's fastest program does, save for a (possibly huge) constant slowdown factor that does not depend on the output size.
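
To see the constant factor, suppose (labeling programs hypothetically by their position in the enumeration) that the fastest program computing our universe's history is program number $k$, and that it needs $t$ instructions to compute the history up to some point. Since program $k$ receives on average every $2^k$-th instruction step of the overall computation, the method needs at most roughly $2^k t$ steps to reach the same point: the slowdown factor $2^k$ may be huge, but it depends only on the program's index, not on the size of the output.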

Nevertheless, some universes are fundamentally harder to compute than others. This is reflected by the Speed Prior $S$ discussed above (Section 5). So let us assume that our universe's history is sampled from $S$ or a less dominant prior reflecting suboptimal computation of the history. Now we can immediately predict:

1. Our universe will not get many times older than it is now [50] -- essentially, the probability that it will last $2^n$ times longer than it has lasted so far is at most $2^{-n}$ (see the sketch following this list).

2. Any apparent randomness in any physical observation must be due to some yet unknown but fast pseudo-random generator (PRG) [50], which we should try to discover.

2a. A re-examination of beta decay patterns may reveal that a very simple, fast, but maybe not quite trivial PRG is responsible for the apparently random decays of neutrons into protons, electrons and antineutrinos.

2b. Whenever there are several possible continuations of our universe corresponding to different Schrödinger wave function collapses (compare Everett's widely accepted many worlds hypothesis [12]), we should be more likely to end up in one computable by a short and fast algorithm. A re-examination of split experiment data involving entangled states, such as the observations of spins of initially close but soon distant particles with correlated spins, might reveal unexpected, nonobvious, nonlocal algorithmic regularity due to a fast PRG.

3. Large-scale quantum computation [3] will not work well, essentially because it would require exponentially growing computational resources in interfering "parallel universes" [12].

4. Any probabilistic algorithm depending on truly random inputs from the environment will not scale well in practice.
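
A rough way to see the bound in Prediction 1 (an informal sketch in the spirit of [50], not the formal argument given there): under $S$, the probability of a history is dominated by the contributions of its fastest programs, and each doubling of the required total runtime at most halves the prior mass of the programs that can still deliver the data within their step shares. After $n$ such doublings at most a fraction $2^{-n}$ of the mass remains, so the conditional probability that a history already computed within $t$ steps will require more than $2^n t$ steps in total is at most about $2^{-n}$.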

Prediction 2 is verifiable but not necessarily falsifiable within a fixed time interval given in advance. Still, perhaps the main reason for the current absence of empirical evidence in this vein is that few [11] have looked for it.
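
For example, a crude first-pass search for such regularity could simply check whether recorded observation sequences are compressible at all, since the compressed size upper-bounds a sequence's algorithmic complexity (a minimal sketch; the file name decays.bin is hypothetical, and zlib merely stands in for any stronger compressor or model-based test):

    import zlib

    def compression_ratio(observations):
        """observations: bytes encoding a recorded outcome sequence,
        e.g., binary decay events. Returns compressed size / raw size."""
        return len(zlib.compress(observations, 9)) / len(observations)

    # Hypothetical usage:
    #   data = open("decays.bin", "rb").read()
    #   print(compression_ratio(data))

A ratio well below 1.0 would signal structure that a simple, fast PRG could produce; a ratio near 1.0 merely means no regularity was found at this crude level, in line with the remark above that such a search is verifiable rather than falsifiable.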

In recent decades several well-known physicists have started writing about topics of computer science, e.g., [38,10], sometimes suggesting that real-world physics might allow for computing things that are not computable in the traditional sense. Unimpressed by this trend, computer scientists have argued in favor of the opposite view: since there is no evidence that we need more than traditional computability to explain the world, we should try to make do without this assumption, e.g., [75,76,13,48].

