Note: this section can be skipped by readers who are interested
in the theoretical framework only, not in its potential implications
for the real world.
Virtual realities used for pilot training
or video games are rapidly becoming more convincing,
as computers get roughly 1000 times faster
per dollar each decade -- a consequence of Moore's law, first formulated in 1965.
At the current pace, however, the Bremermann
limit [2]
of roughly $10^{51}$
operations per second, on not more than
$10^{32}$ bits for the ``ultimate laptop'' [16] with 1 kg of
mass and 1 liter of volume,
will not be reachable within this century, and there is no
obvious reason why Moore's law should break down any time soon. Thus a simple
extrapolation has led many to predict that within a few decades computers will
match brains in terms of raw computing power, and that
soon there will be reasonably complex virtual worlds inhabited
by reasonably complex virtual beings.
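As a rough sanity check of this claim: if present-day machines perform about $10^{c}$ operations per second, where the generous bound $c \le 18$ is merely an illustrative assumption, then growth by a factor of $10^{3}$ per decade reaches the limit quoted above only after
\[
d \;\ge\; \frac{51 - c}{3} \;\ge\; \frac{51 - 18}{3} \;=\; 11 \ \text{decades},
\]
that is, well beyond the end of this century.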
In the past decades numerous science fiction authors have anticipated this trend in novels about simulated humans living on sufficiently fast digital machines, e.g., [7]. But even serious and reputable computer pioneers have suggested that the universe essentially is just a computer. In particular, the ``inventor of the computer'' Konrad Zuse not only created the world's first binary machines in the 1930s, the first working programmable computer in 1941, and the first higher-level programming language around 1945, but also introduced the concept of Computing Space (Rechnender Raum), suggesting that all physical events are just results of calculations on a grid of numerous communicating processors [28]. Even earlier, Gottfried Wilhelm von Leibniz (who not only co-invented calculus but also built the first mechanical multiplier in 1670) caused a stir by claiming that everything is computable (compare C. Schmidhuber's concept of the mathscape [18]).
So it does not seem entirely ludicrous to study the consequences of the
idea that we are really living in a ``simulation,'' one
that is real enough to
make many of its ``inhabitants'' smile at the mere thought of being computed.
In the absence of contrary evidence, let us assume for a moment
that the physical
world around us is indeed generated by a computational process, and that any
possible sequence of observations is therefore computable in the limit
[22].
For example, let $x$ be an infinite sequence of finite bitstrings
$x^1, x^2, \ldots$ representing the history of some discrete universe,
where $x^k$
represents the state of the universe at discrete time step
$k$, and $x^1$
the ``Big Bang'' [20].
Suppose there is a finite algorithm $A$
that computes $x^{k+1}$
($k \geq 1$) from $x^k$
and additional information $noise^k$
(this may
require numerous computational steps of $A$,
that is, ``local'' time
of the universe may run comparatively slowly). Assume that
$noise^k$ is not truly random but calculated by invoking a finite pseudorandom
generator subroutine. Then $x$
has a finite constructive description
and is computable in the limit.
Contrary to a widespread misunderstanding, quantum physics and Heisenberg's uncertainty principle do not rule out such pseudorandomness in the apparently random or noisy physical observations -- compare reference [26] by 't Hooft (physics Nobel prize 1999).
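To make the above construction concrete, here is a minimal toy sketch in Python; the particular update rule, the linear congruential PRG and its constants are arbitrary choices made only for illustration, not anything suggested by the text:
\begin{verbatim}
# Toy model of a deterministically computed "universe history":
# x[k+1] is computed from x[k] plus pseudorandom bits supplied by a
# finite PRG subroutine, so the whole history has a finite description.

def prg(seed):
    """Finite pseudorandom generator subroutine (an arbitrary LCG)."""
    state = seed
    while True:
        state = (1103515245 * state + 12345) % 2**31
        yield state & 1                      # one pseudorandom bit

def next_state(x_k, noise_bits):
    """Illustrative update rule A: flip each cell hit by a noise bit."""
    return [cell ^ next(noise_bits) for cell in x_k]

def history(n_steps, n_cells=16, seed=42):
    noise_bits = prg(seed)
    x = [[0] * n_cells]                      # x[0]: the "Big Bang" state
    for _ in range(n_steps):
        x.append(next_state(x[-1], noise_bits))
    return x

# The entire (arbitrarily long) history is determined by this short
# program plus the seed -- a finite constructive description.
print(history(5)[-1])
\end{verbatim}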
If our computability assumption holds, then in general we cannot know which machine is used to compute the data. But it seems plausible to assume that this machine does suffer from a computational resource problem, that is, that the a priori probability of investing resources into any computation tends to decrease with growing computational costs.
To evaluate the plausibility of this, consider that most data generated on your own computer are computable within a few microseconds, some take a few seconds, few take hours, very few take days, and so on. Similarly, most files on your machine are small, few are large, very few are very large. Obviously, anybody wishing to become a ``God-like Great Programmer'' by programming and simulating universes [20] will have a strong built-in bias towards easily computable ones. This provokes the notion of a ``Frugal Creator'' (Leonid Levin, personal communication, 2001).
The reader will have noticed that
this line of thought leads straight to the Speed Prior discussed in
the previous sections. It may even lend some additional motivation to
$S$.
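To illustrate how such a bias emerges from bounded resources, here is a toy sketch assuming a Levin-search-style allocation in which phase $i$ grants every program $p$ with $l(p) \le i$ at most $2^{i-l(p)}$ steps; it is meant only to convey the flavor of a FAST-like scheme, not its exact definition:
\begin{verbatim}
# Toy view of a FAST-like phase scheme: phase i grants every program p
# with l(p) <= i at most 2**(i - l(p)) instructions.  A program of
# length l that needs t steps therefore first finishes in phase
# l + ceil(log2(t)): outputs needing many steps per program bit only
# appear in late, expensive phases -- a bias towards cheaply
# computable data.

from math import ceil, log2

def first_completing_phase(program_length, steps_needed):
    """Smallest phase i with 2**(i - program_length) >= steps_needed."""
    return program_length + max(0, ceil(log2(steps_needed)))

for plen, t in [(10, 1), (10, 2**10), (10, 2**30)]:
    print(f"length {plen}, {t} steps -> first completed in phase "
          f"{first_completing_phase(plen, t)}")
\end{verbatim}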
$S$-based Predictions.
Now we are ready for an extreme application. Assuming that
the entire history of our universe is sampled from $S$ or a less
dominant prior reflecting
suboptimal computation of the history, we can immediately predict:
1.
Our universe will not get many times older than it is now
[22] -- the probability
that its history will extend beyond the one computable
in the current phase of FAST (that is, that it will be prolonged into
the next phase) is at most 50\%; infinite futures have measure zero.
2.
Any apparent randomness
in any physical observation
must be due to some as yet unknown but
fast pseudorandom generator PRG [22], which we
should try to discover.
2a.
A re-examination of beta decay patterns may reveal that
a very simple, fast, but maybe not quite trivial
PRG is responsible for the apparently random decays of neutrons
into protons, electrons and antineutrinos.
2b.
Whenever there are several possible continuations of our
universe corresponding to different Schrödinger wave function
collapses -- compare Everett's widely accepted many worlds hypothesis
[5] -- we should be more likely to end up in one computable
by a short and fast algorithm. A re-examination of split experiment
data involving entangled states, such as observations of
initially close but soon distant particles with correlated spins, might
reveal unexpected, nonobvious, nonlocal algorithmic regularity due to
a fast PRG.
3.
Large-scale quantum computation [1] will not
work well, essentially because it would require
exponentially growing computational
resources in interfering ``parallel universes'' [5].
Prediction 2 is verifiable but not necessarily falsifiable within a fixed time interval given in advance. Still, perhaps the main reason for the current absence of empirical evidence in this vein is that nobody has systematically looked for it yet.
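One very simple way to start such a systematic search is sketched below, under strong simplifying assumptions: the observations are coded as a 0/1 sequence, and the candidate generators are restricted to small linear congruential PRGs chosen here purely for illustration. The question asked is merely whether any cheap candidate predicts the recorded bits better than chance:
\begin{verbatim}
# Brute-force sketch: test whether a recorded bit sequence (e.g.
# binarized decay data) is matched better than chance by some small,
# fast PRG.  Candidate family and parameter ranges are illustrative
# assumptions only.

def lcg_bits(multiplier, increment, seed, n, modulus=2**16):
    """Bit stream of a small linear congruential generator."""
    bits, state = [], seed
    for _ in range(n):
        state = (multiplier * state + increment) % modulus
        bits.append(state & 1)
    return bits

def best_candidate(observed):
    """Return the candidate PRG whose output matches the data most often."""
    n, best = len(observed), (-1.0, None)
    for mult in range(3, 64, 2):            # tiny illustrative search space
        for inc in range(1, 32):
            for seed in range(1, 32):
                candidate = lcg_bits(mult, inc, seed, n)
                score = sum(c == o for c, o in zip(candidate, observed)) / n
                best = max(best, (score, (mult, inc, seed)))
    return best                              # match rate near 0.5: looks random

# Example: feed in any recorded 0/1 sequence; a match rate far above 0.5
# (confirmed on held-out data) would hint at a simple underlying generator.
print(best_candidate([0, 1, 1, 0, 1, 0, 0, 1] * 8))
\end{verbatim}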
The broader context.
The concept of dominance is useful for assessing prediction quality.
Let $x$ denote the history of a universe.
If $x$
is sampled from an enumerable or recursive prior then
$M$-based
prediction will work well, since
$M$ dominates all enumerable priors.
If $x$
is sampled from an even more dominant cumulatively enumerable
measure (CEM) [22] then we may use
the fact that there is a universal CEM
that dominates $M$
and all other CEMs [22,23].
Using Hutter's loss bounds [9] we obtain
a good predictor based on this universal CEM. Certain even
more dominant priors [22,23]
also allow for nonrecursive optimal predictions computable in the limit.
The price to pay for the recursive computability of
$S$-based inference is the loss of dominance
with respect to $M$, the universal CEM, and so on.
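Here dominance is meant in the standard sense of the algorithmic probability literature: a (semi)measure $\mu$ dominates a (semi)measure $\nu$ if
\[
\exists\, c > 0 \ \forall x: \quad \mu(x) \;\ge\; c\,\nu(x),
\]
in which case predictions based on $\mu$ are, by the loss bounds referred to above, nearly as good as those of the informed predictor based on the true $\nu$, with the overhead controlled by $\ln c^{-1}$.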
The computability assumptions embodied by the various priors mentioned above add predictive power to the anthropic principle (AP) [3], which essentially just says that the conditional probability of finding oneself in a universe compatible with one's existence will always remain 1 -- the AP by itself does not allow for any additional nontrivial predictions.