








Jürgen Schmidhuber's


Computable Universes &
Algorithmic Theory of Everything


Talk slides

Schmidhuber's papers on computable universes:
[1]
A Computer Scientist's View of Life, the Universe, and Everything.
LNCS 1337, pages 201-208, Springer, 1997.
HTML.
PS.
PS.GZ.
PDF.
(Later copy in ArXiv.)
[2]
Algorithmic theories of everything
(2000).
PDF.
PS.GZ.
HTML.
ArXiv:
quant-ph/0011122.
[3]
Hierarchies of generalized Kolmogorov complexities and
nonenumerable universal measures computable in the limit.
International Journal of Foundations of Computer Science 13(4):587-612, 2002.
PDF.
PS.
Based on sections 2-5 of ref [2] above.
[4]
The Speed Prior: A New Simplicity Measure
Yielding Near-Optimal Computable Predictions.
In J. Kivinen and R. H. Sloan, editors, Proceedings of the 15th
Annual Conference on Computational Learning Theory (COLT 2002), Sydney, Australia,
Lecture Notes in Artificial Intelligence, pages 216-228. Springer, 2002.
PS.
PDF.
HTML.
Based on section 6 of ref [2] above.
[5]
The New AI:
General & Sound & Relevant for Physics.
To appear in B. Goertzel and C. Pennachin, eds.:
Artificial General Intelligence (2006, in press).
PS.
PDF.
HTML.
ArXiv:
cs.AI/0302012.


Digital Physics:
Could it be that our universe is just the output of
a deterministic
computer program?
As a consequence of Moore's law, computers are getting
roughly 1000 times faster per unit of cost each decade.
Apply Moore's law to the video game business. As the virtual
worlds get more convincing, many people will spend more time
in them. Soon most universes will be virtual, only one (the
original) will be real. Then many will be led to suspect the
real one is a simulation as well.
Some are already suspecting this today.
Then the simplest explanation of our universe
is the simplest program that computes it. In 1997
Schmidhuber
pointed out [1] that the simplest such program
actually computes all possible universes with all
types of physical constants and laws, not just ours.
His essay also talks about universes
simulated within parent universes in nested fashion,
and about universal complexity-based measures on
possible universes.
Prior measure problem:
if every possible future exists, how can we predict anything?
Unfortunately, knowledge about the program that computes all
computable universes is
not yet sufficient to make good predictions about the future of
our own particular universe.
Some of our possible futures obviously are more likely than others.
For example, tomorrow the sun will probably shine in the Sahara desert.
To predict according to Bayes' rule, what we need is a
prior probability distribution or measure on the possible futures.
Which one is the right one?
It is not the uniform one:
If all futures were equally
likely then our world might as well dissolve
right now. But it does not.
Some think the famous anthropic principle (AP) might help us here.
But it does not, as will be seen below.
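The point about the uniform prior can be made concrete with a toy calculation (purely illustrative: binary futures of some fixed horizon are assumed, and the description lengths below are made up):

```python
# Toy illustration: under a uniform prior over all 2**n binary futures,
# any single regular continuation (say, "the sun keeps shining in the
# Sahara", encoded as the all-ones string) is astronomically unlikely.
n = 50  # horizon in (toy) time steps

uniform_prob = 1.0 / 2 ** n  # uniform-prior probability of one fixed future
assert uniform_prob < 1e-15  # negligible: uniform priors predict chaos

# A simplicity-biased prior, by contrast, can concentrate mass on
# regular futures. Here we (arbitrarily) weight a future by 2**-K,
# where K stands in for its description length in bits.
def toy_weight(description_length):
    return 2.0 ** -description_length

regular = toy_weight(10)        # short description: "all ones"
random_future = toy_weight(n)   # incompressible future: K close to n
assert regular > 1000 * random_future
```

The numbers are invented, but the asymmetry they illustrate is the whole problem: without a nonuniform prior, no regular future stands out.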
 
Zuse's thesis
vs
quantum physics?
Einstein (top)
always claimed that `God does not play dice,'
and back in 1969 Zuse (bottom)
published a book about
what's known today as
Zuse's thesis:
The universe is being deterministically computed on some sort of
giant but discrete computer - compare the
PDF of MIT's translation (1970)
of Zuse's book (1969).
Contrary to common belief, Heisenberg's uncertainty
principle is
no physical evidence against
Zuse's thesis - Nobel laureate
't Hooft
agrees.


Most recent stuff (2006):
Letter:
nothing random in physics?
(Nature 439, 392, 26 Jan 2006)
American Scientist,
July 2006:
Review
of Seth Lloyd's book
Nov 6-7, 2006:
Zuse Symposium, Berlin:
Is the universe a computer?
 

Anthropic Principle
does not help
The anthropic principle (AP)
essentially just says that the conditional probability of finding
oneself in a universe compatible with one's existence will always
remain 1. AP by itself
does not have any additional predictive power.
For example, it does not predict that tomorrow
the sun will shine in the Sahara,
or that gravity will work in quite the same way -
neither rain in the Sahara nor certain changes of gravity
would destroy us, and thus both would be allowed by AP.
To make
nontrivial predictions about the future we need more than AP:
Predictions for universes sampled from
computable probability distributions
To make better predictions,
can we postulate
any reasonable nontrivial constraints on the
prior probability distribution on our possible futures?
Yes! The distribution should at least be computable in the
limit.
That is, there should be a program that takes
any beginning of any universe history as an input and
produces an output converging on the probabilities of the
next possible events. If there were no such program, we could not
even formally specify our universe, let alone write
reasonable scientific papers about it.
It turns out that the very weak
assumption of a limit-computable
probability distribution
is enough to make quite nontrivial predictions about
our own future. This is the topic of Schmidhuber's work (2000) on
Algorithmic Theories of Everything [2].
Sections 2-5 led to
generalizations of Kolmogorov's and Solomonoff's
complexity and probability measures [3,4].
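The idea of a distribution "computable in the limit" can be sketched in a few lines (a toy Python sketch under strong simplifying assumptions: the "programs" are two hard-coded generators with invented description lengths, not an enumeration of all programs, so this is only the flavor of such a construction, not the one in [2,3]):

```python
from fractions import Fraction

# Toy "programs": (description_length_in_bits, bit-generator) pairs.
# Invented for illustration; a real construction dovetails over all programs.
def ones():
    while True:
        yield 1

def alternating():
    b = 0
    while True:
        yield b
        b ^= 1

PROGRAMS = [(2, ones), (3, alternating)]

def approx_next_bit_prob(prefix, steps):
    """Approximation that converges, as `steps` grows, on the probability
    that the next bit is 1, given the observed prefix: programs are run
    for a bounded number of steps and weighted by 2**-length."""
    num = den = Fraction(0)
    for length, gen in PROGRAMS:
        out = []
        g = gen()
        for _ in range(min(steps, len(prefix) + 1)):
            out.append(next(g))
        if len(out) <= len(prefix) or out[:len(prefix)] != list(prefix):
            continue  # too slow so far, or inconsistent with the data
        w = Fraction(1, 2 ** length)
        den += w
        num += w * out[len(prefix)]
    return num / den if den else None

# With enough steps the estimate stabilizes: after seeing 1,1,1 only the
# all-ones program survives, so the next bit is 1 with probability 1.
assert approx_next_bit_prob([1, 1, 1], steps=100) == 1
assert approx_next_bit_prob([0, 1], steps=100) == 0
```

The key property mirrored here is convergence: for any fixed history, the output of the approximating program settles on a limit as the step budget grows, even though no finite budget certifies it.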
 

The fastest way of computing
all computable universes
The work mentioned above focused on description size and
completely ignored computation time.
From a pragmatic point of view, however,
time seems essential.
For example, in the near future
kids will find it natural to play God
or "Great Programmer" by creating
on their computers their own universes inhabited by simulated
observers. But since
most computable universes are hard to compute, and since
resources will always be limited despite faster and faster
hardware, self-appointed Gods will always have to
focus on relatively
few universes that are relatively easy to compute.
So which is the best universe-computing algorithm for any
decent "Great Programmer" with resource constraints? It turns out there
is an optimally fast algorithm which computes each universe
as quickly as this universe's unknown (!) fastest program,
save for a constant factor that does not depend on universe
size. In fact, this algorithm is essentially identical
to the one on page 1 of Schmidhuber's above-mentioned
1997 paper [1].
Given any limited computer, the easily computable universes
will come out much faster than the others. Obviously, the first
observers to evolve in any universe will find themselves in
a quickly computable one.
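The flavor of such an enumeration can be conveyed by a dovetailing sketch (toy Python; the per-phase step budget 2**(k - length) mirrors the resource-allocation scheme described in [1,4], but the "programs" here are two hand-written generators with invented lengths and runtimes, and generators are advanced persistently rather than restarted each phase):

```python
# Dovetailing over "programs": in phase k, every program p of
# description length l(p) <= k is advanced by 2**(k - l(p)) steps.
# Short, fast programs thus produce output in early phases.

def make_programs():
    # (description_length, generator) pairs; each next() is one
    # "computation step" yielding an output symbol or None. Toy values.
    def fast(n):            # emits a symbol on every step
        for i in range(n):
            yield i
    def slow(n):            # emits a symbol only every 10th step
        for i in range(n):
            for _ in range(9):
                yield None
            yield i
    return [(1, fast(5)), (3, slow(5))]

def run_phases(max_phase):
    programs = make_programs()
    first_output_phase = {}
    for k in range(1, max_phase + 1):
        for idx, (length, gen) in enumerate(programs):
            if length > k:
                continue
            for _ in range(2 ** (k - length)):
                try:
                    sym = next(gen)
                except StopIteration:
                    break
                if sym is not None and idx not in first_output_phase:
                    first_output_phase[idx] = k
    return first_output_phase

phases = run_phases(8)
# The cheap program (index 0) produces output in an earlier phase
# than the expensive one (index 1).
assert phases[0] < phases[1]
```

Note how the total work per phase stays bounded while every sufficiently short program eventually gets an unbounded share of steps; that is what makes each universe come out essentially as fast as its own fastest program allows.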
 
Speed Prior
Sooner or later such observers will build their own nested virtual
worlds, extrapolate from there, and get the idea that their
"real" universe is a "simulation" as well, speculating
that their God also suffers from resource constraints. This
will naturally lead them to the Speed Prior [2, 4] which assigns
low probability to universes that are hard to compute.
Then they will make
nontraditional
Speed Prior-based predictions
about their future. Shouldn't we do so, too?
Compare [4, 5].
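A rough feel for such predictions (toy sketch: the weight 2**-(length + log2 time) is only one common approximation to the Speed Prior of [2,4], and the candidate "explanations" with their lengths and runtimes are invented for illustration):

```python
import math

# Candidate explanations of some observed data, each with a description
# length (bits) and a runtime (steps) needed to compute the data.
# All numbers are made up for illustration.
candidates = [
    {"name": "simple & fast",  "length": 10, "time": 2 ** 5},
    {"name": "simple & slow",  "length": 10, "time": 2 ** 40},
    {"name": "complex & fast", "length": 60, "time": 2 ** 5},
]

def speed_weight(c):
    # One standard approximation: weight 2**-(l(p) + log2 t(p)),
    # penalizing both long descriptions and long runtimes.
    return 2.0 ** -(c["length"] + math.log2(c["time"]))

best = max(candidates, key=speed_weight)
assert best["name"] == "simple & fast"

# Unlike a pure description-length prior, the Speed Prior also
# discounts explanations whose runtime is astronomical:
length_only = lambda c: 2.0 ** -c["length"]
assert length_only(candidates[0]) == length_only(candidates[1])
assert speed_weight(candidates[0]) > speed_weight(candidates[1])
```

This is where Speed Prior-based predictions diverge from traditional ones: histories that are short to describe but enormously expensive to compute lose probability mass to histories that are both short and cheap.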


