FAST: The Most Efficient Way of Computing Everything

Subsection 6.1 focused on the ``steps'' SIMPLE allocates to instructions of single string-generating algorithms. Note that each such step may require numerous ``micro-steps'' of computational overhead for organizing internal storage. For example, the quickly growing space requirements for storing all strings may force a dovetailing TM to frequently shift its writing and scanning heads across large sections of its internal tapes, consuming far more time than the instructions themselves.

To overcome potential slow-downs of this kind, and to optimize the
TM-specific ``constant factor,'' we will slightly modify an
optimal search algorithm
called *``Levin search''* [#!Levin:73!#,#!Levin:84!#,#!Adleman:79!#,#!LiVitanyi:97!#]
(see [#!Schmidhuber:97nn!#,#!Wiering:96levin!#,#!Schmidhuber:97bias!#]
for the first practical applications we are aware of).
Essentially, we will strip Levin search of its search aspects and apply it
to possibly infinite objects. This leads to the most efficient (up
to a constant factor depending on the TM) algorithm for computing all
computable bitstrings.

**Algorithm FAST:** For *i* = 1, 2, ... perform PHASE *i*:

PHASE *i*: Execute 2^{i - l(p)} instructions of all program prefixes *p* satisfying *l*(*p*) ≤ *i*, and sequentially write the outputs on adjacent sections of the output tape, separated by blanks.

Following Levin [#!Levin:73!#], within 2^{i+1} TM steps, each of order *O*(1) ``micro-steps'' (no excessive computational overhead due to storage organization etc.), **FAST** will generate all prefixes *x*_{n} satisfying *Kt*(*x*_{n}) ≤ *i*, where *x*_{n}'s Levin complexity *Kt*(*x*_{n}) is defined as min_{q} {*l*(*q*) + log *t*(*q*, *x*_{n})}, where program prefix *q* computes *x*_{n} within *t*(*q*, *x*_{n}) time steps and *l*(*q*) denotes *q*'s length.
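To make the phase scheduling concrete, here is a minimal Python sketch (an illustration, not the paper's construction). The `Machine` class is a hypothetical stand-in for a universal TM that executes one instruction of a binary program prefix per `step()` call; its toy semantics merely echo the prefix. The sketch assumes, as in Levin search, that each PHASE restarts every prefix from scratch, and it enumerates all binary strings of each length rather than only self-delimiting program prefixes.

```python
from itertools import product

class Machine:
    """Hypothetical stand-in for a universal TM running one binary prefix."""
    def __init__(self, prefix):
        self.prefix = prefix  # tuple of bits representing prefix p
        self.output = []      # bits written to p's output section
        self.t = 0            # instructions executed so far
    def step(self):
        # Toy semantics: just echo the prefix; a real universal TM would
        # execute the prefix's next instruction here.
        self.output.append(self.prefix[self.t % len(self.prefix)])
        self.t += 1

def fast(num_phases):
    """Yield the output sections written during PHASE 1 .. PHASE num_phases."""
    for i in range(1, num_phases + 1):            # PHASE i
        sections = []                             # adjacent output sections
        for l in range(1, i + 1):                 # all prefixes p with l(p) <= i
            for p in product((0, 1), repeat=l):   # NB: a real FAST runs
                m = Machine(p)                    # self-delimiting prefixes only
                for _ in range(2 ** (i - l)):     # 2^{i - l(p)} instructions of p
                    m.step()
                sections.append(m.output)         # blank-separated on a real tape
        yield sections

for i, sections in enumerate(fast(4), start=1):
    print(f"PHASE {i}: {len(sections)} sections, "
          f"{sum(len(s) for s in sections)} instructions")
```

Note how a prefix's instruction budget 2^{i - l(p)} doubles from one phase to the next, so each prefix *p* receives roughly a constant fraction 2^{-l(p)} of the total runtime -- the source of the constant-factor optimality discussed below.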

One difference between SIMPLE and **FAST** is that SIMPLE may
allocate steps to algorithms with a short description less frequently
than **FAST**. Suppose no finite algorithm computes *x* faster than
*p*^{k}, which needs at most *f*(*n*) instructions to compute *x*_{n}, for all *n*.
While SIMPLE needs
2^{k+1} *f*(*n*) steps to compute *x*_{n}, following
Levin [#!Levin:73!#] it can be shown that **FAST** requires at most
2^{K(p^k)+1} *f*(*n*) steps -- compare [#!LiVitanyi:97!#, p. 504 ff].
That is, SIMPLE and **FAST** share the same
order of time complexity (ignoring SIMPLE's ``micro-steps'' for storage
organization), but **FAST**'s constant factor tends to be better.
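To see where **FAST**'s constant factor comes from, the following rough accounting may help. It is a sketch, not the paper's proof: it suppresses small constant factors, assumes the executed prefixes are self-delimiting so that Kraft's inequality applies, and assumes a shortest prefix *q* equivalent to *p*^{k} runs without simulation slowdown; the sharper constant 2^{K(p^k)+1} requires the more careful argument of [#!LiVitanyi:97!#].

```latex
% Rough runtime accounting for FAST (constants suppressed).
\begin{align*}
\text{cost of PHASE } i &= \sum_{p:\,l(p)\le i} 2^{\,i-l(p)}
  \;\le\; 2^{i} \sum_{p} 2^{-l(p)} \;\le\; 2^{i}
  && \text{(Kraft's inequality)} \\
\text{cost of PHASEs } 1,\dots,i &\le \sum_{j=1}^{i} 2^{j} \;<\; 2^{\,i+1}.
\end{align*}
% A shortest prefix $q$ equivalent to $p^k$ has $l(q) = K(p^k)$ and finishes
% $x_n$ once its budget $2^{\,i-l(q)}$ reaches $f(n)$, i.e., within PHASE
% $i^* = K(p^k) + \lceil \log_2 f(n) \rceil$. The total cost is therefore
% $< 2^{\,i^*+1} \le 4 \cdot 2^{K(p^k)} f(n) = O\!\bigl(2^{K(p^k)} f(n)\bigr)$.
```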

Note that an observer *A* evolving in one of the universes
computed by **FAST** might decide to build a machine that simulates all
possible computable universes using **FAST**, and so on, recursively.
Interestingly, this will not necessarily cause a dramatic exponential
slowdown: if the *n*-th discrete time step of *A*'s universe
(compare Example 1.1) is computable within *O*(*n*) time then
*A*'s simulations can be as fast as the ``original'' simulation,
save for a constant factor. In this sense a ``Great Programmer''
[#!Schmidhuber:97brauer!#] who writes a program that runs all possible
universes would not be superior to certain nested Great Programmers
inhabiting his universes.

To summarize: the effort required for computing all computable objects
simultaneously does not prevent **FAST** from computing each object
essentially as quickly as its fastest algorithm. No other dovetailer
can have a better order of computational complexity. This suggests a
notion of describability that is much more restricted yet perhaps much
more natural than the one used in the earlier sections on description
size-based complexity.
