Summary / Conclusion

We are given an arbitrary formalizable problem whose solution may require
interaction with a possibly reactive environment. For example, the goal
may be to maximize the expected future reward of a robot.
While executing its initial problem solving strategy, the Gödel machine
simultaneously runs a proof searcher which systematically and repeatedly
tests proof techniques. Proof techniques are programs that may read any
part of the Gödel machine's state, and write on a reserved part which may be reset
for each new proof technique test. In our example Gödel machine this writable
storage includes the variables *proof* and *switchprog*, where
*switchprog* holds a potentially unrestricted program whose execution
could completely rewrite any part of the Gödel machine's current software.
Normally the current *switchprog* is not executed. However, proof
techniques may invoke a special subroutine *check()* which tests
whether *proof* currently holds a proof showing that the utility of
stopping the systematic proof searcher and transferring control to the
current *switchprog* at some point in the near future exceeds the
utility of continuing the search until some alternative *switchprog*
is found. Such proofs are derivable from the proof searcher's axiom
scheme which formally describes the utility function to be maximized
(typically the expected future reward in the expected remaining lifetime
of the Gödel machine), the computational costs of hardware instructions (from which
all programs are composed), and the effects of hardware instructions on
the Gödel machine's state. The axiom scheme also formalizes known probabilistic
properties of the possibly reactive environment, as well as the *initial* Gödel machine state and software, which includes the axiom scheme itself
(no circular argument here). Thus proof techniques can reason about
expected costs and results of all programs including the proof searcher.
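The interaction between the proof searcher, proof techniques, *check()*, and *switchprog* described above can be sketched as follows. This is a heavily simplified toy, not the actual machine: all names, the dictionary-valued "proof", and the sequential loop are illustrative assumptions, since the real Gödel machine interleaves problem solving and proof search at the instruction level and works with formal proofs rather than numeric certificates.

```python
# Toy sketch of the Gödel machine's top-level control flow. Illustrative
# simplification: proofs here are dictionaries carrying a claimed utility,
# and strategies/switchprogs are plain Python callables.

def check(proof, switchprog, utility_of_continuing):
    # Accept the switch only if the current proof certifies that executing
    # switchprog now is worth more than continuing the proof search.
    return (proof is not None and switchprog is not None
            and proof["utility_of_switch"] > utility_of_continuing)

def godel_machine_sketch(initial_strategy, proof_techniques, utility_of_continuing):
    strategy = initial_strategy
    for technique in proof_techniques:           # systematic proof search
        # Each proof technique may read the machine state and write the
        # reserved storage holding *proof* and *switchprog*.
        proof, switchprog = technique(strategy)
        if check(proof, switchprog, utility_of_continuing):
            strategy = switchprog(strategy)      # execute the self-rewrite
            break                                # stop the proof searcher
    return strategy
```

A proof technique that cannot certify an improvement leaves the current strategy untouched; only a *switchprog* passing *check()* is ever executed.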

Once *check()* has identified a provably
good *switchprog*, the latter is executed
(some care has to be taken here because the proof
verification itself and the transfer of control to *switchprog*
also consume part of the typically limited lifetime).
The discovered *switchprog* represents a *globally*
optimal self-change in the following sense:
provably *none* of all the alternative *switchprog*s
and *proof*s (that could be found in the future
by continuing the proof search) is worth
waiting for.
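The cost accounting mentioned above can be made concrete with a minimal sketch of the inequality a proof must establish. The function name and the additive cost model are illustrative assumptions; the actual machine proves such statements formally from its axiom scheme rather than comparing numbers.

```python
def worth_switching(u_switch, u_continue_search, verification_cost, transfer_cost):
    # Toy version of the inequality behind the global-optimality argument:
    # switching pays only if its utility, net of the proof-verification and
    # control-transfer costs charged against the limited lifetime, exceeds
    # the utility of continuing the proof search. (Additive cost model is
    # an illustrative assumption.)
    return u_switch - (verification_cost + transfer_cost) > u_continue_search
```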

There are many ways of initializing the
proof searcher. Although identical proof techniques may
yield different proofs depending on the time
of their invocation (due to the continually
changing Gödel machine state), there are bias-optimal and
asymptotically optimal proof searcher initializations
based on variants of universal search [22]
and the *Optimal Ordered Problem Solver*
[38,40]
(Section 2.3).
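A bias-optimal initialization in the spirit of universal search [22] can be sketched as follows: shorter proof techniques receive exponentially more runtime, and budgets double from phase to phase. The interface (`programs` as pairs of a program and its description length, a `run(p, steps)` success test, a `max_phase` cutoff) is an illustrative assumption, not the paper's formal construction.

```python
import itertools

def universal_search(programs, run, max_phase=10):
    # Levin-style universal search sketch: in phase i, run each program p
    # for 2**(i - len(p)) steps, so time is allocated in proportion to
    # 2**(-len(p)). Return the first program that succeeds within its
    # budget, or None if no program succeeds by max_phase.
    for phase in itertools.count(1):
        if phase > max_phase:
            return None
        for p, length in programs:
            budget = 2 ** (phase - length)
            if budget >= 1 and run(p, budget):
                return p
```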

Our Gödel machine will never become worse than its initial problem-solving
strategy, and has a chance of getting much better,
provided the nature of the given problem allows for a provably useful
rewrite of the initial strategy, or of the proof searcher.
The Gödel machine may be viewed as a self-referential universal problem solver
that can formally talk about itself, in particular about its
performance. It may "step outside of itself" [13] by
rewriting its axioms and utility function or augmenting its hardware,
provided this is provably useful.
Its conceptual simplicity notwithstanding,
the Gödel machine explicitly addresses the
*"Grand Problem of Artificial Intelligence"*
[43]
by optimally dealing with limited resources in general environments,
and with the possibly huge (but constant) slowdowns buried
by previous approaches
[16,15]
in the widely used but often misleading
O()-notation of theoretical computer science.

No complex proofs were necessary in this paper, as the Gödel machine is designed precisely such that we may leave all complex proofs to the Gödel machine itself. Its main limitation is that it cannot profit from self-improvements whose usefulness it cannot prove in time.
