

Frequently Asked Questions

Over the past year the author has frequently fielded questions about the Gödel machine. Here is a list of answers to typical ones.

  1. Q: Does the exact business of formal proof search really make sense in the uncertain real world?

    A: Yes, it does. We just need to insert into $p(1)$ the standard axioms for representing uncertainty and for dealing with probabilistic settings and expected rewards etc. Compare items 1d and 1c in Section 3.2, and the definition of utility as an expected value in equation (1). Also note that the machine learning literature is full of human-generated proofs of properties of methods for dealing with stochastic environments.
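The point about expected rewards can be made concrete with a toy sketch (purely illustrative; the names and the dictionary-based setting are assumptions, not the paper's formalism): once probability axioms are part of $p(1)$, the proof searcher can reason about utility defined as an expectation over outcomes.

```python
def expected_utility(reward, outcome_probs):
    # E[u] = sum_x p(x) * reward(x): utility as an expected value,
    # in the spirit of the definition in equation (1)
    return sum(p * reward[x] for x, p in outcome_probs.items())

# A toy stochastic environment with three possible outcomes:
reward = {"win": 1.0, "draw": 0.5, "lose": 0.0}
probs = {"win": 0.6, "draw": 0.1, "lose": 0.3}
u = expected_utility(reward, probs)  # 0.6*1.0 + 0.1*0.5 + 0.3*0.0
```

Statements about such expectations are finite, provable formulas, which is all the proof search needs.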

  2. Q: The target theorem (2) seems to refer only to the very first self-change, which may completely rewrite the proof-search subroutine--doesn't this make the proof of Theorem 4.1 invalid? What prevents later self-changes from being destructive?

    A: This is fully taken care of. Please have a look once more at the proof of Theorem 4.1, and note that the first self-change will be executed only if it is provably useful (in the sense of the present utility function $u$) for all future self-changes (for which the present self-change is setting the stage). This is actually one of the main points of the whole self-referential set-up.

  3. Q (related to the previous item): The Gödel machine implements a meta-learning behavior: what about a meta-meta, and a meta-meta-meta level?

    A: The beautiful thing is that all meta-levels are automatically collapsed into one: any proof of a target theorem automatically proves that the corresponding self-modification is good for all further self-modifications affected by the present one, in recursive fashion. Recall Section 4.3.

  4. Q: The Gödel machine software can produce only computable mappings from input sequences to output sequences. What if the environment is non-computable?

    A: Many physicists and other scientists (exceptions: [58,39]) actually do assume the real world makes use of all the real numbers, most of which are incomputable. Nevertheless, theorems and proofs are just finite symbol strings, and all treatises of physics contain only computable axioms and theorems, even when some of the theorems can be interpreted as making statements about uncountably many objects, such as all the real numbers. (Note though that the Löwenheim-Skolem Theorem [28,51] implies that any first order theory with an uncountable model such as the real numbers also has a countable model.) Generally speaking, formal descriptions of non-computable objects do not at all present a fundamental problem--they may still allow for finding a strategy that provably maximizes utility. If so, a Gödel machine can exploit this. If not, then humans will not have a fundamental advantage over Gödel machines.

  5. Q: Isn't automated theorem-proving very hard? Current AI systems cannot prove nontrivial theorems without human intervention at crucial decision points.

    A: More and more important mathematical proofs (e.g., the four color theorem) heavily depend on automated proof search. And traditional theorem provers do not even make use of our novel notions of proof techniques and $O()$-optimal proof search. Of course, some proofs are indeed hard to find, but here humans and Gödel machines face the same fundamental limitations.

  6. Q: Don't the ``no free lunch theorems'' [57] say that it is impossible to construct universal problem solvers?

    A: No, they do not. They refer to the very special case of problems sampled from i.i.d. uniform distributions on finite problem spaces. See the discussion of no free lunch theorems in an earlier paper [47].
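The restriction to uniform distributions over finite problem spaces can be checked directly in a small sketch (a self-contained demonstration, not from the paper): averaged over *all* functions on a tiny domain, two different search strategies perform identically, which is exactly the special situation the no free lunch theorems describe.

```python
from itertools import product

DOMAIN = [0, 1, 2]  # a tiny finite problem space
# every function f: DOMAIN -> {0, 1}, i.e. the uniform i.i.d. setting of NFL
ALL_FUNCTIONS = list(product([0, 1], repeat=len(DOMAIN)))

def avg_best_after(query_order, k):
    """Average, over ALL functions, of the best value seen after k queries."""
    return sum(max(f[x] for x in query_order[:k])
               for f in ALL_FUNCTIONS) / len(ALL_FUNCTIONS)

# Two different non-revisiting search strategies have identical average
# performance -- but only because we average over every possible function:
for k in (1, 2, 3):
    assert avg_best_after([0, 1, 2], k) == avg_best_after([2, 0, 1], k)
```

Real problem classes are not sampled this way, so the NFL theorems do not rule out universal problem solvers.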

  7. Q: Can't the Gödel machine switch to a program switchprog that rewrites the utility function to a ``bogus'' utility function that makes unfounded promises of big rewards in the near future?

    A: No, it cannot. It should be obvious that rewrites of the utility function can happen only if the Gödel machine can first prove that the rewrite is useful according to the present utility function.
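A minimal toy sketch of this guard (names are illustrative, and a direct comparison of predicted histories stands in for the Gödel machine's formal proof search): whatever "bogus" utility the candidate switchprog would install plays no role in the decision, because the present utility function is the judge.

```python
def current_u(history):
    # the machine's PRESENT utility: here, total actually received reward
    return sum(history)

def accept_rewrite(u, predicted_with, predicted_without):
    # The switchprog is executed only if the rewrite is provably useful
    # according to the present utility u; a direct comparison of predicted
    # reward histories stands in for the actual formal proof.
    return u(predicted_with) > u(predicted_without)

# A rewrite that promises huge reward under its own bogus utility, but is
# predicted to lower actually received reward, is rejected by the present u:
assert not accept_rewrite(current_u, [0, 0], [1, 1])
```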

  8. Q: Aren't there problems with undecidability? For example, doesn't Rice's theorem [32] or Blum's speed-up theorem [3,4] pose problems?

    A: Not at all. Of course, the Gödel machine cannot profit from a hypothetical useful self-improvement whose utility is undecidable, and will therefore simply ignore it. Compare Section 2.4 on fundamental limitations of Gödel machines (and humans, for that matter). Nevertheless, unlike previous methods, a Gödel machine can in principle exploit at least the provably good improvements and speed-ups of any part of its initial software.


Juergen Schmidhuber 2005-01-03