In the past year the author has frequently fielded questions about the Gödel machine. Here is a list of answers to typical ones.
A: Yes, it does. We just need to add the standard axioms for representing uncertainty and for dealing with probabilistic settings, expected rewards, etc. Compare items 1d and 1c in Section 3.2, and the definition of utility as an expected value in equation (1). Also note that the machine learning literature is full of human-generated proofs of properties of methods for dealing with stochastic environments.
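The expected-value notion of utility can be made concrete with a minimal Monte Carlo sketch. All class and function names below are hypothetical illustrations, not part of the formal set-up:

```python
import random

class BernoulliBandit:
    """Toy stochastic environment: action a in {0, 1} yields reward 1
    with probability p[a], else reward 0."""
    def __init__(self, p, rng):
        self.p, self.rng = p, rng
    def reset(self):
        return 0  # single dummy state
    def step(self, action):
        reward = 1.0 if self.rng.random() < self.p[action] else 0.0
        return 0, reward

def expected_utility(policy, make_env, episodes, horizon):
    """Monte Carlo estimate of E[sum of future rewards], i.e. the
    expected-value notion of utility."""
    total = 0.0
    for _ in range(episodes):
        env = make_env()
        state = env.reset()
        for _ in range(horizon):
            state, r = env.step(policy(state))
            total += r
    return total / episodes

rng = random.Random(0)
u = expected_utility(policy=lambda s: 1,
                     make_env=lambda: BernoulliBandit([0.2, 0.7], rng),
                     episodes=20000, horizon=10)
# Always choosing action 1 gives expected cumulative reward 0.7 * 10 = 7.0,
# and the estimate converges to that value.
```

The point is only that "utility" here is an ordinary expectation over stochastic environmental histories, so the standard probabilistic machinery applies.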
A: This is fully taken care of. Please have another look at the proof of Theorem 4.1, and note that the first self-change will be executed only if it is provably useful (in the sense of the present utility function) for all future self-changes (for which the present self-change is setting the stage). This is actually one of the main points of the whole self-referential set-up.
A: The beautiful thing is that all meta-levels are automatically collapsed into one: any proof of a target theorem automatically proves that the corresponding self-modification is good for all further self-modifications affected by the present one, in recursive fashion. Recall Section 4.3.
A: Many physicists and other scientists (exceptions: [58,39]) actually do assume that the real world makes use of all the real numbers, most of which are incomputable. Nevertheless, theorems and proofs are just finite symbol strings, and all treatises of physics contain only computable axioms and theorems, even when some of the theorems can be interpreted as making statements about uncountably many objects, such as all the real numbers. (Note though that the Löwenheim-Skolem Theorem [28,51] implies that any first order theory with an uncountable model such as the real numbers also has a countable model.) Generally speaking, formal descriptions of non-computable objects do not present a fundamental problem at all: they may still allow for finding a strategy that provably maximizes utility. If so, a Gödel machine can exploit this. If not, then humans will not have a fundamental advantage over Gödel machines.
A: More and more important mathematical proofs (the four color theorem, etc.) heavily depend on automated proof search. And traditional theorem provers do not even make use of our novel notions of proof techniques and O()-optimal proof search. Of course, some proofs are indeed hard to find, but here humans and Gödel machines face the same fundamental limitations.
A: No, they do not. They refer to the very special case of problems sampled from i.i.d. uniform distributions on finite problem spaces. See the discussion of no free lunch theorems in an earlier paper [47].
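The special character of that setting is easy to verify directly: over the uniform distribution on all functions from a finite search space to a finite value space, any two fixed evaluation orders perform identically on average. A minimal sketch (non-adaptive searchers for brevity, though the no free lunch theorems also cover adaptive ones):

```python
from itertools import product

X = [0, 1, 2]   # finite search space
Y = [0, 1]      # finite value space

def best_after_k(order, f, k):
    """Best objective value found after evaluating the first k points
    of a fixed search order."""
    return max(f[x] for x in order[:k])

def average_performance(order, k):
    """Average over ALL functions f: X -> Y, i.e. the i.i.d. uniform
    problem distribution assumed by the no free lunch theorems."""
    funcs = [dict(zip(X, vals)) for vals in product(Y, repeat=len(X))]
    return sum(best_after_k(order, f, k) for f in funcs) / len(funcs)

# Two different search strategies...
a = average_performance([0, 1, 2], k=2)
b = average_performance([2, 0, 1], k=2)
# ...achieve exactly the same average performance (here 0.75).
```

Real problem distributions are not uniform over all possible functions, which is why the averaging argument does not constrain the Gödel machine's setting.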
A: No, it cannot. It should be obvious that rewrites of the utility function can happen only if the Gödel machine can first prove that the rewrite is useful according to the present utility function.
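A minimal sketch of this guard, with all names hypothetical (the real machine searches for a proof of a target theorem; here the proof searcher is abstracted as a callable returning a proof object or None):

```python
class Machine:
    """Hypothetical stand-in carrying only the current utility function."""
    def __init__(self, utility):
        self.utility = utility

def maybe_rewrite_utility(machine, candidate_utility, proof_searcher):
    """Execute a rewrite of the utility function only if a proof is found
    that the rewrite is useful AS JUDGED BY THE PRESENT utility function."""
    claim = ("switching to candidate_utility yields higher expected utility, "
             "measured by machine.utility, than not switching")
    proof = proof_searcher(claim)
    if proof is None:
        return False                        # no proof: leave utility untouched
    machine.utility = candidate_utility     # provably useful: execute rewrite
    return True

# A searcher that never finds a proof cannot trigger the rewrite:
m = Machine(utility="u_current")
changed = maybe_rewrite_utility(m, "u_candidate", lambda claim: None)
# changed is False and m.utility is still "u_current".
```

The asymmetry is the point: the candidate utility function is judged by the current one, never by itself, so no rewrite can "smuggle in" its own justification.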
A: Not at all. Of course, the Gödel machine cannot profit from a hypothetical useful self-improvement whose utility is undecidable, and will therefore simply ignore it. Compare Section 2.4 on fundamental limitations of Gödel machines (and humans, for that matter). Nevertheless, unlike previous methods, a Gödel machine can in principle exploit at least the provably good improvements and speed-ups of any part of its initial software.