

Near-Bias-Optimal Nonincremental Universal Search

The following straightforward method (sometimes referred to as Levin Search or LSEARCH) is near-bias-optimal. For simplicity, we notationally suppress conditional dependencies on the current problem. Compare Levin [30,32] (who also attributes similar ideas to Adleman), as well as [69], [65], [33], [20].

Method 2.1 (LSEARCH)   Set current time limit $T=1$. WHILE problem not solved DO:
Test all programs $q$ such that $t(q)$, the maximal time spent on creating, running, and testing $q$, satisfies $t(q) < P(q)~T$.

Set $T := 2 T.$
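
As an illustration only, here is a minimal Python sketch of Method 2.1. It assumes (beyond the text above) a list candidates of (program, P) pairs sorted by decreasing prior probability P, and a user-supplied function run(program, steps) that creates, runs, and tests program for at most steps elementary steps, returning None unless the problem was solved; resuming partially executed programs across phases, rather than restarting them, is omitted for brevity.

def lsearch(candidates, run):
    # Sketch of Method 2.1 (LSEARCH).
    # candidates: list of (program, P) pairs, sorted by decreasing P.
    # run(program, steps): returns a solution, or None on failure/timeout.
    T = 1
    while True:                       # WHILE problem not solved DO
        for program, P in candidates:
            budget = int(P * T)       # time allowance: t(q) < P(q) * T
            if budget < 1:
                break                 # remaining programs get no time in this phase
            result = run(program, budget)
            if result is not None:
                return program, result
        T *= 2                        # set T := 2T and repeat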

Asymptotic optimality. Clearly, LSEARCH has the optimal order of computational complexity: given some problem class, if some unknown optimal program $p$ requires $f(k)$ steps to solve a problem instance of size $k$, then LSEARCH will need at most $O(f(k)/P(p)) = O(f(k))$ steps; the constant factor $1/P(p)$ may be huge but does not depend on $k$.
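
To spell out where the bound comes from, using only the quantities defined above: within a single phase with time limit $T$, the total work is at most

$\sum_q t(q) \;\le\; \sum_q P(q)\,T \;\le\; T,$

since the $P(q)$ sum to at most 1; and $p$ is first granted its full $f(k)$ steps in the earliest phase whose limit satisfies $P(p)\,T > f(k)$, that is, $T > f(k)/P(p)$. Hence the cost of the decisive phase, and (see the next paragraph) of the whole search, is $O(f(k)/P(p))$.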

The near-bias-optimality of LSEARCH is hardly affected by the fact that for each value of $T$ we repeat certain computations performed for the previous value. Roughly half the total search time is still spent on $T$'s maximal value (ignoring hardware-specific overhead for parallelization and nonessential speed-ups due to halting programs, if any). Note also that the time for testing is properly taken into account here: any result whose validity is hard to test is automatically penalized.
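
The factor-of-two claim is just the geometric series over the doubling time limits: if the final phase uses limit $T_{max} = 2^n$, the total time over all phases is at most

$\sum_{i=0}^{n} 2^i \;=\; 2^{n+1} - 1 \;<\; 2\,T_{max},$

so the combined cost of all earlier phases is less than that of the final one, and restarting computations at each doubling at most doubles the total search time.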

Nonuniversal variants. LSEARCH provides inspiration for nonuniversal but very practical methods which are optimal with respect to a limited search space, while suffering only from very small slowdown factors. For example, designers of planning procedures often just face a binary choice between two options such as depth-first and breadth-first search. The latter is often preferable, but its greater demand for storage may eventually require moving data from on-chip memory to disk, which can slow down the search by a factor of 10,000 or more. A straightforward solution in the spirit of LSEARCH is to start with a 50% bias towards either technique and use both depth-first and breadth-first search in parallel; this causes a slowdown factor of at most 2 with respect to the better of the two options (ignoring a bit of overhead for parallelization). Such methods have presumably been used long before Levin's 1973 paper. [80] and [65] used rather general but nonuniversal variants of LSEARCH to solve machine learning toy problems unsolvable by traditional methods. Probabilistic alternatives based on probabilistically chosen maximal program runtimes in Speed-Prior style [54,58] also outperformed traditional methods on toy problems [52,53].
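
As an illustration of this special case (not taken from the original text), the following Python sketch interleaves one expansion of depth-first search with one of breadth-first search per loop iteration, as a sequential stand-in for running them in parallel. The search space is assumed to be given by user-supplied neighbors(state) and is_goal(state) functions, with hashable states; whichever strategy would have solved the problem alone after n expansions finishes here after at most roughly 2n expansions, mirroring the slowdown factor of 2 discussed above.

from collections import deque

def combined_search(start, neighbors, is_goal):
    # Interleave depth-first and breadth-first search in lock-step.
    stack, queue = [start], deque([start])     # DFS and BFS frontiers
    seen_dfs, seen_bfs = {start}, {start}      # separate visited sets per strategy
    while stack or queue:
        for frontier, seen, pop in (
            (stack, seen_dfs, list.pop),       # one depth-first expansion
            (queue, seen_bfs, deque.popleft),  # one breadth-first expansion
        ):
            if not frontier:
                continue
            state = pop(frontier)
            if is_goal(state):
                return state
            for successor in neighbors(state):
                if successor not in seen:
                    seen.add(successor)
                    frontier.append(successor)
    return None                                # search space exhausted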


