
TASK 2: ADDING INPUT POSITIONS

The task. We use the same perceptron-like network and the same input data as above. The goal is different and harder to achieve, however. The task is to find weights such that $y^p$ equals the sum of the positions of the on-bits in $x^p$, for all ${100 \choose 3} = 161,700$ possible $x^p$. Again, the task is made difficult by providing only very limited training data.
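To make the task concrete, here is a minimal Python sketch of the input/output mapping. The 1-based position convention is an assumption; the exact indexing of the 100 input bits is defined with the previous task and not restated here.

    from math import comb

    def target(x):
        # Sum of the positions of the on-bits in a 100-bit input vector.
        # Positions are assumed to be 1-based.
        return sum(i + 1 for i, bit in enumerate(x) if bit)

    # Exactly C(100, 3) = 161,700 distinct inputs contain three on-bits.
    assert comb(100, 3) == 161700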

The solution. The only solution to the problem is to make each $w_i$ equal to $i$. As in the previous task, there are short and fast programs for computing the solution.
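This claim is easy to check numerically. The sketch below assumes a perceptron-like net with a linear output unit, $y = \sum_i w_i x_i$, and the same 1-based indexing as above; it spot-checks the weight vector $w_i = i$ on random 3-on-bit inputs.

    import random

    def net_output(w, x):
        # Linear output unit: y = sum_i w_i * x_i.
        return sum(wi * xi for wi, xi in zip(w, x))

    w = [i + 1 for i in range(100)]        # w_i = i under 1-based indexing
    for _ in range(1000):                  # spot-check random 3-on-bit inputs
        on_bits = random.sample(range(100), 3)
        x = [int(i in on_bits) for i in range(100)]
        assert net_output(w, x) == sum(p + 1 for p in on_bits)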

The training data. The 3 training inputs $x^1$, $x^2$, and $x^3$ from the previous task are used; the target values are different, however. Given the on-bit positions of $x^1$, its target is 108; the target for $x^2$ is 126, and the target for $x^3$ is 221. Again, success is binary: only if a candidate fits all 3 training examples exactly is it evaluated on the test data.
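The all-or-nothing protocol can be written down directly. In the following sketch, train and test are placeholders for the 3 training pairs and the 161,697 remaining test pairs; the actual bit positions of $x^1$, $x^2$, $x^3$ appear in the previous task and are not restated here.

    def net_output(w, x):
        return sum(wi * xi for wi, xi in zip(w, x))

    def evaluate(w, train, test):
        # Binary success criterion: a candidate is scored on the test set
        # only if it fits every training example exactly.
        if any(net_output(w, x) != t for x, t in train):
            return None                    # rejected; test data never consulted
        return sum(net_output(w, x) == t for x, t in test) / len(test)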

Why conventional neural net algorithms fail to solve this problem: for the same reasons they fail on the previous one (see section 4.1). The training set is too small to obtain reasonable generalization on the test set.

RESULTS. Programs generating networks that fit the training data were found in 10 out of $5.5 \times 10^{7}$ runs, consuming a total search time of $8.14 \times 10^{8}$ time steps. 8 of the 10 successful runs led to perfect generalization on the 161,697 unseen test examples. Typical programs used Allocate, Increment, WriteWeight, and (conditional) jumps to generate correct solutions.

Solution example. The first weight vector fitting the training data was found after 6,902,963 runs. The corresponding program was a pretty wild one, but it led to perfect generalization on all the test data. Before halting, the program used 502 out of 8192 allocated time steps. Its time probability was $2^{-8}$; its space probability was $3.92 \times 10^{-16}$. What the program does is this (statements are marked with their addresses):

(0) Allocate 3 cells on the work tape. Initialize with zero. Set Min = Min - 3.

(2) Read the input field (see the list of instructions in section 3) at position 11 (which is 0), and write it into address -2.

(5) Write the contents of address -3 onto the weight pointed to by WeightPointer and increment WeightPointer. Halt if WeightPointer is out of range.

(7) If the contents of address -3 are less than or equal to the contents of address 9, goto address 11. Otherwise goto address 11.

(11) Increment the contents of address -3.

(13) Goto address -1.

(-1) If the contents of address 7 are less than or equal to the contents of address 3 (always true), goto address 5.

The instructions at addresses (2), (7), and (-1) are useless, but at least they are not catastrophic. Essentially, the program first allocates space for a variable (initially zero) on the work tape (recall that the program tape is ``read/execute'' only and cannot be used for variables). Then it executes a loop that increments the variable and writes its contents onto the network's weight vector, as sketched below. See (Schmidhuber, 1994a) for additional details, and for more elegant alternative solutions found by the system.
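For readability, here is a rough Python rendering of the loop's net effect, with the useless instructions at (2), (7), and (-1) stripped out. Whether the first weight receives 0 or 1 depends on indexing conventions not restated in this excerpt.

    def run_discovered_program(n_weights=100):
        var = 0                        # work-tape cell at address -3, initially zero
        wp = 0                         # WeightPointer
        weights = []                   # the network's weight vector
        while wp < n_weights:          # (5) halts once WeightPointer is out of range
            weights.append(var)        # (5) WriteWeight: copy the variable onto a weight,
            wp += 1                    #     then increment WeightPointer
            var += 1                   # (11) increment the variable; (13)/(-1) loop back to (5)
        return weights                 # yields 0, 1, 2, ..., n_weights - 1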

