
FINDING PREDICTABLE DISTRIBUTED REPRESENTATIONS

Two properties of a binary input vector are given by the truth values of the following expressions:

1) There are more `ones' on the `right' side of the input vector than on the `left' side.

2) The input vector consists of more `ones' than `zeros'.

During each learning cycle, a randomly chosen legal input vector was presented to $T_1$, and a second input vector, randomly chosen among those with the same feature combination as the first, was presented to $T_2$. $T_1$ and $T_2$ were constrained to have identical weights. Input vectors with equal numbers of ones and zeros, as well as input vectors with equal numbers of ones on both sides, were excluded; the remaining vectors, for which both properties are well defined, are the legal ones.
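The input distribution is easy to reproduce. The following sketch (Python with NumPy; the input width of 8 is an illustrative choice not fixed by the text, assumed even so the two sides have equal length) computes the two properties, rejects the excluded ties, and draws the pair of inputs used in one learning cycle:

    import numpy as np

    rng = np.random.default_rng(0)
    N = 8  # input width: illustrative, assumed even

    def features(x):
        """Truth values of the two properties of binary vector x."""
        left, right = x[:N // 2].sum(), x[N // 2:].sum()
        return (right > left,            # 1) more ones on the right side
                x.sum() > N - x.sum())   # 2) more ones than zeros

    def is_legal(x):
        """Exclude ties, so that both properties are well defined."""
        return x[:N // 2].sum() != x[N // 2:].sum() and 2 * x.sum() != N

    def sample_pair():
        """Draw the two inputs of one learning cycle: x2 shares x1's
        feature combination but is otherwise random."""
        while True:
            x1 = rng.integers(0, 2, N)
            if is_legal(x1):
                break
        while True:
            x2 = rng.integers(0, 2, N)
            if is_legal(x2) and features(x2) == features(x1):
                return x1, x2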

We minimized (4) with $D_l$ defined by an auto-encoder (equation (7)). Ten test runs with 15,000 pattern presentations were conducted. The system always developed a distributed, near-binary representation of the four possible feature combinations.
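Since the exact forms of (4) and (7) are not restated in this section, the following is only a sketch under assumptions: the objective is taken to be an agreement term $||T_1(x^1) - T_2(x^2)||^2$ plus a reconstruction term playing the role of $D_l$; the single sigmoid code layer, linear decoder, code size, weighting, and learning rate are all illustrative.

    import numpy as np

    rng = np.random.default_rng(1)
    N, CODE = 8, 2                        # illustrative sizes
    W = rng.normal(0.0, 0.1, (CODE, N))   # shared weights of T_1 and T_2
    V = rng.normal(0.0, 0.1, (N, CODE))   # decoder weights of the auto-encoder

    def sigmoid(a):
        return 1.0 / (1.0 + np.exp(-a))

    def step(x1, x2, lr=0.1, lam=1.0):
        """One gradient step on an assumed form of the objective:
        0.5*||y1 - y2||^2           (make the two codes agree)
        + lam * 0.5*||x1 - V y1||^2 (D_l: auto-encoder reconstruction)."""
        global W, V
        y1, y2 = sigmoid(W @ x1), sigmoid(W @ x2)
        r = V @ y1                          # linear reconstruction of x1
        g_y1 = (y1 - y2) + lam * (V.T @ (r - x1))
        g_y2 = -(y1 - y2)
        gW = (np.outer(g_y1 * y1 * (1 - y1), x1)
              + np.outer(g_y2 * y2 * (1 - y2), x2))  # W is shared by both nets
        gV = lam * np.outer(r - x1, y1)
        W -= lr * gW
        V -= lr * gV

    # training loop: for each cycle, call step(*sample_pair()),
    # with sample_pair() as defined in the previous sketch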

With $D_l$ defined by modified predictability minimization (equation (9)), and with simultaneous training of both predictors and classifiers, ten test runs with 10,000 pattern presentations were conducted. Again, the system always learned to extract both features.
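A corresponding sketch of this variant, again under assumptions: equation (9) is not restated here, so the linear predictors below (each estimating one code unit from the remaining units) and the sign-flipped shared error term implement generic predictability minimization rather than the paper's exact modified formulation. Predictors and classifier are updated in the same step, matching the simultaneous training described above.

    import numpy as np

    rng = np.random.default_rng(2)
    N, CODE = 8, 2
    W = rng.normal(0.0, 0.1, (CODE, N))      # shared classifier weights
    P = rng.normal(0.0, 0.1, (CODE, CODE))   # row i predicts y[i] from the others
    np.fill_diagonal(P, 0.0)                 # no unit may see itself

    def sigmoid(a):
        return 1.0 / (1.0 + np.exp(-a))

    def step(x1, x2, lr=0.1, lam=1.0):
        """Simultaneous update: the predictors descend on their squared
        prediction error, while the classifier descends on the agreement
        error and *ascends* on the same prediction error, driving the
        code units toward mutual unpredictability."""
        global W, P
        y1, y2 = sigmoid(W @ x1), sigmoid(W @ x2)
        e = P @ y1 - y1                      # per-unit prediction errors
        gP = np.outer(e, y1)                 # predictor gradient (minimize ||e||^2)
        np.fill_diagonal(gP, 0.0)
        P -= lr * gP
        g_y1 = (y1 - y2) - lam * (P.T @ e - e)   # agreement minus prediction error
        g_y2 = -(y1 - y2)
        gW = (np.outer(g_y1 * y1 * (1 - y1), x1)
              + np.outer(g_y2 * y2 * (1 - y2), x2))
        W -= lr * gW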

