

EXPERIMENT 4: vowel recognition

Lococodes can be justified not only by reference to previous ideas about what constitutes a ``desirable'' code. Next we show that they also help to achieve superior generalization performance on a standard supervised learning benchmark problem. This section's focus on speech data further illustrates LOCOCODE's versatility: its applicability is not limited to visual data.

Task. We recognize vowels, using vowel data from Scott Fahlman's CMU benchmark collection (see also Robinson 1989). There are 11 vowels and 15 speakers. Each speaker spoke each vowel 6 times. Data from the first 8 speakers are used for training; the remaining data are used for testing. This yields 528 frames for training and 462 frames for testing. Each frame consists of 10 input components, obtained by low-pass filtering at 4.7 kHz and digitizing to 12 bits with a 10 kHz sampling rate. A twelfth-order linear predictive analysis was carried out on six 512-sample Hamming-windowed segments from the steady part of the vowel. The reflection coefficients were used to calculate 10 log area parameters, providing the 10-dimensional input space.
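
The following sketch illustrates the speaker-based split and the input scaling described above. It assumes the data have been loaded into a numpy array whose first two columns hold (hypothetical) speaker and vowel indices, followed by the 10 log area parameters; the actual benchmark file layout may differ.

import numpy as np

def split_by_speaker(data):
    # data: (990, 12) array; columns 0 and 1 are assumed to be speaker and
    # vowel indices, columns 2..11 the 10 log area parameters.
    speaker, vowel, features = data[:, 0], data[:, 1], data[:, 2:]
    train = speaker < 8                    # first 8 speakers: 8*11*6 = 528 frames
    test = ~train                          # remaining 7 speakers: 462 frames
    # Linearly rescale each input component to [-1, 1] (here using training
    # statistics; the text does not specify how the scaling was computed).
    lo, hi = features[train].min(0), features[train].max(0)
    scaled = 2.0 * (features - lo) / (hi - lo) - 1.0
    return (scaled[train], vowel[train].astype(int),
            scaled[test], vowel[test].astype(int))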

Coding. The training data are coded using an FMS AA. Architecture: (10-30-10). The input components are linearly scaled to [-1,1]. The AA is trained with $10^7$ pattern presentations; its weights are then frozen.
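
A minimal numpy sketch of the (10-30-10) AA follows. It trains with plain backprop on the reconstruction error only; the FMS regularizer and its pruning mechanism, which are essential to LOCOCODE proper, are omitted, and the scaled sigmoid hidden activation with linear output units is an assumption (this section specifies that activation only for the classifier).

import numpy as np

rng = np.random.default_rng(0)
act = lambda x: 2.0 / (1.0 + np.exp(-x)) - 1.0   # scaled sigmoid with range (-1, 1)
dact = lambda y: 0.5 * (1.0 - y * y)             # its derivative, expressed via the output y

W1 = rng.normal(scale=0.1, size=(10, 30)); b1 = np.zeros(30)   # encoder weights
W2 = rng.normal(scale=0.1, size=(30, 10)); b2 = np.zeros(10)   # decoder weights
lr = 0.02                                        # AA learning rate (see Parameters below)

def aa_step(x):                                  # x: one scaled input pattern, shape (10,)
    h = act(x @ W1 + b1)                         # 30 hidden units: the code
    y = h @ W2 + b2                              # linear reconstruction of the input
    e = y - x                                    # reconstruction error
    dh = (W2 @ e) * dact(h)                      # backpropagated hidden delta
    for p, g in ((W2, np.outer(h, e)), (b2, e), (W1, np.outer(x, dh)), (b1, dh)):
        p -= lr * g                              # in-place gradient step
    return 0.5 * float(e @ e)

# Example: 10^7 pattern presentations, then freeze W1, b1 for the coding stage.
# for _ in range(10**7): aa_step(X_train[rng.integers(len(X_train))])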

Classification. From now on, the vowel codes across all nonconstant HUs are used as inputs for a conventional supervised BP classifier, which is trained to recognize the vowels from the code. The classifier's architecture is (($30-c$)-11-11), where $c$ is the number of HUs pruned in the AA. Hidden and output units are sigmoid with activation function $\frac{2}{1+\exp(-x)}-1$ and receive an additional bias input. The classifier is trained with another $10^7$ pattern presentations.

Parameters. AA net: learning rate: 0.02, $E_{tol} = 0.015$, $\Delta \lambda =
0.2$, $\gamma = 2.0$. Backprop classifier: learning rate: 0.002.
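
A sketch of the second stage, under the same caveats: it reuses act(), dact(), W1 and b1 from the AA sketch above, reduces the detection of pruned (constant) units to a simple variance test, and assumes a +/-1 target coding for the 11 vowel classes.

def codes(X):                                    # X: (n, 10) scaled inputs
    H = act(X @ W1 + b1)                         # (n, 30) frozen hidden activations
    keep = H.std(axis=0) > 1e-3                  # drop (near-)constant, i.e. pruned, units
    return H[:, keep]                            # (n, 30-c) lococode inputs

Z_train = codes(X_train)                         # X_train from the data sketch above
rng2 = np.random.default_rng(1)
U1 = rng2.normal(scale=0.1, size=(Z_train.shape[1], 11)); d1 = np.zeros(11)
U2 = rng2.normal(scale=0.1, size=(11, 11));               d2 = np.zeros(11)

def classifier_step(z, label, lr=0.002):         # one pattern presentation
    h = act(z @ U1 + d1)                         # 11 hidden units, scaled sigmoid + bias
    o = act(h @ U2 + d2)                         # 11 output units, one per vowel
    t = -np.ones(11); t[label] = 1.0             # +/-1 target coding (an assumption)
    e = (o - t) * dact(o)                        # output delta
    dh = (U2 @ e) * dact(h)                      # hidden delta
    for p, g in ((U2, np.outer(h, e)), (d2, e), (U1, np.outer(z, dh)), (d1, dh)):
        p -= lr * g                              # in-place gradient step
    return int(np.argmax(o) == label)            # 1 if the vowel was recognized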

Overfitting. We confirm Robinson's results: the classifier tends to overfit when trained by simple BP -- during learning, the test error rate first decreases and then increases again.

Comparison. We compare: (1) Various neural nets (see Table 3). (2) Nearest neighbor: classifies an item as belonging to the class of the closest example in the training set (using Euclidean distance). (3) LDA: linear discriminant analysis. (4) Softmax: each observation is assigned to the class with the best fit value. (5) QDA: quadratic discriminant analysis (observations are classified as belonging to the class with the closest centroid, using the Mahalanobis distance based on the class-specific covariance matrix). (6) CART: classification and regression tree (coordinate splits and default input parameter values are used). (7) FDA/BRUTO: flexible discriminant analysis using additive models with adaptive selection of terms and spline smoothing parameters. BRUTO provides a set of basis functions for better class separation. (8) Softmax/BRUTO: best fit value for classification using BRUTO. (9) FDA/MARS: FDA using multivariate adaptive regression splines. MARS builds a basis expansion for better class separation. (10) Softmax/MARS: best fit value for classification using MARS. (11) LOCOCODE/Backprop: ``unsupervised'' codes generated by LOCOCODE with FMS, fed into a conventional, overfitting BP classifier.
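
For the simpler baselines (2), (3) and (5), off-the-shelf scikit-learn estimators give a quick way to reproduce the general setup; these are generic re-implementations for illustration, not the exact configurations behind the numbers reported in Table 3.

from sklearn.neighbors import KNeighborsClassifier
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)

def baseline_test_errors(X_train, y_train, X_test, y_test):
    baselines = {"nearest neighbor": KNeighborsClassifier(n_neighbors=1),
                 "LDA": LinearDiscriminantAnalysis(),
                 "QDA": QuadraticDiscriminantAnalysis()}
    errors = {}
    for name, clf in baselines.items():
        clf.fit(X_train, y_train)
        errors[name] = 1.0 - clf.score(X_test, y_test)   # test error rate
    return errors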


Table 3: Vowel recognition task: generalization performance of different methods. Surprisingly, FMS-generated lococodes fed into a conventional, overfitting backprop classifier led to excellent results. See text for details.
         Technique                        nr. hidden units   training error   test error
(1.1)    Single-layer perceptron          -                  -                0.67
(1.2.1)  Multi-layer perceptron           88                 -                0.49
(1.2.2)  Multi-layer perceptron           22                 -                0.55
(1.2.3)  Multi-layer perceptron           11                 -                0.56
(1.3.1)  Modified Kanerva Model           528                -                0.50
(1.3.2)  Modified Kanerva Model           88                 -                0.57
(1.4.1)  Radial Basis Function            528                -                0.47
(1.4.2)  Radial Basis Function            88                 -                0.52
(1.5.1)  Gaussian node network            528                -                0.45
(1.5.2)  Gaussian node network            88                 -                0.47
(1.5.3)  Gaussian node network            22                 -                0.46
(1.5.4)  Gaussian node network            11                 -                0.53
(1.6.1)  Square node network              88                 -                0.45
(1.6.2)  Square node network              22                 -                0.49
(1.6.3)  Square node network              11                 -                0.50
(2)      Nearest neighbor                 -                  -                0.44
(3)      LDA                              -                  0.32             0.56
(4)      Softmax                          -                  0.48             0.67
(5)      QDA                              -                  0.01             0.53
(6.1)    CART                             -                  0.05             0.56
(6.2)    CART (linear comb. splits)       -                  0.05             0.54
(7)      FDA / BRUTO                      -                  0.06             0.44
(8)      Softmax / BRUTO                  -                  0.11             0.50
(9.1)    FDA / MARS (degree 1)            -                  0.09             0.45
(9.2)    FDA / MARS (degree 2)            -                  0.02             0.42
(10.1)   Softmax / MARS (degree 1)        -                  0.14             0.48
(10.2)   Softmax / MARS (degree 2)        -                  0.10             0.50
(11)     LOCOCODE / Backprop              30/11              0.05             0.42


Results. See Table 3. FMS generates three different lococodes. Each is fed into 10 BP classifiers with different weight initializations: the table entry for ``LOCOCODE/Backprop'' represents the mean of 30 trials. The results for neural nets and nearest neighbor are taken from Robinson (1989); the other results (except for LOCOCODE's) are taken from Hastie et al. (1993). Our method led to excellent generalization results. The error rates after BP learning vary between 39% and 45%.

Backprop fed with the LOCOCODE code sometimes reaches an error rate as low as 38%, but due to overfitting the error rate then increases again (of course, test set performance may not influence the training procedure). Given that BP by itself is a rather naive approach, it seems quite surprising that excellent generalization performance can be obtained simply by feeding BP with non-goal-specific lococodes.

Typical feature detectors. The number of pruned HUs (those with constant activation) varies between 5 and 10; 2 to 5 HUs become binary, and 4 to 7 trinary. With all codes we observe that certain HUs apparently act as feature detectors for speaker identification. One HU's activation is near 1.0 for the words ``heed'' and ``hid'' (``i'' sounds). Another HU's activation takes high values for the words ``hod'', ``hoard'', ``hood'' and ``who'd'' (``o'' words) and low but nonzero values for ``hard'' and ``heard''. LOCOCODE thus supports feature detection.
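
The kind of post-hoc inspection described above can be sketched as follows: classify each hidden unit of the frozen AA as constant (pruned), binary, trinary or real-valued by counting the distinct activation levels it takes on the training set. The quantization tolerance is an illustrative assumption, not the criterion used in the experiments.

import numpy as np

def unit_types(H, tol=0.05):                     # H: (n_patterns, 30) AA hidden activations
    types = {}
    for j in range(H.shape[1]):
        levels = np.unique(np.round(H[:, j] / tol) * tol)   # quantized activation levels
        if len(levels) == 1:
            types[j] = "constant (pruned)"
        elif len(levels) == 2:
            types[j] = "binary"
        elif len(levels) == 3:
            types[j] = "trinary"
        else:
            types[j] = "real-valued"
    return types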

Why no sparse code? The real-valued input components cannot be described precisely by the activations of the few feature detectors generated by LOCOCODE. Additional real-valued HUs are necessary for representing the missing information.

Better results with additional information. Hastie et al. also obtained additional, even slightly better results with an FDA/MARS variant: down to a 39% average error rate. It should be mentioned, however, that their data were subject to goal-directed preprocessing with splines, such that there were many clearly defined classes. Furthermore, to determine the input dimension, Hastie et al. used a special kind of generalized cross-validation error, where one constant was obtained by unspecified ``simulation studies''. Hastie and Tibshirani (1996) also obtained an average error rate of 38% with discriminant adaptive nearest neighbor classification. About the same error rate was obtained by Flake (1998) with RBF networks and hybrid architectures. Recent experiments (mostly conducted while this paper was under review) showed that even better results can be obtained by using additional context information to improve classification performance, e.g., Turney (1993), Herrmann (1997), and Tenenbaum and Freeman (1997); for an overview see Schraudolph (1998). It will be interesting to combine these methods with LOCOCODE.

Conclusion. Although we made no attempt at preventing classifier overfitting, we achieved excellent results. From this we conclude that the lococodes fed into the classifier already conveyed the ``essential'', almost noise-free information necessary for excellent classification. We are led to believe that LOCOCODE is a promising method for data preprocessing.

