

EXPERIMENT 3 - stock market prediction (1).

Task. We predict the DAX (the German stock market index) using fundamental indicators. Following Rehkugler and Poddig (1990), the net sees the following indicators: (a) the German interest rate (``Umlaufsrendite''), (b) industrial production divided by money supply, (c) business sentiment (``IFO Geschäftsklimaindex''). Each input (scaled to the interval [-3.4,3.4]) is the difference between the current quarter's data and the corresponding quarter of the previous year. The goal is to predict the sign of next year's corresponding DAX difference.
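The preprocessing can be sketched as follows. The exact scaling scheme is not specified in the text; linear min-max rescaling into [-3.4, 3.4] is an assumption here, and the series values are purely illustrative:

```python
def yoy_differences(quarterly):
    """Year-over-year differences of a quarterly series: x[t] - x[t-4]."""
    return [quarterly[t] - quarterly[t - 4] for t in range(4, len(quarterly))]

def scale_to_range(values, lo=-3.4, hi=3.4):
    """Linearly rescale values into [lo, hi] (min-max scaling is an assumption)."""
    vmin, vmax = min(values), max(values)
    return [lo + (v - vmin) * (hi - lo) / (vmax - vmin) for v in values]

# Toy quarterly indicator series (two years of quarterly data).
series = [1.0, 2.0, 3.0, 4.0, 2.0, 4.0, 6.0, 8.0]
diffs = yoy_differences(series)   # [1.0, 2.0, 3.0, 4.0]
inputs = scale_to_range(diffs)    # endpoints map to -3.4 and 3.4
```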

Details. The training set consists of 24 data vectors from 1966 to 1972. A positive DAX tendency is mapped to target 0.8; otherwise the target is -0.8. The test set consists of 68 data vectors from 1973 to 1990. Flat minimum search (FMS) is compared against: (1) conventional backprop with 8 hidden units (BP8), (2) backprop with 4 hidden units (BP4) (4 hidden units are chosen because the pruning methods favor 4 hidden units, while 3 are not enough), (3) optimal brain surgeon (OBS; Hassibi & Stork, 1993) with a few improvements (see section 5.6), (4) weight decay (WD) according to Weigend et al. (1991). WD and OBS were chosen because they are well known and widely used.

Performance measure. Since wrong predictions lead to loss of money, performance is measured as follows. The sum of incorrectly predicted DAX changes is subtracted from the sum of correctly predicted DAX changes. The result is divided by the sum of absolute DAX changes.
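This measure can be sketched in code as follows. Expressing the result as a percentage is an assumption, made to match the magnitudes reported in table 2; the input data are illustrative:

```python
def dax_performance(predicted_signs, dax_changes):
    """(correctly predicted |change| - incorrectly predicted |change|)
    divided by the sum of absolute changes, in percent (percentage
    scaling is an assumption)."""
    correct = sum(abs(d) for s, d in zip(predicted_signs, dax_changes)
                  if s * d > 0)
    total = sum(abs(d) for d in dax_changes)
    incorrect = total - correct
    return 100.0 * (correct - incorrect) / total

# Example: predicted signs vs. hypothetical actual DAX changes.
perf = dax_performance([+1, -1, +1, +1], [30.0, -10.0, -40.0, 20.0])
# correct = 30 + 10 + 20 = 60, incorrect = 40, so perf = 20.0
```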

Results. See table 2. Our method outperforms the other methods.

MSE is irrelevant. Note that MSE is not a reasonable performance measure for this task. For instance, although FMS typically makes more correct classifications than WD, FMS's MSE often exceeds WD's. This is because the outputs of WD's wrong classifications tend to be close to 0, while FMS often prefers large weights yielding strong output activations: FMS's few false classifications tend to contribute a lot to MSE.
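A toy illustration of this effect (hypothetical outputs, not values from the experiment): on five test cases with target 0.8, an FMS-style net with strong activations gets 4 of 5 signs right but its one confident miss dominates MSE, while a WD-style net with timid outputs near 0 gets only 3 of 5 signs right yet ends up with the lower MSE:

```python
def mse(outputs, targets):
    """Mean squared error."""
    return sum((o - t) ** 2 for o, t in zip(outputs, targets)) / len(targets)

targets = [0.8] * 5                       # all five cases: positive DAX move
fms_out = [0.9, 0.9, 0.9, 0.9, -0.9]      # strong activations: 4 correct signs
wd_out = [0.3, 0.3, 0.05, -0.05, -0.05]   # timid activations: 3 correct signs

fms_mse = mse(fms_out, targets)  # 0.586: the single miss contributes 1.7^2
wd_mse = mse(wd_out, targets)    # 0.5015: misses near 0 contribute little
```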


Table 2: Comparisons of conventional backprop (BP4, BP8), optimal brain surgeon (OBS), weight decay (WD), and flat minimum search (FMS). All nets except BP4 start out with 8 hidden units. Each value is a mean of 7 trials. Columns ``MSE'' show mean squared error on the training and test sets. Column ``w'' shows the number of pruned weights, column ``u'' shows the number of pruned units; the final 3 columns (``max'', ``min'', ``mean'') list maximal, minimal and mean performance (see text) over 7 trials. Note that test MSE is insignificant for performance evaluations (this is due to targets 0.8/-0.8, as opposed to the ``real'' DAX targets). Our method outperforms all other methods.
Method   train MSE   test MSE   removed w   removed u   perf. max   perf. min   perf. mean
BP8        0.003       0.945        --          --         47.33       25.74       37.76
BP4        0.043       1.066        --          --         42.02       42.02       42.02
OBS        0.089       1.088        14           3         48.89       27.17       41.73
WD         0.096       1.102        22           4         44.47       36.47       43.49
FMS        0.040       1.162        24           4         47.74       39.70       43.62


Parameters:
Learning rate: 0.01.
Architecture: (3-8-1), except BP4 with (3-4-1).
Number of training examples: 20,000,000.
Method specific parameters:
FMS: $E_{tol} = 0.13$; $\Delta \lambda = 0.001$.
WD: parameters as for FMS, plus $w_0 = 0.2$.
OBS: $E_{tol} = 0.015$ (the same result was obtained with higher $E_{tol}$ values, e.g. 0.13).
See section 5.6 for parameters common to all experiments.


Juergen Schmidhuber 2003-02-13

