
EXPERIMENT 5 - stock market prediction (3).

Task. This time, we predict the DAX using weekly technical (as opposed to fundamental) indicators. The data (DAX values and 35 technical indicators) was provided by Bayerische Vereinsbank.

Data analysis. To analyze the data, we computed (1) the pairwise correlation coefficients of the 35 technical indicators, and (2) the maximal pairwise correlation coefficients of all indicators and all linear combinations of two indicators. This analysis revealed that only 4 indicators are not highly correlated. For this reason, our nets see only the 8 most recent DAX changes and the following technical indicators: (a) the DAX value, (b) the change of the 24-week relative strength index (``RSI'') - the ratio of increasing tendency to decreasing tendency, (c) the ``5 week statistic'', (d) the ``MACD'' (smoothed difference of the exponentially weighted 6-week and 24-week DAX).
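A minimal sketch of such a correlation screening, assuming the 35 indicators are available as columns of a NumPy array (the function name, the 0.95 cutoff, and the keep-the-lower-indexed-indicator rule are illustrative choices, not taken from the original study):

import numpy as np
from itertools import combinations

def redundant_indicators(X, threshold=0.95):
    # X: array of shape (n_weeks, n_indicators) holding the technical indicators.
    # Returns indices of indicators that are highly correlated with an
    # earlier indicator and could therefore be dropped.
    corr = np.corrcoef(X, rowvar=False)          # pairwise correlation matrix
    redundant = set()
    for i, j in combinations(range(X.shape[1]), 2):
        if abs(corr[i, j]) > threshold:
            redundant.add(j)                     # keep the lower-indexed indicator
    return sorted(redundant)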

Input data. The final network input is obtained by scaling the values (a)-(d) and the 8 most recent DAX changes to lie in $[-2,2]$. The training set consists of 320 data points (July 1985 to August 1991). The targets are the actual DAX changes, scaled to lie in $[-1,1]$.
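A simple linear rescaling of this kind might look as follows; the helper name and the dummy data are illustrative, and only the target intervals $[-2,2]$ and $[-1,1]$ are taken from the text:

import numpy as np

def rescale(x, lo, hi):
    # Linearly map the values in x onto the interval [lo, hi].
    x = np.asarray(x, dtype=float)
    return lo + (hi - lo) * (x - x.min()) / (x.max() - x.min())

rng = np.random.default_rng(0)
raw_inputs = rng.normal(size=(320, 12))    # 4 indicators + 8 recent DAX changes (dummy data)
raw_dax_changes = rng.normal(size=320)     # dummy target series

inputs = np.column_stack([rescale(col, -2.0, 2.0) for col in raw_inputs.T])   # network inputs in [-2, 2]
targets = rescale(raw_dax_changes, -1.0, 1.0)                                 # targets in [-1, 1]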

Comparison. The following methods are applied to the training set: (1) Conventional backprop (BP), (2) optimal brain surgeon / optimal brain damage (OBS/OBD), (3) weight decay (WD) according to Weigend et al., (4) flat minimum search (FMS). The resulting nets are evaluated on a test set consisting of 100 data points (August 1991 to July 1993).
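For the weight decay baseline, the penalty of Weigend et al. is commonly stated as the weight elimination term $\lambda \sum_i (w_i/w_0)^2 / (1 + (w_i/w_0)^2)$; the sketch below assumes this standard form (the trade-off factor lam is illustrative, while $w_0 = 0.2$ is the value listed under the parameters below):

import numpy as np

def weight_elimination_penalty(weights, w0=0.2, lam=1e-4):
    # Weight elimination term of Weigend et al. (assumed standard form):
    # lam * sum_i (w_i/w0)^2 / (1 + (w_i/w0)^2).
    # Weights well below w0 are pushed towards zero; large weights incur a
    # nearly constant cost. lam is an illustrative trade-off factor.
    r = (np.asarray(weights) / w0) ** 2
    return lam * np.sum(r / (1.0 + r))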

Performance is measured as in section 5.3.

Results. Table 4 shows the results. Again, our method outperforms the other methods.


Table 4: Comparison of conventional backprop (BP), optimal brain surgeon (OBS), weight decay (WD), and flat minimum search (FMS). All nets start out with 9 hidden units. Each value is a mean of 10 trials. The ``MSE'' columns show mean squared error on the training and test sets. Column ``removed w'' shows the number of pruned weights, column ``removed u'' the number of pruned units, and the final 3 columns (``max'', ``min'', ``mean'') list maximal, minimal, and mean performance (see text) over 10 trials (note again that MSE is an irrelevant performance measure for this task). Flat minimum search outperforms all other methods.
Method   train MSE   test MSE   removed w   removed u   perf. max   perf. min   perf. mean
BP          0.13       1.08         --          --         28.45      -16.7        8.08
OBS         0.38       0.912        55           1         27.37       -6.08      10.70
WD          0.51       0.334       110           8         26.84       -6.88      12.97
FMS         0.46       0.348       103           7         29.72       18.09      21.26


Parameters:
Learning rate: 0.01.
Architecture: (12-9-1).
Training time: 10,000,000 examples.
Method specific parameters:
OBS/OBD: $E_{tol} = 0.34$.
FMS: $E_{tol} = 0.34$; $\Delta \lambda = 0.003$. If $E_{\mbox{\scriptsize average}} < E_{tol}$, then $\Delta \lambda$ is set to 0.03 (see the sketch below).
WD: as for FMS, but with $w_0 = 0.2$.
See section 5.6 for parameters common to all experiments.
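Read as pseudocode, the $\Delta \lambda$ schedule above amounts to a simple switch on the running average training error; a minimal sketch (how $\lambda$ itself is then adapted is described in the DETAILS / PARAMETERS section and not reproduced here):

def delta_lambda(e_average, e_tol=0.34):
    # Step size Delta-lambda for adapting the regularizer weight lambda:
    # 0.003 by default, 0.03 once the average training error E_average
    # has dropped below E_tol (values as listed above).
    return 0.03 if e_average < e_tol else 0.003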

