Next: EXPERIMENT 5 - stock
Up: EXPERIMENTAL RESULTS
Previous: EXPERIMENT 3 - stock
Task.
We again predict the DAX,
using the basic set-up of the experiment in Section 5.3,
with the following modifications:
- There are two additional inputs: (d) the dividend rate,
(c) foreign orders in the manufacturing industry.
- Monthly predictions are made. The net input
is the difference between
the current month's data and the previous month's data.
The goal is to predict the sign of next month's corresponding DAX difference.
- There are 228 training examples and 100 test examples.
- The target is the percentage of DAX change, scaled to the interval
[-1,1] (outliers are ignored).
- The performance of WD and FMS is also tested
on networks ``spoiled''
by conventional backprop (``WDR'' and ``FMSR''; the ``R'' stands
for retraining).
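The preprocessing described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the array `monthly`, the choice of column 0 for the DAX, and the 95th-percentile scaling with clipping for outlier handling are all assumptions.

```python
import numpy as np

# Hypothetical monthly data: one row per month, 5 columns = the 5 input
# series (DAX plus fundamentals such as the dividend rate).  Random data
# stands in for the real series here.
rng = np.random.default_rng(0)
monthly = rng.normal(size=(330, 5))

diffs = np.diff(monthly, axis=0)        # current month minus previous month
x = diffs[:-1]                          # net input: this month's differences
dax_change = diffs[1:, 0]               # next month's DAX difference (column 0 assumed)

# Target: percentage change scaled into [-1, 1]; clipping at the 95th
# percentile of |change| is an assumed way of ignoring outliers.
scale = np.percentile(np.abs(dax_change), 95)
y = np.clip(dax_change / scale, -1.0, 1.0)

# 228 training examples, 100 test examples, as in the text.
x_train, y_train = x[:228], y[:228]
x_test, y_test = x[228:], y[228:]
```

The sign of `y` is then the quantity the network is asked to predict for the following month.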
Results are shown in
table 3.
The average performance of our method
exceeds that of weight decay, OBS,
and conventional backprop.
Table 3 also shows
the superior performance of our approach
in retraining
``spoiled'' networks
(note that OBS is by nature a retraining method).
FMS led to the best improvements
in generalization performance.
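Since the task is to predict the sign of next month's DAX difference, a natural sanity check is the sign hit rate. Note this is only an illustrative stand-in: the ``performance'' figures in Table 3 are defined in the text and are not plain sign accuracy.

```python
import numpy as np

def sign_hit_rate(pred, target):
    """Fraction of months where prediction and target agree in sign."""
    pred, target = np.asarray(pred, float), np.asarray(target, float)
    return float(np.mean(np.sign(pred) == np.sign(target)))

# Example: 3 of the 4 predictions have the correct sign.
print(sign_hit_rate([0.2, -0.1, 0.4, -0.3], [0.1, -0.5, -0.2, -0.9]))  # -> 0.75
```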
Table 3:
Comparison of conventional backprop (BP),
optimal brain surgeon (OBS),
weight decay after spoiling the net with
BP (WDR), flat minimum search after spoiling the net with BP (FMSR),
weight decay (WD), flat minimum search (FMS).
All nets start out with 8 hidden units.
Each value is a mean of 10 trials.
Column ``MSE'' shows mean squared error.
Column ``w'' shows the number of pruned weights,
column ``u'' shows the number of pruned units,
the final 3 columns (``max'', ``min'', ``mean'')
list maximal, minimal, and mean performance (see text)
over 10 trials
(note again that MSE is an irrelevant performance measure for this task).
Flat minimum search outperforms all other methods.
Method | train MSE | test MSE |  w |  u |  max  |  min  | mean
-------+-----------+----------+----+----+-------+-------+------
BP     |   0.181   |  0.535   | -- | -- | 57.33 | 20.69 | 41.61
OBS    |   0.219   |  0.502   | 15 |  1 | 50.78 | 32.20 | 40.43
WDR    |   0.180   |  0.538   |  0 |  0 | 62.54 | 13.64 | 41.17
FMSR   |   0.180   |  0.542   |  0 |  0 | 64.07 | 24.58 | 41.57
WD     |   0.235   |  0.452   | 17 |  3 | 54.04 | 32.03 | 40.75
FMS    |   0.240   |  0.472   | 19 |  3 | 54.11 | 31.12 | 44.40
Parameters:
Learning rate: 0.01.
Architecture: (5-8-1).
Number of training examples: 20,000,000.
Method-specific parameters:
FMS: [values not recovered]; if [condition] then [parameter] is set to 0.001.
WD: like FMS, but [different value].
FMSR: like FMS, but [different value];
number of retraining examples: 5,000,000.
WDR: like FMSR, but [different value].
OBS: [value not recovered].
See section 5.6 for parameters common to all experiments.
Juergen Schmidhuber
2003-02-13