

DETAILS / PARAMETERS

With the exception of the experiment in section 5.2, all units are sigmoid with range $[-1.0,1.0]$. Weights are constrained to $[-30,30]$ and initialized in $[-0.1,0.1]$. The latter ensures large first-order derivatives at the beginning of the learning phase. Weight decay (WD) is set up such that weights below $w_0 = 0.2$ are hardly punished. $E_{\mbox{{\scriptsize average}}}$ is the average error on the training set, approximated using exponential decay: $E_{\mbox{{\scriptsize average}}} \leftarrow \gamma E_{\mbox{{\scriptsize average}}} + (1-\gamma) E(net(w),D_0)$, where $\gamma = 0.85$.
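For concreteness, a minimal Python sketch of this running-average update (the initial value and the error sequence below are hypothetical, not values from the experiments):

    GAMMA = 0.85  # decay factor gamma from the text

    def update_average_error(e_average, e_current, gamma=GAMMA):
        # E_average <- gamma * E_average + (1 - gamma) * E(net(w), D_0)
        return gamma * e_average + (1.0 - gamma) * e_current

    # Hypothetical usage: update the running average once per training epoch.
    e_avg = 1.0                                # assumed initial value
    for e_now in [0.9, 0.7, 0.6, 0.55]:        # hypothetical training errors
        e_avg = update_average_error(e_avg, e_now)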

FMS details. To control $B(w,D_0)$'s influence during learning, its gradient is normalized and multiplied by the length of $E(net(w),D_0)$'s gradient (the same is done for weight decay, see below). $\lambda$ is computed as in (Weigend et al., 1991) and initialized with 0. Absolute values of first-order derivatives are replaced by $10^{-20}$ if they fall below this value. In principle, a weight $w_{ij}$ should be judged as pruned if $\delta w_{ij}$ (see equation (5) in section 4) exceeds the length of the weight range. However, computing $\delta w_{ij}$ requires the unknown scaling factor $\epsilon$ (see inequality (3) and equation (5) in section 4). Therefore, we judge a weight $w_{ij}$ as pruned if, with arbitrary $\epsilon$, $\delta w_{ij}$ is much larger than the corresponding $\delta$'s of the other weights (typically, there are clearly separable classes of weights with high and low $\delta$'s, which differ from each other by a factor ranging from $10^2$ to $10^5$).
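As an illustration, a minimal Python sketch of the gradient rescaling and of the clamping of small derivatives (the function names, the use of numpy arrays, and the treatment of exact zeros are assumptions for illustration, not part of the original implementation):

    import numpy as np

    def rescale_regularizer_gradient(grad_B, grad_E, tiny=1e-20):
        # Normalize B(w, D_0)'s gradient and rescale it to the length of
        # E(net(w), D_0)'s gradient, so that lambda alone controls the
        # relative influence of the two terms.
        norm_B = max(np.linalg.norm(grad_B), tiny)   # avoid division by zero
        return grad_B / norm_B * np.linalg.norm(grad_E)

    def clamp_first_order_derivatives(d, floor=1e-20):
        # Replace absolute first-order derivatives below `floor` by `floor`,
        # keeping each entry's sign (zeros are treated as positive, an assumption).
        return np.where(d < 0.0, -1.0, 1.0) * np.maximum(np.abs(d), floor)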

If all weights to and from a particular unit are very close to zero, the unit is lost: due to tiny derivatives, the weights will never again increase significantly. Sometimes it is necessary to bring lost units back into the game. For this purpose, every $n_{init}$ time steps (typically, $n_{init} =$ 500,000), all weights $w_{ij}$ with $0 \leq w_{ij}<0.01$ are randomly re-initialized in $[0.005,0.01]$, all weights $w_{ij}$ with $0 \geq w_{ij}>-0.01$ are randomly re-initialized in $[-0.01,-0.005]$, and $\lambda$ is reset to 0.
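A minimal sketch of this re-initialization step, assuming the weights are stored in a numpy array (the function name and the handling of exact zeros are assumptions):

    import numpy as np

    def revive_lost_weights(w, rng=None):
        # Re-initialize near-zero weights so that "lost" units can recover:
        # weights in [0, 0.01) go to uniform [0.005, 0.01],
        # weights in (-0.01, 0) go to uniform [-0.01, -0.005].
        # Exact zeros are sent to the positive range (an assumption).
        rng = rng or np.random.default_rng()
        w = w.copy()
        small_pos = (w >= 0.0) & (w < 0.01)
        small_neg = (w < 0.0) & (w > -0.01)
        w[small_pos] = rng.uniform(0.005, 0.01, small_pos.sum())
        w[small_neg] = rng.uniform(-0.01, -0.005, small_neg.sum())
        return w   # afterwards, lambda is reset to 0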

Weight decay details. We used Weigend et al.'s weight decay term $D(w,w_0) = \sum_{i,j} \frac{w_{ij}^2/w_0}{1 + w_{ij}^2/w_0}$. As with FMS, $D(w,w_0)$'s gradient was normalized and multiplied by the length of $E(net(w),D_0)$'s gradient, $\lambda$ was adjusted as with FMS, and lost units were revived as described above.
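For reference, a small Python sketch of this penalty and its gradient (the function names are illustrative; $w_0 = 0.2$ as above):

    import numpy as np

    W0 = 0.2  # threshold below which weights are hardly punished

    def weigend_decay(w, w0=W0):
        # D(w, w0) = sum_ij (w_ij^2 / w0) / (1 + w_ij^2 / w0)
        r = w ** 2 / w0
        return np.sum(r / (1.0 + r))

    def weigend_decay_grad(w, w0=W0):
        # dD/dw_ij = (2 w_ij / w0) / (1 + w_ij^2 / w0)^2
        r = w ** 2 / w0
        return (2.0 * w / w0) / (1.0 + r) ** 2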

Modifications of OBS. Typically, most weights exceed 1.0 after training. Therefore, higher-order terms of $\delta w$ in the Taylor expansion of the error function do not vanish, and OBS is not fully theoretically justified. Still, we used OBS to delete large weights, assuming that higher-order derivatives are small if second-order derivatives are. To obtain reasonable performance, we modified the original OBS procedure (notation following Hassibi and Stork, 1993):
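As background only, a minimal Python sketch of one step of the standard, unmodified OBS procedure of Hassibi and Stork (1993), assuming the inverse Hessian $H^{-1}$ is available as a numpy array (the function name is illustrative):

    import numpy as np

    def obs_step(w, H_inv):
        # Standard OBS: select the weight q with the smallest saliency
        # L_q = w_q^2 / (2 [H^-1]_qq), prune it, and adjust the remaining
        # weights by delta_w = -(w_q / [H^-1]_qq) * H^-1 e_q.
        diag = np.diag(H_inv)
        saliency = w ** 2 / (2.0 * diag)
        q = int(np.argmin(saliency))
        delta_w = -(w[q] / H_inv[q, q]) * H_inv[:, q]
        w_new = w + delta_w
        w_new[q] = 0.0   # the pruned weight is set exactly to zero
        return w_new, q, saliency[q]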

