J. H. Schmidhuber,
Learning unambiguous reduced sequence descriptions,
in J. E. Moody, S. J. Hanson, and R. P. Lippmann, editors, Advances in Neural Information Processing Systems 4,
San Mateo, CA: Morgan Kaufmann, 1992, pp. 291-298.
D. E. Rumelhart, G. E. Hinton, and R. J. Williams,
Learning internal representations by error propagation,
Parallel Distributed Processing, volume 1,
MIT Press, 1986, pp. 318-362.
J. H. Schmidhuber, M. C. Mozer, and D. Prelinger,
Continuous history compression,
in H. Hüning, S. Neuhauser, M. Raus, and W. Ritschel, editors,
Proc. of Intl. Workshop on Neural Networks, RWTH Aachen,
Augustinus, 1993, pp. 87-95.
S. Lindstädt,
Comparison of two unsupervised neural network models for redundancy reduction,
in M. C. Mozer, P. Smolensky, D. S. Touretzky, J. L. Elman, and A. S.
Weigend, editors, Proc. of the 1993 Connectionist Models Summer School,
Hillsdale, NJ: Erlbaum Associates, 1993, pp. 308-315.
Table 1:
Average compression ratios (and corresponding variances)
of various compression algorithms tested on short
German text files ( Bytes) from the unknown test set
from the Münchner Merkur.

Method                                 Av. compression ratio   Variance
Huffman Coding (UNIX: pack)            1.74                    0.0002
Lempel-Ziv Coding (UNIX: compress)     1.99                    0.0014
METHOD 3                               2.20                    0.0014
Improved Lempel-Ziv (UNIX: gzip -9)    2.29                    0.0033
METHOD 1                               2.70                    0.0158
METHOD 2                               2.72                    0.0234
Table 2:
Average compression ratios and variances
for the Frankenpost. The neural predictor was not retrained.