
Bibliography

1
E. B. Baum and I. Durdanovic.
Toward a model of mind as an economy of agents.
Machine Learning, 35(2):155-185, 1999.

2
G. J. Chaitin.
On the length of programs for computing finite binary sequences: statistical considerations.
Journal of the ACM, 16:145-159, 1969.

3
A. C. Clarke.
The Ghost from the Grand Banks.
Orbit Books, 1991.

4
D. A. Cohn.
Neural network exploration using optimal experiment design.
In J. Cowan, G. Tesauro, and J. Alspector, editors, Advances in Neural Information Processing Systems 6, pages 679-686. Morgan Kaufmann, 1994.

5
V. V. Fedorov.
Theory of Optimal Experiments.
Academic Press, 1972.

6
S. Geman, E. Bienenstock, and R. Doursat.
Neural networks and the bias/variance dilemma.
Neural Computation, 4:1-58, 1992.

7
D. Hillis.
Co-evolving parasites improve simulated evolution as an optimization procedure.
In C. G. Langton, C. Taylor, J. D. Farmer, and S. Rasmussen, editors, Artificial Life II, pages 313-324. Addison Wesley, 1992.

8
S. Hochreiter and J. Schmidhuber.
Flat minima.
Neural Computation, 9(1):1-42, 1997.

9
J. H. Holland.
Properties of the bucket brigade.
In Proceedings of an International Conference on Genetic Algorithms. Lawrence Erlbaum, Hillsdale, NJ, 1985.

10
J. Hwang, J. Choi, S. Oh, and R. J. Marks II.
Query-based learning applied to partially trained multilayer perceptrons.
IEEE Transactions on Neural Networks, 2(1):131-136, 1991.

11
A. N. Kolmogorov.
Three approaches to the quantitative definition of information.
Problems of Information Transmission, 1:1-11, 1965.

12
I. Kwee, M. Hutter, and J. Schmidhuber.
Market-based reinforcement learning in partially observable worlds.
In Proceedings of the International Conference on Artificial Neural Networks (ICANN-2001), 2001. In press. Also Technical Report IDSIA-10-01, cs.AI/0105025.

13
D. Lenat.
Theory formation by heuristic search.
Artificial Intelligence, 21:31-59, 1983.

14
M. Li and P. M. B. Vitányi.
An Introduction to Kolmogorov Complexity and its Applications (2nd edition).
Springer, 1997.

15
L. J. Lin.
Reinforcement Learning for Robots Using Neural Networks.
PhD thesis, Carnegie Mellon University, Pittsburgh, January 1993.

16
D. J. C. MacKay.
Information-based objective functions for active data selection.
Neural Computation, 4(4):590-604, 1992.

17
F. Nake.
Ästhetik als Informationsverarbeitung.
Springer, 1974.

18
M. Plutowski, G. Cottrell, and H. White.
Learning Mackey-Glass from 25 examples, plus or minus 2.
In J. Cowan, G. Tesauro, and J. Alspector, editors, Advances in Neural Information Processing Systems 6, pages 1135-1142. Morgan Kaufmann, 1994.

19
J. B. Pollack and A. D. Blair.
Why did TD-Gammon work?
In M. C. Mozer, M. I. Jordan, and S. Petsche, editors, Advances in Neural Information Processing Systems 9, pages 10-16. MIT Press, 1997.

20
A. L. Samuel.
Some studies in machine learning using the game of checkers.
IBM Journal of Research and Development, 3:210-229, 1959.

21
J. Schmidhuber.
Evolutionary principles in self-referential learning.
Diploma thesis, Institut für Informatik, Technische Universität München, 1987.

22
J. Schmidhuber.
A local learning algorithm for dynamic feedforward and recurrent networks.
Connection Science, 1(4):403-412, 1989.

23
J. Schmidhuber.
Curious model-building control systems.
In Proceedings of the International Joint Conference on Neural Networks, Singapore, volume 2, pages 1458-1463. IEEE press, 1991.

24
J. Schmidhuber.
A possibility for implementing curiosity and boredom in model-building neural controllers.
In J. A. Meyer and S. W. Wilson, editors, Proc. of the International Conference on Simulation of Adaptive Behavior: From Animals to Animats, pages 222-227. MIT Press/Bradford Books, 1991.

25
J. Schmidhuber.
Learning factorial codes by predictability minimization.
Neural Computation, 4(6):863-879, 1992.

26
J. Schmidhuber.
Discovering neural nets with low Kolmogorov complexity and high generalization capability.
Neural Networks, 10(5):857-873, 1997.

27
J. Schmidhuber.
Low-complexity art.
Leonardo, Journal of the International Society for the Arts, Sciences, and Technology, 30(2):97-103, 1997.

28
J. Schmidhuber.
What's interesting?
Technical Report IDSIA-35-97, IDSIA, 1997.
ftp://ftp.idsia.ch/pub/juergen/interest.ps.gz; extended abstract in Proc. Snowbird'98, Utah, 1998.

29
J. Schmidhuber.
Facial beauty and fractal geometry, 1998.
Published in the Cogprint Archive: http://cogprints.soton.ac.uk.

30
J. Schmidhuber.
Artificial curiosity based on discovering novel algorithmic predictability through coevolution.
In P. Angeline, Z. Michalewicz, M. Schoenauer, X. Yao, and Z. Zalzala, editors, Congress on Evolutionary Computation, pages 1612-1618. IEEE Press, 1999.

31
J. Schmidhuber.
A general method for incremental self-improvement and multi-agent learning.
In X. Yao, editor, Evolutionary Computation: Theory and Applications, pages 81-123. World Scientific, 1999.

32
J. Schmidhuber, M. Eldracher, and B. Foltin.
Semilinear predictability minimization produces well-known feature detectors.
Neural Computation, 8(4):773-786, 1996.

33
J. Schmidhuber and D. Prelinger.
Discovering predictable classifications.
Neural Computation, 5(4):625-635, 1993.

34
J. Schmidhuber, J. Zhao, and N. Schraudolph.
Reinforcement learning with self-modifying policies.
In S. Thrun and L. Pratt, editors, Learning to learn, pages 293-309. Kluwer, 1997.

35
J. Schmidhuber, J. Zhao, and M. Wiering.
Shifting inductive bias with success-story algorithm, adaptive Levin search, and incremental self-improvement.
Machine Learning, 28:105-130, 1997.

36
N. Schraudolph and T. J. Sejnowski.
Unsupervised discrimination of clustered data via optimization of binary information gain.
In S. J. Hanson, J. D. Cowan, and C. L. Giles, editors, Advances in Neural Information Processing Systems, volume 5, pages 499-506. Morgan Kaufmann, San Mateo, 1993.

37
N. N. Schraudolph, M. Eldracher, and J. Schmidhuber.
Processing images by semi-linear predictability minimization.
Network: Computation in Neural Systems, 10(2):133-169, 1999.

38
C. E. Shannon.
A mathematical theory of communication (parts I and II).
Bell System Technical Journal, 27:379-423 and 623-656, 1948.

39
R. J. Solomonoff.
A formal theory of inductive inference. Part I.
Information and Control, 7:1-22, 1964.

40
J. Storck, S. Hochreiter, and J. Schmidhuber.
Reinforcement driven information acquisition in non-deterministic environments.
In Proceedings of the International Conference on Artificial Neural Networks, Paris, volume 2, pages 159-164. EC2 & Cie, 1995.

41
G. Tesauro.
TD-Gammon, a self-teaching backgammon program, achieves master-level play.
Neural Computation, 6(2):215-219, 1994.

42
G. Weiss.
Hierarchical chunking in classifier systems.
In Proceedings of the 12th National Conference on Artificial Intelligence, volume 2, pages 1335-1340. AAAI Press/The MIT Press, 1994.

43
G. Weiss and S. Sen, editors.
Adaption and Learning in Multi-Agent Systems.
LNAI 1042, Springer, 1996.

44
S. W. Wilson.
ZCS: A zeroth level classifier system.
Evolutionary Computation, 2:1-18, 1994.

45
S. W. Wilson.
Classifier fitness based on accuracy.
Evolutionary Computation, 3(2):149-175, 1995.

46
D. H. Wolpert, K. Tumer, and J. Frank.
Using collective intelligence to route internet traffic.
In M. Kearns, S. A. Solla, and D. Cohn, editors, Advances in Neural Information Processing Systems 12. MIT Press, 1999.



Juergen Schmidhuber 2003-03-10

