
References

1
A.M.H.J. Aertsen, G.L. Gerstein, M.K. Habib, and G. Palm.
Dynamics of neuronal firing correlation: Modulation of ``effective connectivity''.
Journal of Neurophysiology, 61:900-917, 1989.

2
L. B. Almeida.
A learning rule for asynchronous perceptrons with feedback in a combinatorial environment.
In IEEE 1st International Conference on Neural Networks, San Diego, volume 2, pages 609-618, 1987.

3
J. Bachrach.
Learning to represent state, 1988.
Unpublished master's thesis, University of Massachusetts, Amherst.

4
H. B. Barlow.
Unsupervised learning.
Neural Computation, 1(3):295-311, 1989.

5
H. B. Barlow, T. P. Kaushal, and G. J. Mitchison.
Finding minimum entropy codes.
Neural Computation, 1(3):412-423, 1989.

6
A. G. Barto.
Connectionist approaches for control.
COINS Technical Report 89-89, University of Massachusetts, Amherst, MA 01003, 1989.

7
A. G. Barto and P. Anandan.
Pattern recognizing stochastic learning automata.
IEEE Transactions on Systems, Man, and Cybernetics, 15:360-375, 1985.

8
A. G. Barto and M. I. Jordan.
Gradient following without back propagation in layered networks.
In IEEE 1st International Conference on Neural Networks, San Diego, volume 2, pages 629-636, 1987.

9
A. G. Barto, R. S. Sutton, and C. W. Anderson.
Neuronlike adaptive elements that can solve difficult learning control problems.
IEEE Transactions on Systems, Man, and Cybernetics, SMC-13:834-846, 1983.

10
E. B. Baum and D. Haussler.
What size net gives valid generalization?
Neural Computation, 1(1):151-160, 1989.

11
S. Becker and G. E. Hinton.
Spatial coherence as an internal teacher for a neural network.
Technical Report CRG-TR-89-7, Department of Computer Science, University of Toronto, Ontario, 1989.

12
K. Bergner.
Diploma thesis, 1991.
Institut für Informatik, Technische Universität München.

13
G. L. Bilbro and D. E. Van den Bout.
Maximum entropy and learning theory.
Neural Computation, 4(6):839-852, 1992.

14
U. Bodenhausen and A. Waibel.
The Tempo 2 algorithm: Adjusting time-delays by supervised learning.
In R. P. Lippmann, J. E. Moody, and D. S. Touretzky, editors, Advances in Neural Information Processing Systems 3, pages 155-161. San Mateo, CA: Morgan Kaufmann, 1991.

15
H. Bourlard, N. Morgan, and C. Wooters.
Connectionist approaches to the use of Markov models for speech recognition.
In R. P. Lippmann, J. E. Moody, and D. S. Touretzky, editors, Advances in Neural Information Processing Systems 3, pages 213-219. San Mateo, CA: Morgan Kaufmann, 1991.

16
J. S. Bridle and D. J. C. MacKay.
Unsupervised classifiers, mutual information and `phantom' targets.
In J. E. Moody, S. J. Hanson, and R. P. Lippmann, editors, Advances in Neural Information Processing Systems 4. San Mateo, CA: Morgan Kaufmann, 1992.

17
G.J. Chaitin.
A theory of program size formally identical to information theory.
Journal of the ACM, 22:329-340, 1975.

18
Y. Chauvin.
Generalization performance of overtrained networks.
In L. B. Almeida and C. J. Wellekens, editors, Proc. of the EURASIP'90 Workshop, Portugal, page 46. Springer, 1990.

19
P. Cheeseman, M. Self, J. Kelly, J. Stutz, W. Taylor, and D. Freeman.
AutoClass: a Bayesian classification system.
In Machine Learning: Proceedings of the Fifth International Workshop. Morgan Kaufmann, San Mateo, CA, 1988.

20
S. Das, C.L. Giles, and G.Z. Sun.
Learning context-free grammars: Capabilities and limitations of a neural network with an external stack memory.
In Proceedings of the Fourteenth Annual Conference of the Cognitive Science Society, Bloomington, 1992.

21
P. Dayan and G. Hinton.
Feudal reinforcement learning.
In S. J. Hanson, J. D. Cowan, and C. L. Giles, editors, Advances in Neural Information Processing Systems 5. San Mateo, CA: Morgan Kaufmann, 1993.
In preparation.

22
M. Eldracher and B. Baginski.
Neural subgoal generation using backpropagation.
Submitted to WCNN'93, July 1993.

23
J. L. Elman.
Finding structure in time.
CRL Technical Report 8801, Center for Research in Language, University of California, San Diego, 1988.

24
S. E. Fahlman.
An empirical study of learning speed in back-propagation networks.
Technical Report CMU-CS-88-162, Carnegie-Mellon Univ., 1988.

25
R. Farber, A. Lapedes, and K. Sirotkin.
Determination of eukaryotic protein coding regions using neural networks and information theory.
Journal of Molecular Biology, 226:471-479, 1992.

26
W. Finnoff.
Diffusion approximations for the constant learning rate backpropagation algorithm and resistance to local minima.
In S. J. Hanson, J. D. Cowan, and C. L. Giles, editors, Advances in Neural Information Processing Systems 5. San Mateo, CA: Morgan Kaufmann, 1993.
In preparation.

27
P. Földiák.
Forming sparse representations by local anti-Hebbian learning.
Biological Cybernetics, 64:165-170, 1990.

28
M. Gherrity.
A learning algorithm for analog fully recurrent neural networks.
In IEEE/INNS International Joint Conference on Neural Networks, San Diego, volume 1, pages 643-644, 1989.

29
C. L. Giles, C. B. Miller, D. Chen, H. H. Chen, G. Z. Sun, and Y. C. Lee.
Learning and extracting finite state automata with second-order recurrent neural networks.
Neural Computation, 4:393-405, 1992.

30
B. Glavina.
Planung kollisionsfreier Bewegungen für Manipulatoren durch Kombination von zielgerichteter Suche und zufallsgesteuerter Zwischenzielerzeugung. Dissertation, 1991.

31
S. Grossberg.
Adaptive pattern classification and universal recoding, I: Parallel development and coding of neural feature detectors.
Biological Cybernetics, 23:121-134, 1976.

32
D. Haussler, M. Kearns, M. Opper, and R. Schapire.
Estimating average-case learning curves using Bayesian, statistical physics and VC dimension methods.
In J. E. Moody, S. J. Hanson, and R. P. Lippmann, editors, Advances in Neural Information Processing Systems 4, pages 855-862. San Mateo, CA: Morgan Kaufmann, 1992.

33
D. O. Hebb.
The Organization of Behavior.
Wiley, New York, 1949.

34
G. Held.
Data Compression.
Wiley and Sons LTD, New York, 1991.

35
G. E. Hinton and S. Becker.
An unsupervised learning procedure that discovers surfaces in random-dot stereograms.
In Proc. IEEE/INNS International Joint Conference on Neural Networks, volume 1, pages 218-222. Hillsdale, NJ: Erlbaum, 1990.

36
G. E. Hinton and T. J. Sejnowski.
Learning and relearning in Boltzmann machines.
In Parallel Distributed Processing, volume 1, pages 282-317. MIT Press, 1986.

37
J. Hochreiter.
Diploma thesis, 1991.
Institut für Informatik, Technische Universität München.

38
J. J. Hopfield.
Neural networks and physical systems with emergent collective computational abilities.
Proc. of the National Academy of Sciences, 79:2554-2558, 1982.

39
C. Ji and D. Psaltis.
The VC dimension versus the statistical capacity of multilayer networks.
In J. E. Moody, S. J. Hanson, and R. P. Lippmann, editors, Advances in Neural Information Processing Systems 4, pages 928-935. San Mateo, CA: Morgan Kaufmann, 1992.

40
M. I. Jordan.
Serial order: A parallel distributed processing approach.
ICS Report 8604, Institute for Cognitive Science, University of California, San Diego, 1986.

41
M. I. Jordan and D. E. Rumelhart.
Supervised learning with a distal teacher.
Occasional Paper #40, Center for Cognitive Science, Massachusetts Institute of Technology, 1990.

42
T. Kohonen.
Self-Organization and Associative Memory.
Springer, second edition, 1988.

43
A.N. Kolmogorov.
Three approaches to the quantitative definition of information.
Problems of Information Transmission, 1:1-11, 1965.

44
S. Kullback.
Information Theory and Statistics.
J. Wiley and Sons, New York, 1959.

45
A. Lapedes and R. Farber.
How neural nets work.
In D. Z. Anderson, editor, Neural Information Processing Systems: Natural and Synthetic (NIPS). New York: American Institute of Physics, 1987.

46
Y. LeCun.
Une procédure d'apprentissage pour réseau à seuil asymétrique.
Proceedings of Cognitiva 85, Paris, pages 599-604, 1985.

47
Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel.
Back-propagation applied to handwritten zip code recognition.
Neural Computation, 1(4):541-551, 1989.

48
D. Lenat.
Theory formation by heuristic search.
Artificial Intelligence, 21, 1983.

49
R. Linsker.
Self-organization in a perceptual network.
IEEE Computer, 21:105-117, 1988.

50
R. Linsker.
How to generate ordered maps by maximizing the mutual information between input and output.
Neural Computation, 1(3):402-411, 1989.

51
G. Lukes.
Review of Schmidhuber's paper `Recurrent networks adjusted by adaptive critics'.
Neural Network Reviews, 4(1):41-42, 1990.

52
Y. Lyuu and I. Rivin.
Tight bounds on transition to perfect generalization.
Neural Computation, 4(6):854-862, 1992.

53
D. J. C. MacKay.
Bayesian interpolation.
Neural Computation, 4:415-447, 1992.

54
D. J. C. MacKay.
A practical Bayesian framework for backprop networks.
Neural Computation, 4:448-472, 1992.

55
M. Minsky.
Steps toward artificial intelligence.
In E. Feigenbaum and J. Feldman, editors, Computers and Thought, pages 406-450. McGraw-Hill, New York, 1963.

56
M. Minsky and S. Papert.
Perceptrons.
Cambridge, MA: MIT Press, 1969.

57
K. Möller and S. Thrun.
Task modularization by network modulation.
In J. Rault, editor, Proceedings of Neuro-Nimes '90, pages 419-432, November 1990.

58
J. E. Moody.
The effective number of parameters: An analysis of generalization and regularization in nonlinear learning systems.
In J. E. Moody, S. J. Hanson, and R. P. Lippmann, editors, Advances in Neural Information Processing Systems 4, pages 847-854. San Mateo, CA: Morgan Kaufmann, 1992.

59
M. C. Mozer.
A focused back-propagation algorithm for temporal sequence recognition.
Complex Systems, 3:349-381, 1989.

60
M. C. Mozer.
Connectionist music composition based on melodic, stylistic, and psychophysical constraints.
Technical Report CU-CS-495-90, University of Colorado at Boulder, 1990.

61
M. C. Mozer.
Induction of multiscale temporal structure.
In J. E. Moody, S. J. Hanson, and R. P. Lippmann, editors, Advances in Neural Information Processing Systems 4, pages 275-282. San Mateo, CA: Morgan Kaufmann, 1992.

62
C. Myers.
Learning with delayed reinforcement through attention-driven buffering.
Technical report, Neural Systems Engineering Group, Dept. of Electrical Engineering, Imperial College of Science, Technology and Medicine, 1990.

63
D. Nguyen and B. Widrow.
The truck backer-upper: An example of self learning in neural networks.
In IEEE/INNS International Joint Conference on Neural Networks, Washington, D.C., volume 1, pages 357-364, 1989.

64
S. J. Nowlan.
Auto-encoding with entropy constraints.
In Proceedings of INNS First Annual Meeting, Boston, MA., 1988.
Also published in a special supplement to Neural Networks.

65
E. Oja.
Neural networks, principal components, and subspaces.
International Journal of Neural Systems, 1(1):61-68, 1989.

66
D. B. Parker.
Learning-logic.
Technical Report TR-47, Center for Comp. Research in Economics and Management Sci., MIT, 1985.

67
D. B. Parker.
Optimal algorithms for adaptive networks: Second order back propagation, second order direct propagation, and second order Hebbian learning.
In IEEE 1st International Conference on Neural Networks, San Diego, volume 2, pages 593-600, 1987.

68
B. A. Pearlmutter.
Learning state space trajectories in recurrent neural networks.
Neural Computation, 1(2):263-269, 1989.

69
B. A. Pearlmutter and G. E. Hinton.
G-maximization: An unsupervised learning procedure for discovering regularities.
In J. S. Denker, editor, Neural Networks for Computing: American Institute of Physics Conference Proceedings 151, volume 2, pages 333-338, 1986.

70
F. J. Pineda.
Dynamics and architecture for neural computation.
Journal of Complexity, 4:216-245, 1988.

71
F. J. Pineda.
Recurrent backpropagation and the dynamical approach to adaptive neural computation.
Neural Computation, 1(2):161-172, 1989.

72
F. J. Pineda.
Time dependent adaptive neural networks.
In D. S. Touretzky, editor, Advances in Neural Information Processing Systems 2, pages 710-718. San Mateo, CA: Morgan Kaufmann, 1990.

73
M. D. Plumbley.
On information theory and unsupervised neural networks. Dissertation, published as technical report CUED/F-INFENG/TR.78, Engineering Department, Cambridge University, 1991.

74
J. B. Pollack.
Recursive distributed representation.
Artificial Intelligence, 46:77-105, 1990.

75
D. Prelinger.
Diploma thesis, 1992.
Institut für Informatik, Technische Universität München.

76
M. B. Ring.
PhD Proposal: Autonomous construction of sensorimotor hierarchies in neural networks.
Technical report, University of Texas at Austin, 1990.

77
M. B. Ring.
Incremental development of complex behaviors through automatic construction of sensory-motor hierarchies.
In L. Birnbaum and G. Collins, editors, Machine Learning: Proceedings of the Eighth International Workshop, pages 343-347. Morgan Kaufmann, 1991.

78
A. J. Robinson.
Dynamic Error Propagation Networks.
PhD thesis, Trinity Hall and Cambridge University Engineering Department, 1989.

79
A. J. Robinson and F. Fallside.
The utility driven dynamic error propagation network.
Technical Report CUED/F-INFENG/TR.1, Cambridge University Engineering Department, 1987.

80
R. Rohwer.
The `moving targets' training method.
In J. Kindermann and A. Linden, editors, Proceedings of `Distributed Adaptive Neural Information Processing', St. Augustin, 24.-25.5. 1989. Oldenbourg, 1990.

81
R. Rohwer and B. Forrest.
Training time-dependence in neural networks.
In IEEE 1st International Conference on Neural Networks, San Diego, volume 2, pages 701-708, 1987.

82
M. Röscheisen, R. Hofmann, and V. Tresp.
Neural control for rolling mills: Incorporating domain theories to overcome data deficiencies.
In J. E. Moody, S. J. Hanson, and R. P. Lippmann, editors, Advances in Neural Information Processing Systems 4, pages 659-666. San Mateo, CA: Morgan Kaufmann, 1992.

83
F. Rosenblatt.
Principles of Neurodynamics.
Spartan, New York, 1962.

84
J. Rubner and K. Schulten.
Development of feature detectors by self-organization: A network model.
Biological Cybernetics, 62:193-199, 1990.

85
D. E. Rumelhart, G. E. Hinton, and R. J. Williams.
Learning internal representations by error propagation.
In Parallel Distributed Processing, volume 1, pages 318-362. MIT Press, 1986.

86
D. E. Rumelhart and D. Zipser.
Feature discovery by competitive learning.
In Parallel Distributed Processing, pages 151-193. MIT Press, 1986.

87
T. D. Sanger.
An optimality principle for unsupervised learning.
In D. S. Touretzky, editor, Advances in Neural Information Processing Systems 1, pages 11-19. San Mateo, CA: Morgan Kaufmann, 1989.

88
J. H. Schmidhuber.
Evolutionary principles in self-referential learning, or on learning how to learn: The meta-meta-... hook.
Diploma thesis, Institut für Informatik, Technische Universität München, 1987.

89
J. H. Schmidhuber.
Accelerated learning in back-propagation nets.
In R. Pfeifer, Z. Schreter, F. Fogelman-Soulié, and L. Steels, editors, Connectionism in Perspective, pages 429-438. Amsterdam: Elsevier, North-Holland, 1989.

90
J. H. Schmidhuber.
A local learning algorithm for dynamic feedforward and recurrent networks.
Connection Science, 1(4):403-412, 1989.

91
J. H. Schmidhuber.
The neural bucket brigade.
In R. Pfeifer, Z. Schreter, F. Fogelman-Soulié, and L. Steels, editors, Connectionism in Perspective, pages 439-446. Amsterdam: Elsevier, North-Holland, 1989.

92
J. H. Schmidhuber.
Additional remarks on G. Lukes' review of Schmidhuber's paper `Recurrent networks adjusted by adaptive critics'.
Neural Network Reviews, 4(1):43, 1990.

93
J. H. Schmidhuber.
Dynamische neuronale Netze und das fundamentale raumzeitliche Lernproblem. Dissertation, Institut für Informatik, Technische Universität München, 1990.

94
J. H. Schmidhuber.
Learning algorithms for networks with internal and external feedback.
In D. S. Touretzky, J. L. Elman, T. J. Sejnowski, and G. E. Hinton, editors, Proc. of the 1990 Connectionist Models Summer School, pages 52-61. San Mateo, CA: Morgan Kaufmann, 1990.

95
J. H. Schmidhuber.
Networks adjusting networks.
In J. Kindermann and A. Linden, editors, Proceedings of `Distributed Adaptive Neural Information Processing', St.Augustin, 24.-25.5. 1989, pages 197-208. Oldenbourg, 1990.
In November 1990 a revised and extended version appeared as FKI-Report FKI-125-90 (revised) at the Institut für Informatik, Technische Universität München.

96
J. H. Schmidhuber.
An on-line algorithm for dynamic reinforcement learning and planning in reactive environments.
In Proc. IEEE/INNS International Joint Conference on Neural Networks, San Diego, volume 2, pages 253-258, 1990.

97
J. H. Schmidhuber.
Recurrent networks adjusted by adaptive critics.
In Proc. IEEE/INNS International Joint Conference on Neural Networks, Washington, D. C., volume 1, pages 719-722, 1990.

98
J. H. Schmidhuber.
Reinforcement learning with interacting continually running fully recurrent networks.
In Proc. INNC International Neural Network Conference, Paris, volume 2, pages 817-820, 1990.

99
J. H. Schmidhuber.
Reinforcement-Lernen und adaptive Steuerung.
Nachrichten Neuronale Netze, 2:1-3, 1990.

100
J. H. Schmidhuber.
Temporal-difference-driven learning in recurrent networks.
In R. Eckmiller, G. Hartmann, and G. Hauske, editors, Parallel Processing in Neural Systems and Computers, pages 209-212. North-Holland, 1990.

101
J. H. Schmidhuber.
Adaptive decomposition of time.
In T. Kohonen, K. Mäkisara, O. Simula, and J. Kangas, editors, Artificial Neural Networks, pages 909-914. Elsevier Science Publishers B.V., North-Holland, 1991.

102
J. H. Schmidhuber.
Adaptive history compression for learning to divide and conquer.
In Proc. International Joint Conference on Neural Networks, Singapore, volume 2, pages 1130-1135. IEEE, 1991.

103
J. H. Schmidhuber.
Curious model-building control systems.
In Proc. International Joint Conference on Neural Networks, Singapore, volume 2, pages 1458-1463. IEEE, 1991.

104
J. H. Schmidhuber.
Learning factorial codes by predictability minimization.
Technical Report CU-CS-565-91, Dept. of Comp. Sci., University of Colorado at Boulder, December 1991.

105
J. H. Schmidhuber.
Learning temporary variable binding with dynamic links.
In Proc. International Joint Conference on Neural Networks, Singapore, volume 3, pages 2075-2079. IEEE, 1991.

106
J. H. Schmidhuber.
Learning to generate sub-goals for action sequences.
In T. Kohonen, K. Mäkisara, O. Simula, and J. Kangas, editors, Artificial Neural Networks, pages 967-972. Elsevier Science Publishers B.V., North-Holland, 1991.

107
J. H. Schmidhuber.
A possibility for implementing curiosity and boredom in model-building neural controllers.
In J. A. Meyer and S. W. Wilson, editors, Proc. of the International Conference on Simulation of Adaptive Behavior: From Animals to Animats, pages 222-227. MIT Press/Bradford Books, 1991.

108
J. H. Schmidhuber.
Reinforcement learning in Markovian and non-Markovian environments.
In R. P. Lippmann, J. E. Moody, and D. S. Touretzky, editors, Advances in Neural Information Processing Systems 3, pages 500-506. San Mateo, CA: Morgan Kaufmann, 1991.

109
J. H. Schmidhuber.
A fixed size storage $O(n^3)$ time complexity learning algorithm for fully recurrent continually running networks.
Neural Computation, 4(2):243-248, 1992.

110
J. H. Schmidhuber.
Learning complex, extended sequences using the principle of history compression.
Neural Computation, 4(2):234-242, 1992.

111
J. H. Schmidhuber.
Learning factorial codes by predictability minimization.
Neural Computation, 4(6):863-879, 1992.

112
J. H. Schmidhuber.
Learning to control fast-weight memories: An alternative to recurrent nets.
Neural Computation, 4(1):131-139, 1992.

113
J. H. Schmidhuber.
Learning unambiguous reduced sequence descriptions.
In J. E. Moody, S. J. Hanson, and R. P. Lippmann, editors, Advances in Neural Information Processing Systems 4, pages 291-298. San Mateo, CA: Morgan Kaufmann, 1992.

114
J. H. Schmidhuber.
Steps towards `self-referential' learning.
Technical Report CU-CS-627-92, Dept. of Comp. Sci., University of Colorado at Boulder, November 1992.

115
J. H. Schmidhuber.
A neural network that embeds its own meta-levels.
In Proc. of the International Conference on Neural Networks '93, San Francisco. IEEE, 1993.
Accepted for publication.

116
J. H. Schmidhuber.
On decreasing the ratio between learning complexity and number of time varying variables in fully recurrent nets.
Technical report, Institut für Informatik, Technische Universität München, 1993.
In preparation.

117
J. H. Schmidhuber and R. Huber.
Learning to generate artificial fovea trajectories for target detection.
International Journal of Neural Systems, 2(1 & 2):135-141, 1991.

118
J. H. Schmidhuber and R. Huber.
Using sequential adaptive neuro-control for efficient learning of rotation and translation invariance.
In T. Kohonen, K. Mäkisara, O. Simula, and J. Kangas, editors, Artificial Neural Networks, pages 315-320. Elsevier Science Publishers B.V., North-Holland, 1991.

119
J. H. Schmidhuber, M. C. Mozer, and D. Prelinger.
Continuous history compression.
Technical report, Dept. of Comp. Sci., University of Colorado at Boulder, 1993.
In preparation.

120
J. H. Schmidhuber and D. Prelinger.
Discovering predictable classifications.
Technical Report CU-CS-626-92, Dept. of Comp. Sci., University of Colorado at Boulder, November 1992.

121
J. H. Schmidhuber and D. Prelinger.
Discovering predictable classifications.
Accepted by Neural Computation, 1993.

122
J. H. Schmidhuber and R. Wahnsiedler.
Planning simple trajectories using neural subgoal generators.
In J. A. Meyer, H. Roitblat, and S. Wilson, editors, Proc. of the 2nd International Conference on Simulation of Adaptive Behavior. MIT Press, 1992.
In press.

123
B. Schürmann.
Stability and adaptation in artificial neural systems.
Physical Review A, 40(5):2681-2688, 1989.

124
C. E. Shannon.
A mathematical theory of communication (part III).
Bell System Technical Journal, XXVII:623-656, 1948.

125
C. E. Shannon.
A mathematical theory of communication (parts I and II).
Bell System Technical Journal, XXVII:379-423, 1948.

126
F. M. Silva and L. B. Almeida.
A distributed decorrelation algorithm.
In E. Gelenbe, editor, Neural Networks: Advances and Applications. North-Holland, 1991.

127
P.Y. Simard and Y. LeCun.
Local computation of the second derivative information in a multi-layer network.
In S. J. Hanson, J. D. Cowan, and C. L. Giles, editors, Advances in Neural Information Processing Systems 5. San Mateo, CA: Morgan Kaufmann, 1993.
In preparation.

128
S.P. Singh.
The efficient learning of multiple task sequences.
In J. E. Moody, S. J. Hanson, and R. P. Lippmann, editors, Advances in Neural Information Processing Systems 4. San Mateo, CA: Morgan Kaufmann, 1992.

129
A. W. Smith and D. Zipser.
Learning sequential structures with the real-time recurrent learning algorithm.
International Journal of Neural Systems, 1:125-131, 1990.

130
S. Solla.
The emergence of generalization ability in learning.
In S. J. Hanson, J. D. Cowan, and C. L. Giles, editors, Advances in Neural Information Processing Systems 5. San Mateo, CA: Morgan Kaufmann, 1993.
In preparation.

131
S. A. Solla.
Accelerated learning in layered neural networks.
Complex Systems, 2:625-640, 1988.

132
P. Stolorz, A. Lapedes, and X. Xia.
Predicting protein secondary structure using neural net and statistical methods.
Journal of Molecular Biology, 225:363-377, 1992.

133
G.Z. Sun, H.H. Chen, C.L. Giles, Y.C. Lee, and D. Chen.
Connectionist pushdown automata that learn context-free grammars.
In Proceedings of the International Joint Conference on Neural Networks IJCNN-90, volume 1, page 577. Lawrence Erlbaum, Hillsdale, N.J., 1990.

134
G. Tesauro.
Practical issues in temporal difference learning.
In D. S. Lippman, J. E. Moody, and D. S. Touretzky, editors, Advances in Neural Information Processing Systems 4,, pages 259-266. San Mateo, CA: Morgan Kaufmann, 1992.

135
N. Tishby, E. Levin, and S. Solla.
Consistent inference of probabilities in layered networks: Predictions and generalization.
In Proceedings of the International Joint Conference on Neural Networks IJCNN-90, volume 2, pages 403-409. Lawrence Erlbaum, Hillsdale, N.J., 1990.

136
A. C. Tsoi and R. A. Pearson.
Comparison of three classification techniques, CART, C4.5 and multi-layer perceptrons.
In R. P. Lippmann, J. E. Moody, and D. S. Touretzky, editors, Advances in Neural Information Processing Systems 3, pages 963-969. San Mateo, CA: Morgan Kaufmann, 1991.

137
V. Vapnik.
Principles of risk minimization for learning theory.
In J. E. Moody, S. J. Hanson, and R. P. Lippmann, editors, Advances in Neural Information Processing Systems 4, pages 831-838. San Mateo, CA: Morgan Kaufmann, 1992.

138
C. von der Malsburg.
The correlation theory of brain function.
Technical Report 81-2, Abteilung für Neurobiologie, Max-Planck-Institut für biophysikalische Chemie, Göttingen, 1981.

139
R. Wahnsiedler.
Diploma thesis, 1992.
Institut für Informatik, Technische Universität München.

140
C. Watkins.
Learning from Delayed Rewards.
PhD thesis, King's College, Cambridge, 1989.

141
R. L. Watrous and G. M. Kuhn.
Induction of finite-state languages using second-order recurrent networks.
Neural Computation, 4:406-414, 1992.

142
A. S. Weigend, B. A. Huberman, and D. E. Rumelhart.
Predicting the future: A connectionist approach.
International Journal of Neural Systems, 1:193-209, 1990.

143
P. J. Werbos.
Beyond Regression: New Tools for Prediction and Analysis in the Behavioral Sciences.
PhD thesis, Harvard University, 1974.

144
P. J. Werbos.
Generalization of backpropagation with application to a recurrent gas market model.
Neural Networks, 1, 1988.

145
H. White.
Learning in artificial neural networks: A statistical perspective.
Neural Computation, 1(4):425-464, 1989.

146
R. J. Williams.
Reinforcement-learning in connectionist networks: A mathematical analysis.
Technical Report 8605, Institute for Cognitive Science, University of California, San Diego, 1986.

147
R. J. Williams.
Toward a theory of reinforcement-learning connectionist systems.
Technical Report NU-CCS-88-3, College of Comp. Sci., Northeastern University, Boston, MA, 1988.

148
R. J. Williams.
Complexity of exact gradient computation algorithms for recurrent neural networks.
Technical Report NU-CCS-89-27, College of Computer Science, Northeastern University, Boston, MA, 1989.

149
R. J. Williams and J. Peng.
An efficient gradient-based algorithm for on-line training of recurrent network trajectories.
Neural Computation, 2(4):490-501, 1990.

150
R. J. Williams and D. Zipser.
Experimental analysis of the real-time recurrent learning algorithm.
Connection Science, 1(1):87-111, 1989.

151
R. J. Williams and D. Zipser.
A learning algorithm for continually running fully recurrent networks.
Neural Computation, 1(2):270-280, 1989.

152
R. J. Williams and D. Zipser.
Gradient-based learning algorithms for recurrent networks and their computational complexity.
In Back-propagation: Theory, Architectures and Applications. Hillsdale, NJ: Erlbaum, 1992.

153
D.J. Willshaw and C. von der Malsburg.
How patterned neural connections can be set up by self-organization.
Proc. R. Soc. London B, 194:431-445, 1976.

154
A. Wyner and J. Ziv.
Fixed data base version of the Lempel-Ziv data compression algorithm.
IEEE Transactions on Information Theory, 37:878-880, 1991.

155
R. S. Zemel and G. E. Hinton.
Discovering viewpoint-invariant relationships that characterize objects.
In D. S. Lippman, J. E. Moody, and D. S. Touretzky, editors, Advances in Neural Information Processing Systems 3, pages 299-305. San Mateo, CA: Morgan Kaufmann, 1991.

156
D. Zipser.
A subgrouping strategy that reduces learning complexity and speeds up learning in recurrent networks.
Neural Computation, 1(4):552-558, 1989.

157
D. Zipser.
Recurrent network model of short-term active memory.
Neural Computation, 3(2):179-193, 1991.

158
J. Ziv and A. Lempel.
A universal algorithm for sequential data compression.
IEEE Transactions on Information Theory, IT-23(3):337-343, 1977.


