
Bibliography

1
S. Amari, A. Cichocki, and H.H. Yang.
A new learning algorithm for blind signal separation.
In David S. Touretzky, Michael C. Mozer, and Michael E. Hasselmo, editors, Advances in Neural Information Processing Systems, volume 8. The MIT Press, Cambridge, MA, 1996.

2
H. B. Barlow, T. P. Kaushal, and G. J. Mitchison.
Finding minimum entropy codes.
Neural Computation, 1(3):412-423, 1989.

3
H. G. Barrow.
Learning receptive fields.
In Proceedings of the IEEE 1st Annual Conference on Neural Networks, volume IV, pages 115-121. IEEE, 1987.

4
A. J. Bell and T. J. Sejnowski.
An information-maximization approach to blind separation and blind deconvolution.
Neural Computation, 7(6):1129-1159, 1995.

5
J.-F. Cardoso and A. Souloumiac.
Blind beamforming for non-Gaussian signals.
IEE Proceedings-F, 140(6):362-370, 1993.

6
P. Dayan and R. Zemel.
Competition and multiple cause models.
Neural Computation, 7:565-579, 1995.

7
G. Deco and L. Parra.
Nonlinear feature extraction by unsupervised redundancy reduction with a stochastic neural network.
Technical Report ZFE ST SN 41, Siemens AG, 1994.

8
B. A. Olshausen and D. J. Field.
Emergence of simple-cell receptive field properties by learning a sparse code for natural images.
Nature, 381(6583):607-609, 1996.

9
D. J. Field.
What is the goal of sensory coding?
Neural Computation, 6:559-601, 1994.

10
P. Földiák and M. P. Young.
Sparse coding in the primate cortex.
In M. A. Arbib, editor, The Handbook of Brain Theory and Neural Networks, pages 895-898. The MIT Press, Cambridge, MA, 1995.

11
G. E. Hinton, P. Dayan, B. J. Frey, and R. M. Neal.
The wake-sleep algorithm for unsupervised neural networks.
Science, 268:1158-1160, 1995.

12
G. E. Hinton and Z. Ghahramani.
Generative models for discovering sparse distributed representations.
Philosophical Transactions of the Royal Society B, 352:1177-1190, 1997.

13
S. Hochreiter and J. Schmidhuber.
Simplifying neural nets by discovering flat minima.
In G. Tesauro, D. S. Touretzky, and T. K. Leen, editors, Advances in Neural Information Processing Systems 7, pages 529-536. MIT Press, Cambridge, MA, 1995.

14
S. Hochreiter and J. Schmidhuber.
Flat minima.
Neural Computation, 9(1):1-42, 1997.

15
S. Hochreiter and J. Schmidhuber.
LOCOCODE.
Technical Report FKI-222-97, Revised Version, Fakultät für Informatik, Technische Universität München, 1998.

16
T. Kohonen.
Self-Organization and Associative Memory.
Springer, second edition, 1988.

17
M. S. Lewicki and B. A. Olshausen.
Inferring sparse, overcomplete image codes using an efficient coding framework.
In M. I. Jordan, M. J. Kearns, and S. A. Solla, editors, Advances in Neural Information Processing Systems 10, 1998.
To appear.

18
R. Linsker.
Self-organization in a perceptual network.
IEEE Computer, 21:105-117, 1988.

19
L. Molgedey and H. G. Schuster.
Separation of independent signals using time-delayed correlations.
Physical Review Letters, 72(23):3634-3637, 1994.

20
M. C. Mozer.
Discovering discrete distributed representations with iterative competitive learning.
In R. P. Lippmann, J. E. Moody, and D. S. Touretzky, editors, Advances in Neural Information Processing Systems 3, pages 627-634. Morgan Kaufmann, San Mateo, CA, 1991.

21
E. Oja.
Neural networks, principal components, and subspaces.
International Journal of Neural Systems, 1(1):61-68, 1989.

22
D. E. Rumelhart and D. Zipser.
Feature discovery by competitive learning.
In Parallel Distributed Processing, pages 151-193. MIT Press, 1986.

23
J. Schmidhuber.
Learning factorial codes by predictability minimization.
Neural Computation, 4(6):863-879, 1992.

24
S. Watanabe.
Pattern Recognition: Human and Mechanical.
Wiley, New York, 1985.

25
R. S. Zemel and G. E. Hinton.
Developing population codes by minimizing description length.
In J. D. Cowan, G. Tesauro, and J. Alspector, editors, Advances in Neural Information Processing Systems 6, pages 11-18. Morgan Kaufmann, San Mateo, CA, 1994.



Juergen Schmidhuber 2003-02-25