Recurrent Networks in Engineering

Paul Werbos, NSF

Dozens of experimental studies have demonstrated that recurrent nets are essential to high levels of capability in most real-world engineering applications. Problems of dissemination, software, and complexity, plus a certain amount of sheer laziness and politics, are arguably the main reasons we do not see more of them. NSF (www.eng.nsf.gov/ecs) would like to see more proposals in this area.

Discrete-time (t/t+1) formulations may be best for most engineering analysis and applications. The most general recurrent net can be expressed in that formulation as a Time-Lagged Recurrent Network (TLRN) wrapped around a "static" network containing simultaneous recurrence (a toy sketch of this formulation appears at the end of this note).

Ford Research, among others, has shown that TLRNs are essential to challenging applications like the development of clean air chips. At Yale2003, Feldkamp and Prokhorov of Ford showed that TLRNs trained by backprop through time (BTT) perform state estimation as well as the most complicated particle filter methods, and significantly better than extended Kalman filters. At WCCI2002, on simpler examples (which may not be representative of more difficult problems), they showed roughly equal performance between conventional TLRNs trained by BTT and LSTM (which is structurally the same as one of the variants of the "sticky neuron" I proposed in 1990, based on some evidence about Purkinje cells). Good state estimation, in turn, is a crucial prerequisite to "adaptive" capability in brain-like reinforcement learning designs. (See ebrains.la.asu/~nsfadp and www.iamcm.org/publications.)

Simultaneous recurrence, by contrast, offers superior ability to learn tricky nonlinear mappings. (See adap-org/9806001 in nlin-sys at http://arXiv.org.) It has not caught on in the neural network world proper because of slow learning -- but we see a way around that, now in process. A recent extension of simultaneous recurrence, the (patented) ObjectNet, offers hope of coherent prediction and management of systems like global electric power grids, where the huge number of variables and special connectivity drastically limit what can be done with feedforward networks.

BTT is usually the practical choice for training, but it is not brain-like. Our Error Critic design (based on an extension of DHP reinforcement learning) offers hope of a workable brain-like system -- but the only implementation to date is a control design in Prokhorov's PhD thesis, based on the cerebellum-model extension of the Error Critic described in the Handbook of Intelligent Control. (Both BTT and the error-critic idea are sketched at the end of this note.)
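
A minimal sketch of the TLRN-around-simultaneous-recurrence formulation described above, in Python/numpy. The tanh core, the weight sizes, and the naive fixed-point relaxation are all invented for illustration; they are not the networks used in any of the studies cited.

import numpy as np

rng = np.random.default_rng(0)
n_in, n_state = 3, 5

# Illustrative weights for the inner "static" core: y <- tanh(A @ [x; z; y] + b).
A = rng.normal(scale=0.2, size=(n_state, n_in + 2 * n_state))
b = np.zeros(n_state)

def static_core(x, z, n_iter=50, tol=1e-6):
    """Simultaneous recurrence: relax y to a fixed point of the static net."""
    y = np.zeros(n_state)
    for _ in range(n_iter):
        y_new = np.tanh(A @ np.concatenate([x, z, y]) + b)
        if np.max(np.abs(y_new - y)) < tol:
            return y_new
        y = y_new
    return y

def tlrn_run(xs):
    """Time-lagged recurrence: carry the relaxed state from t to t+1."""
    z = np.zeros(n_state)
    states = []
    for x in xs:
        z = static_core(x, z)  # z(t+1) = f(x(t), z(t))
        states.append(z)
    return np.array(states)

print(tlrn_run(rng.normal(size=(10, n_in))).shape)  # (10, 5)

The outer loop is the TLRN proper (state carried across time steps); the inner loop is the simultaneous recurrence, relaxing the "static" network to a self-consistent answer within each time step.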
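
BTT itself can be sketched just as compactly. Below, a toy Elman-style TLRN is trained by backprop through time on next-value prediction of a sine wave; the architecture, learning rate, and task are illustrative assumptions, not those of the Ford work.

import numpy as np

rng = np.random.default_rng(1)
n_in, n_hid, n_out, T = 1, 8, 1, 20

Wx = rng.normal(scale=0.3, size=(n_hid, n_in))
Wh = rng.normal(scale=0.3, size=(n_hid, n_hid))
Wo = rng.normal(scale=0.3, size=(n_out, n_hid))

def bptt_step(xs, ts, lr=0.05):
    """One forward pass plus backprop through time over a whole sequence."""
    hs = [np.zeros(n_hid)]
    ys = []
    for x in xs:                                  # forward through time
        hs.append(np.tanh(Wx @ x + Wh @ hs[-1]))
        ys.append(Wo @ hs[-1])
    dWx, dWh, dWo = np.zeros_like(Wx), np.zeros_like(Wh), np.zeros_like(Wo)
    dh_next = np.zeros(n_hid)
    loss = 0.0
    for t in reversed(range(len(xs))):            # backward through time
        e = ys[t] - ts[t]
        loss += 0.5 * float(e @ e)
        dWo += np.outer(e, hs[t + 1])
        dh = Wo.T @ e + dh_next                   # credit from output and future
        dz = (1.0 - hs[t + 1] ** 2) * dh          # tanh' = 1 - tanh^2
        dWx += np.outer(dz, xs[t])
        dWh += np.outer(dz, hs[t])
        dh_next = Wh.T @ dz                       # pass credit one step back
    for W, dW in ((Wx, dWx), (Wh, dWh), (Wo, dWo)):
        W -= lr * dW                              # in-place gradient step
    return loss

# Toy task: predict sin(k+1) from sin(k); the loss should fall over epochs.
xs = np.sin(np.arange(T))[:, None]
ts = np.sin(np.arange(1, T + 1))[:, None]
for epoch in range(200):
    loss = bptt_step(xs, ts)
print(round(loss, 4))

The backward sweep over the stored trajectory is what makes BTT exact -- and also what makes it not brain-like: a brain has no obvious way to run time backwards over a stored record of its own states.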
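
Finally, a generic sketch of the error-critic idea, with heavy caveats: this is not the DHP-based design, nor the cerebellum-model extension in the Handbook of Intelligent Control or Prokhorov's thesis. It only illustrates the general concept of a critic network (here an assumed linear one, V) that learns, forward in time, to estimate the adjoint lambda(t) = dJ/dh(t) that BTT would compute by sweeping backward.

import numpy as np

rng = np.random.default_rng(2)
n_in, n_hid, n_out = 1, 8, 1

Wx = rng.normal(scale=0.3, size=(n_hid, n_in))
Wh = rng.normal(scale=0.3, size=(n_hid, n_hid))
Wo = rng.normal(scale=0.3, size=(n_out, n_hid))
V = np.zeros((n_hid, n_hid))     # linear critic: lambda(t) ~= V @ h(t)

def train_online(xs, ts, lr=0.02, lr_c=0.02):
    """Forward-in-time training: the critic supplies the future error signal
    that BTT would otherwise obtain from a backward sweep."""
    h_prev = np.zeros(n_hid)
    h = np.tanh(Wx @ xs[0] + Wh @ h_prev)
    loss = 0.0
    for t in range(len(xs) - 1):
        e = Wo @ h - ts[t]
        loss += 0.5 * float(e @ e)
        h_next = np.tanh(Wx @ xs[t + 1] + Wh @ h)
        # Bootstrapped target mirroring the BTT recursion one step ahead:
        # lambda(t) ~= dL(t)/dh(t) + (dh(t+1)/dh(t))^T lambda(t+1)
        lam_next = V @ h_next
        tgt = Wo.T @ e + Wh.T @ ((1.0 - h_next ** 2) * lam_next)
        V[...] -= lr_c * np.outer(V @ h - tgt, h)   # move critic toward target
        dz = (1.0 - h ** 2) * tgt                   # critic's error signal
        Wx[...] -= lr * np.outer(dz, xs[t])
        Wh[...] -= lr * np.outer(dz, h_prev)
        Wo[...] -= lr * np.outer(e, h)
        h_prev, h = h, h_next
    return loss

xs = np.sin(np.arange(40))[:, None]
ts = np.sin(np.arange(1, 41))[:, None]
for epoch in range(300):
    loss = train_online(xs, ts)
print(round(loss, 4))

Nothing here runs backward in time; the price is that the error signal is only as good as the critic's approximation, which is exactly why workable designs of this kind remain an open research problem.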