Artificial Intelligence: History
Highlights and Outlook: AI Maturing and Becoming a Real Formal Science
Modern computer science and AI are rooted in the pre-war work of Goedel, Turing, and Zuse.
Until 2000 or so, most AI systems were limited and based on heuristics. In the new millennium, a new type of universal AI has gained momentum. It is mathematically sound, combining theoretical computer science and probability theory to derive optimal behavior for robots and other embedded agents. Deep learning is driving modern AI applications.
In 1931, Goedel laid the foundations of Theoretical Computer Science and AI.
He published the first universal formal language to create general computational theorem provers, and discovered the fundamental limitations of mathematics, computers, and AI. (Around the same time, Lilienfeld and Heil patented the first transistors.)
In 1936, Turing reformulated Goedel's result and Church's extension thereof.
To do this, he introduced the Turing machine, which became the main tool of CS theory. In 1950 he invented a subjective test to decide whether something is intelligent.
From 1935 to 1941, Zuse built the first working program-controlled computers.
In the 1940s he devised the first high-level programming language and wrote the first chess program (back then chess-playing was considered an intelligent activity). Soon afterwards, Shannon published information theory, and Shockley et al. re-invented Lilienfeld's transistor (1928).
McCarthy coined the term "AI" in the 1950s. In the 60s, general AI theory started with Solomonoff's universal predictors.
But failed predictions of human-level AI with just a tiny fraction of the brain's computing power discredited the field. Practical AI of the 60s and 70s was dominated by rule-based expert systems and Logic Programming, extending Goedel's original work on theorem proving.
In the 1980s and 90s, mainstream AI married probability theory (Bayes nets, etc.).
"Subsymbolic" AI became popular, including neural nets (McCulloch & Pitts, 40s; Kohonen,
Minsky & Papert, Amari, 60s; Werbos, 70s; many others),
fuzzy logic (Zadeh, 60s),
artificial evolution
(Rechenberg, 60s, Holland, 70s), "representation-free" AI (Brooks),
artificial ants (Dorigo, Gambardella, 90s),
statistical learning theory & support vector machines (Vapnik & others)
In the 1990s and 2000s, much of the progress in practical AI was due to better hardware, getting roughly 100 times faster per dollar per decade.
In 1995, a fast vision-based robot car by Dickmanns autonomously drove 1000 miles in traffic at up to 120 mph. Japanese labs (Honda, Sony) and TUM built famous humanoid robots. Chess world champion Kasparov was beaten by a fast IBM computer running a fairly standard algorithm. Rather simple but computationally expensive probabilistic methods for speech recognition, statistical machine translation, computer vision, optimization, etc. started to become feasible on fast PCs. Fundamental breakthroughs in general-purpose deep learning with recurrent neural networks of the 1990s had to wait for faster computers.
Compare: J. Schmidhuber. Celebrating 75 years of AI - History and Outlook: the Next 25 Years. In Proc. 50th Anniversary of AI, pp. 29-41, LNAI 4850, Springer, 2007. arxiv.org/abs/0708.4311.
Compare: J. Schmidhuber. The New AI is general and mathematically rigorous. Front. Electr. Electron. Eng. China (DOI 10.1007/s11460-010-0105-z), 2010.