Artificial Intelligence
History
Highlights and Outlook: AI Maturing and Becoming a Real Formal Science
 
Modern Computer Science and AI are rooted in the prewar work of
Goedel,
Turing,
and Zuse
 

Until 2000 or so, most AI systems were limited
and based on heuristics. In the
new millennium
a new type of universal AI
has gained momentum. It is mathematically sound, combining
theoretical computer science and probability theory to
derive optimal behavior for robots and other systems embedded
in a physical environment.
 
In 1931, Goedel
laid the foundations of Theoretical Computer Science
 

He published the first universal formal language
and showed that math itself is either flawed or allows for unprovable but
true statements. Some mistakenly thought this proves that AIs will always be inferior to humans.
(Around the same time,
Lilienfeld and
Heil patented the first transistors.)
 
In 1936, Turing reformulated Goedel's result and Church's extension
thereof
 

To do this, he introduced the Turing machine, which became the main tool of CS theory.
In 1950 he proposed a subjective test (now known as the Turing test) to decide whether something is intelligent
 
From 1935 to 1941, Zuse built the first working program-controlled computers
 

In the 1940s he devised the first high-level programming language,
and wrote the first chess program
(back then chess-playing was considered an intelligent activity).
Soon afterwards, Shannon published information theory, and
Shockley et al.
reinvented
Lilienfeld's
transistor (1928)
 
McCarthy coined the term "AI" in the 1950s. In the 60s, general AI theory started
with Solomonoff's universal predictors
 

But failed predictions of
human-level AI with just a tiny fraction of the
brain's computing power discredited the field.
Practical AI of the 60s and 70s was dominated by rule-based expert systems and Logic Programming
 
In the 1980s and 90s, mainstream AI married probability theory (Bayes nets, etc.)
 

"Subsymbolic" AI became popular, including neural nets (McCulloch & Pitts, 40s; Kohonen,
Minsky & Papert, Amari, 60s; Werbos, 70s; many others),
fuzzy logic (Zadeh, 60s),
artificial evolution
(Rechenberg, 60s; Holland, 70s), "representation-free" AI (Brooks),
artificial ants (Dorigo, Gambardella, 90s),
statistical learning theory & support vector machines (Vapnik & others)
 
In the 1990s and 2000s, much of the progress in practical AI was due to better hardware,
getting roughly 1000 times
faster per dollar per decade
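The compounding implied by this rule of thumb can be sketched in a few lines (an illustrative calculation only; the 1000x-per-decade figure is the text's rough estimate, and the function name is ours):

```python
def speedup(years, per_decade=1000.0):
    """Cumulative speed-per-dollar factor after `years`,
    assuming a constant `per_decade` improvement rate
    (the text's rough 1000x-per-decade estimate)."""
    return per_decade ** (years / 10.0)

# 1000x per decade implies roughly a doubling every year:
print(round(speedup(1), 2))   # ~2.0
# and a factor of about 30 million over 25 years:
print(f"{speedup(25):.2e}")   # ~3.16e+07
```

Under this assumption, a 25-year horizon multiplies speed per dollar by roughly 10^7.5, which is the kind of extrapolation behind the brain-parity estimate mentioned later in the text.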
 

In 1995, a fast vision-based robot car by Dickmanns autonomously drove
1000 miles in traffic at up to 120 mph. Japanese labs (Honda, Sony)
and TUM built
famous humanoid robots.
There were few if any fundamental software breakthroughs;
improvements / extensions of already existing algorithms seemed less impressive and
less crucial than hardware advances.
For example, chess world champion Kasparov was beaten
by a fast IBM computer running a fairly standard algorithm.
Rather simple but computationally expensive
probabilistic methods for speech recognition,
statistical machine translation,
computer vision, optimization etc.
started to become feasible on fast PCs.
 
In the new millennium the first
mathematical theory of universal AI emerged,
combining "old" theoretical computer science and "ancient" probability theory to
derive optimal behavior for embedded rational agents.
A sign that AI
is maturing and becoming a real formal science!
 

Will this mathematically sound type of New AI
and its associated optimality theorems be considered a milestone 50 years
from now? Some IDSIA links on this topic:
Universal AI,
Goedel machines,
Universal search.
Less universal methods (but still more general than most traditional AI)
learn programs and sequences
(as opposed to conventional input/output mappings)
with feedback networks and obtain the
best known results in some applications.
To exploit them to the max, however,
we'd like to have substantially faster computers.
By 2020 affordable computers will match brains in terms of
raw computing power.
We think
the necessary selfimproving AI software will not lag far behind.
Is history about to converge?
 



Compare: J. Schmidhuber.
Celebrating 75 years of AI  History and Outlook: the Next 25 Years.
In Proc. 50th Anniversary of AI, p. 29-41, LNAI 4850, Springer, 2007.
arxiv.org/abs/0708.4311.
 



Compare:
J. Schmidhuber. The New AI is general and mathematically rigorous. Front. Electr. Electron. Eng. China
(DOI 10.1007/s11460-010-0105-z), 2010.
 