Turing Oversold

Abstract. Alan M. Turing made certain significant contributions to computer science. However, their importance and impact are often greatly exaggerated, at the expense of the field's pioneers. It's not Turing's fault, though.

Partially redundant companion articles of 2021: (1) 90th anniversary celebrations, 1931: Kurt Gödel, founder of theoretical computer science, shows limits of math, logic, computing, and artificial intelligence.^{[GOD21,a,b]} (2) 80th anniversary celebrations, 1941: Konrad Zuse completes the first working general-purpose computer, based on his 1936 patent application.^{[ZUS21,a,b]} (3) 375th birthday of Gottfried Wilhelm Leibniz, founder of computer science.^{[LEI21,a,b]}

We need to stop overselling the contributions of individual scientists at the cost of their peers. Instead, let's heed the recent call in the journal Nature: let us "value those who ensure that science is self-correcting."^{[SV20]}

As those who know me can testify, finding and citing the original sources of scientific and technological innovations is important to me.^{[NASC19][HIN][T21]} The present article is offered as a resource for those who share this inclination. Here I will focus on foundational concepts of computer science that are sometimes erroneously attributed to the British mathematician Alan M. Turing, especially in the Anglosphere. He did make certain very significant contributions to the field. But they have been greatly inflated by various sources, as I shall outline below.^{[VAR13][HAI14][NASC47][T20]}

For example, it has been claimed that Turing founded computer science.^{[BEA]} He didn't.^{[NASC45][HAI14][GOD21][ZUS21,a,b][LEI21,a,b]} Even the peer-reviewed British journal Nature previously published overblown claims such as: Turing's 1936 paper provided the "theoretical backbone" for all computers to come.^{[NASC7]} It didn't. A popular British movie^{[IMI]} even went so far as to say he invented the computer (see also^{[COP11]}).
He didn't.^{[VAR13][HAI14][ZUS21,a,b]} A prominent historian wrote:^{[HAI14]} "I could fill many columns doing nothing more than skewering ridiculous things written about Turing, many of them by people who ought to know better."

A good entry point to this discussion is ACM's official justification^{[T19]} of the 2018 A.M. Turing Award, which says: "The ACM A.M. Turing Award, often referred to as the 'Nobel Prize of Computing,' carries a $1 million prize, with financial support provided by Google, Inc. It is named for Alan M. Turing, the British mathematician who articulated the mathematical foundation and limits of computing." Although ACM's statement on Turing is not plainly wrong, it is greatly misleading, like some of its other statements.^{[T21]} ACM correctly states that Turing "articulated the mathematical foundation and limits of computing." However, many have done this over the decades, and when it comes to credit assignment in science, the important question is: who did it first? It wasn't Turing. Turing published five years after the groundbreaking work of the Austrian mathematician Kurt Gödel (1931)^{[GOD][GOD21,a,b]} and one year after the American Alonzo Church (1935),^{[CHU]} Turing's advisor. Of course, he cited both of them in his 1936 paper^{[TUR]} (whose corrections appeared in 1937).

With that in mind, let us look more closely at the birth of modern computer science. In the early 1930s, Gödel became the founder of modern theoretical computer science.^{[GOD][GOD34]} He introduced a universal coding language (1931–34)^{[GOD][GOD34][GOD21,a,b]} based on the integers. It allows for formalizing the operations of any digital computer in axiomatic form. Gödel used it to represent both data (such as axioms and theorems) and programs^{[VAR13]} (such as proof-generating sequences of operations on the data).
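To make the idea of such integer-based coding concrete, here is a minimal Python sketch in the spirit of Gödel numbering. It is a simplified illustration, not Gödel's exact 1931 scheme, and the function names are mine: a sequence of positive symbol codes becomes a single integer (the i-th prime raised to the i-th code), and factorization recovers the sequence.

```python
def primes(n):
    """Return the first n primes (naive trial division)."""
    ps, candidate = [], 2
    while len(ps) < n:
        if all(candidate % p for p in ps):
            ps.append(candidate)
        candidate += 1
    return ps

def godel_encode(symbol_codes):
    """Encode a sequence of positive integers as one integer:
    the i-th prime raised to the power of the i-th code."""
    g = 1
    for p, c in zip(primes(len(symbol_codes)), symbol_codes):
        g *= p ** c
    return g

def godel_decode(g):
    """Recover the sequence by reading off prime exponents.
    Assumes every code in the original sequence was >= 1."""
    codes, p = [], 2
    while g > 1:
        e = 0
        while g % p == 0:
            g //= p
            e += 1
        codes.append(e)
        p += 1  # advance to the next prime
        while any(p % q == 0 for q in range(2, int(p ** 0.5) + 1)):
            p += 1
    return codes

seq = [3, 1, 4, 1]            # e.g., codes of symbols in a formula
n = godel_encode(seq)          # 2**3 * 3**1 * 5**4 * 7**1 = 105000
assert godel_decode(n) == seq  # data recoverable from one integer
```

The point is that a formula, an axiom, or an entire proof-generating instruction sequence can all live in the same integer-based storage.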
He famously constructed formal statements that talk about the computation of other formal statements, especially self-referential statements which imply that they are not decidable, given a computational theorem prover that systematically enumerates all possible theorems from an enumerable set of axioms. Thus he identified fundamental limits of algorithmic theorem proving, computing, and any type of computation-based AI.^{[GOD][BIB3][MIR](Sec. 18)[T21](Sec. IV)} Much of early AI in the 1940s–70s was actually about theorem proving^{[ZU48][NS56]} and deduction in Gödel style through expert systems and logic programming.

Gödel built on earlier work by Gottlob Frege^{[FRE]} (who introduced the first formal language, 1879), Georg Cantor^{[CAN]} (diagonalization trick, 1891), Thoralf Skolem^{[SKO23]} (who introduced primitive recursive functions, 1923), and Jacques Herbrand^{[GOD86]} (who identified limitations of Skolem's approach). These authors in turn built on the formal Algebra of Thought (1686) by Gottfried Wilhelm Leibniz,^{[L86][WI48][LEI21,a,b]} which is deductively equivalent^{[LE18]} to the later Boolean Algebra of 1847.^{[BOO]} Leibniz, one of the candidates for the title of "father of computer science," has been called "the world's first computer scientist"^{[LA14]} and even "the smartest man who ever lived."^{[SMO13]} He was not only the first to publish infinitesimal calculus (1684),^{[L84]} but also the first to describe principles of binary computers (1679) governed by punch cards.^{[L79][L03][LA14][HO66]} (Binary number encodings per se are much older than that, dating back to ancient Egypt. The algorithmic part on binary arithmetic operations is relatively new, though.
Compare Juan Caramuel y Lobkowitz's publication on binary encodings (1670) and Thomas Harriot's unpublished papers.^{[IN08][SH51]}) Moreover, Leibniz pursued an ambitious and highly influential project to answer all possible questions through computation, with the help of a universal language and a general calculus for reasoning: the Characteristica Universalis & Calculus Ratiocinator^{[WI48][LE18]} (inspired by the 13th-century scholar Ramon Llull^{[LL7]}). In the early 1930s, however, Gödel's famous result showed the limitations of Leibniz's project.^{[GOD21]}

In 1935, Church derived a corollary/extension of Gödel's result by demonstrating that Hilbert & Ackermann's Entscheidungsproblem (decision problem) does not have a general solution.^{[CHU]} To do this, he used his alternative universal coding language called the Untyped Lambda Calculus, which forms the basis of the highly influential programming language LISP. In 1936, Turing introduced yet another universal model, which has become perhaps the best-known of them all (at least in computer science): the Turing Machine.^{[TUR]} With it, he re-derived the above-mentioned result.^{[CHU][TUR][HIN]} Of course, he cited both Gödel and Church in his 1936 paper^{[TUR]} (whose corrections appeared in 1937). In the same year of 1936, Emil Post published yet another independent universal model of computing,^{[POS]} also citing Gödel and Church. Today we know many such models. Church proved that his model had the same expressive power as Gödel's. Nevertheless, according to Wang,^{[WA7496]} it was Turing's work (1936) that convinced Gödel of the universality of both his own approach (1931–34) and Church's (1935).
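To illustrate just how spartan Turing's model is, here is a minimal Python sketch of a Turing machine simulator. This is an illustrative toy, not code from any of the original papers; the rule format and names are my own. It runs the classic two-state "busy beaver," which writes four 1s on an initially blank tape and halts after six steps, using nothing but the elementary read/write/move instructions the text describes.

```python
def run_tm(rules, state="A", steps_limit=10_000):
    """Run a Turing machine given as {(state, symbol): (write, move, new_state)}
    on an initially blank (all-zero) two-way infinite tape.
    Returns the final tape (as a dict) and the number of steps taken."""
    tape, head, steps = {}, 0, 0
    while state != "HALT" and steps < steps_limit:
        symbol = tape.get(head, 0)             # read the cell under the head
        write, move, state = rules[(state, symbol)]
        tape[head] = write                     # write one bit
        head += 1 if move == "R" else -1       # move one cell left or right
        steps += 1
    return tape, steps

# The two-state "busy beaver": writes four 1s, then halts after six steps.
BB2 = {
    ("A", 0): (1, "R", "B"), ("A", 1): (1, "L", "B"),
    ("B", 0): (1, "L", "A"), ("B", 1): (1, "R", "HALT"),
}
tape, steps = run_tm(BB2)
assert sum(tape.values()) == 4 and steps == 6
```

Every instruction here has constant cost, regardless of how much tape has been used, which is exactly the property that later made such machines convenient for complexity theory.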
Interestingly, Church's notion of computation already included termination, while Turing's did not; see also the famous halting problem, named by Davis in 1958^{[HLT58]} and formulated by Kleene in 1952.^{[HLT52][HLT21]} Like Gödel's original universal language of 1931–34, the Turing Machine and the Post Machine of 1936 are theoretical, impractical constructs that cannot directly serve as a foundation for real-world computers. Remarkably, Konrad Zuse's patent application^{[ZU36]} for the first practical general program-controlled computer also dates back to 1936 (see below).

What exactly did Turing^{[TUR]} and Post^{[POS]} do in 1936 that hadn't been done earlier by Gödel^{[GOD][GOD34]} (1931–34) and Church^{[CHU]} (1935)? There is a seemingly minor difference whose significance emerged only later. Many of Gödel's instruction sequences were series of multiplications of number-coded storage contents by integers. Gödel did not care that the computational complexity of such multiplications tends to increase with storage size. Similarly, Church ignored the context-dependent spatio-temporal complexity of the basic instructions in his algorithms. Turing and Post, however, adopted a traditional, reductionist, minimalist, binary view of computing. Their machine models permitted only very simple elementary binary instructions with constant complexity, like the early binary machine model of Leibniz (1679)^{[L79][LA14][HO66]} and Zuse's 1936 patent application.^{[ZU36]} They did not exploit this: Turing used his (quite inefficient) model only to rephrase the results of Gödel and Church on the limits of computability. Later, however, the simplicity of these machines made them a convenient tool for theoretical studies of complexity. (I also happily used and generalized them for the case of never-ending computations.^{[ALL2]})

Some sources claim that Turing at least laid the foundations of Artificial Intelligence.^{[BEA]} Does this make any sense?
In 1948, Turing wrote up ideas related to artificial evolution^{[TUR1]} and learning artificial neural networks, whose architectures date back at least to 1943^{[MC43]} (but see closely related prior work in physics since the 1920s^{[L20][I25][K41][W45]}). Turing did not publish this work, though, which explains its lack of impact. In 1950, he proposed a simple but famous subjective test for evaluating whether a computer is intelligent. In 1956, at a conference in Dartmouth, the expression "AI" was coined by John McCarthy as a new label for related research. The first conference on AI, however, had already taken place in 1951 in Paris,^{[AI51][BRU4][BRO21]} when much of what's now called AI was still called Cybernetics, with a focus very much in line with modern AI based on deep neural networks.^{[DL12][DEC]}

Even long before that, in 1914, the Spaniard Leonardo Torres y Quevedo had become the 20th century's first pioneer of practical AI when he built the first working chess endgame player (back then, chess was considered an activity restricted to the realm of intelligent creatures). The machine was still considered impressive decades later, when the AI pioneer Norbert Wiener played against it at the 1951 Paris conference.^{[AI51][BRU14][BRO21]} Konrad Zuse, however, had more general chess routines already in 1945.^{[KNU]} (He also applied his pioneering Plankalkül programming language to theorem proving in 1948,^{[ZU48]} long before Newell & Simon's work of 1956.^{[NS56]}) As mentioned above, the pioneer of modern AI theory was Gödel himself (1931–34), who identified the limits of AI, math, and computing, and laid formal foundations of AI based on automatic theorem proving and deduction through expert systems (some erroneously thought he also showed that humans are superior to AIs^{[BIB3]}).

In sum, the foundational achievements in AI greatly predate Turing's. The Gödel Prize for theoretical computer science is named after Gödel. The currently more lucrative ACM A.
M. Turing Award was created in 1966 for contributions "of lasting and major technical importance to the computer field." It is funny, and at the same time embarrassing, that Gödel (1906–1978) never got one, although he not only laid the foundations of the "modern" version of the field, but also identified its most famous open problem, "P=NP?", in his famous letter to John von Neumann (1956).^{[GOD56][URQ10]} Neither did Church (1903–1995). There would have been plenty of time, though: these pioneers died years after the award was introduced. Likewise, Konrad Zuse (1910–1995) never got a Turing Award, despite having created the world's first working programmable general computer in 1935–41.

His above-mentioned patent application of 1936^{[ZU3638][Z36][RO98][ZUS21,a,b]} described the digital circuits required by programmable physical hardware, predating Claude Shannon's 1937 thesis on digital circuit design.^{[SHA37]} Zuse also created the first high-level programming language in the early 1940s.^{[BAU][KNU]} Zuse's Z3 computer of 1941 was a working practical device, not just a theoretical, impractical pen & paper construct like those of Gödel (1931–34), Church (1935), Turing (1936), and Post (1936). Ignoring the inevitable storage limitations of any physical computer, the physical hardware of Z3 was indeed universal in the modern sense of the theory papers above: simple arithmetic tricks can compensate for its lack of an explicit conditional jump instruction of the type "IF ... THEN GOTO ADDRESS ..."^{[RO98]} (BTW, it is much more awkward to program Turing or Post machines, which also do not allow for "modern" conditional jumps; they do not even have numbered memory addresses to which an instruction pointer could jump.)

Where does Z3 fit in the history of computing hardware? The first known gear-based computational device was the Antikythera mechanism (a kind of astronomical clock), built in Ancient Greece over 2000 years ago.
About 1.5 millennia later, Peter Henlein still made conceptually similar machines, albeit smaller ones: the first miniaturized pocket watches (1505). But these devices always calculated the same thing, e.g., dividing minutes by 60 to get hours. The 1600s brought more flexible machines that computed answers in response to input data. The first data-processing gear-based special-purpose calculator for simple arithmetic was built in 1623 by Wilhelm Schickard, one of the candidates for the title of "father of automatic computing," followed by the superior Pascaline of Blaise Pascal (1642). In 1673, the aforementioned Leibniz designed the first machine (the step reckoner) that could perform all four arithmetic operations, and the first with a memory.^{[BL16]} He also described the principles of binary computers (1679)^{[L79][LA14][HO66][L03][LEI21,a,b]} employed by virtually all modern computers, including Zuse's Z3.

Z3 used electromagnetic relays with visibly moving switches. The first electronic special-purpose calculator (whose moving parts were electrons too small to see) was the binary ABC (US, 1942) by John Atanasoff (the "father of tube-based computing"^{[NASC6a]}). Unlike the gear-based machines of the 1600s, the ABC used tubes; today's machines use the transistor principle patented by Julius E. Lilienfeld in 1925.^{[LIL12]} But unlike Z3, the ABC was not freely programmable. Neither was the electronic Colossus machine by Tommy Flowers (UK, 1943–45), used to break the Nazi code^{[NASC6]} (see below).

On the other hand, the concept of programs was well known by then. Perhaps the world's first programmable machine was an automatic theatre made in the 1st century^{[SHA7a][RAU1]} by Heron of Alexandria (who apparently also had the first known working steam engine, the Aeolipile). The energy source of his programmable automaton was a falling weight pulling a string wrapped around pins of a revolving cylinder.
Complex instruction sequences controlling doors and puppets for several minutes were encoded by complex wrappings. The 9th-century music automaton by the Banu Musa brothers in Baghdad^{[BAN][KOE1]} used pins on a revolving cylinder to store programs controlling a steam-driven flute; compare Al-Jazari's programmable drum machine of 1206.^{[SHA7b]} The first commercial program-controlled machines (punch-card-based looms) were built in France around 1800 by Joseph-Marie Jacquard and others, perhaps the first "modern" programmers, who wrote the world's first industrial software. In this context it seems worth pointing out the difference between programs and the more limited user-given input data of the 1600s mentioned above. Programs are instruction sequences stored on some medium, e.g., on punch cards, and can be run again and again, without human intervention. Over time, the physical objects required to store programs have become lighter. Ancient machines stored them on rotating cylinders; Jacquard stored them on cardboard; Zuse stored them on 35mm film; today we often store them using electrons and magnetizable material.

Jacquard's programs (around 1800) were not yet of the general-purpose kind. They inspired Ada Lovelace and her mentor Charles Babbage (UK, circa 1840). He planned but was unable to build a programmable general-purpose computer (only his non-universal special-purpose calculator led to a working 20th-century replica). Unlike Babbage, Zuse (1936–41) used Leibniz's principles of binary computation (1679)^{[L79][LA14][HO66][L03][LEI21,a,b]} instead of traditional decimal computation. This greatly simplified the hardware. The first general working programmable machine built by someone other than Zuse was Howard Aiken's still-decimal MARK I (US, 1944). The much faster decimal ENIAC by Eckert and Mauchly (1945/46) could be programmed by rewiring it. Today, however, most computers are binary, like Z3.
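As noted above, Z3 lacked an explicit conditional jump, yet simple arithmetic tricks can compensate for that. The following Python sketch conveys the flavor of such a trick; it illustrates the general idea only and is not Zuse's actual technique: a 0/1 flag multiplied into both alternatives selects one of two values without any IF/GOTO.

```python
def select(cond, a, b):
    """Branch-free selection: cond must be 0 or 1.
    Returns a if cond == 1, else b -- no conditional jump needed."""
    return cond * a + (1 - cond) * b

# "IF x >= 0 THEN y = x ELSE y = -x" expressed purely arithmetically:
def branchless_abs(x):
    nonneg = (x >= 0) * 1      # 0/1 flag computed by arithmetic/comparison
    return select(nonneg, x, -x)

assert branchless_abs(-7) == 7
assert branchless_abs(3) == 3
```

The same pattern generalizes: any two-way decision can be folded into arithmetic on both branches, which is why a machine without conditional jumps can still be universal in principle.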
Both data and programs were stored in electronic memory by the "Manchester Baby" (Williams, Kilburn & Tootill, UK, 1948)^{[COP15]} and the 1948 upgrade of ENIAC, which was reprogrammed by entering numerical instruction codes into read-only memory.^{[HAI14b]} Already in 1936–38, however, Zuse may have been the first to suggest putting both program instructions and data into memory.^{[ZU3638]} It has been pointed out that none of the computers built during the 1940s were influenced in any way by Turing's 1936 theoretical paper, except perhaps his own ACE design.^{[HAI14]}

We note once more that Gödel's formal model of 1931–34^{[GOD][GOD34]} also encoded/stored data (e.g., axioms), programs (sequences of operations on the data), and results (e.g., theorems) in the same integer-based storage (now known as Gödel numbering), just like Turing and Post later stored them in bit strings. Of course, the behavior of any Turing machine or Post machine or any other digital computer can be formalized in Gödel's original universal model (this inspired my self-referential Gödel Machine^{[GM6]}). It should be noted, however, that we are using modern terminology here: neither Gödel (1931) nor Church (1935) nor Turing (1936) mentioned the term "program" in their papers (albeit Zuse's 1936 patent application frequently referred to a "Rechenplan,"^{[ZU36]} which means "program"). Similarly, the term "stored program" first appeared later, in the context of electronic storage.^{[HAI14b]}

Turing published pioneering work in bioinformatics.^{[TUR2]} However, his greatest impact probably came through his contribution to cracking the Enigma code,^{[NASC7]} used by the German military during the Second World War. He worked with Gordon Welchman at Bletchley Park in the UK. The famous code-breaking Colossus machine,^{[NASC6]} however, was designed by Tommy Flowers (not by Turing).
The British cryptographers built on earlier foundational work by the Polish mathematicians Marian Rejewski, Jerzy Różycki, and Henryk Zygalski, who were the first to break the Enigma code (none of them were even mentioned in the movie^{[IMI]}). Some say this was decisive for defeating the Third Reich.^{[NASC7]}

To summarise: many have contributed to the theory and practice of computing. Nevertheless, Turing's contributions were significant, although he was standing on the shoulders of giants.^{[GOD][GOD3421a][CHU][HAI14][VAR13][BRU14][NASC47][LEI21,a,b][ZUS21,a,b]} His famous 1936 paper diligently cites the pioneering work of Gödel (1931) and Church (1935). It seems unlikely that the great scientist he was would ever have approved of the overblown claims about him that so easily dismiss the work of his colleagues.
