
Abstract. Alan M. Turing made certain significant contributions to computer science. However, their importance and impact are often greatly exaggerated, at the expense of the field's pioneers. It's not Turing's fault, though. Partially redundant companion articles of 2021: (1) 90th anniversary celebrations: Kurt Gödel, founder of theoretical computer science;[GOD21,a,b] (2) Konrad Zuse and the world's first working programmable general computer;[ZUS21,a,b] (3) Gottfried Wilhelm Leibniz, founder of computer science.[LEI21,a,b]
We need to stop overselling the contributions of individual scientists at the cost of their peers. Instead let's heed the recent call in the journal Nature: Let us "value those who ensure that science is self-correcting."[SV20] As those who know me can testify, finding and citing original sources of scientific and technological innovations is important to me.[NASC1-9][HIN][T20] The present article is offered as a resource for those who share this inclination.
Here I will focus on foundational concepts of computer science, sometimes erroneously attributed to the British mathematician Alan M. Turing, especially in the Anglosphere. He did make certain very significant contributions to the field. But they have been greatly inflated by various sources, as I shall outline below.[VAR13][HAI14][NASC4-7][T20]
For example, it was claimed that Turing founded computer science.[BEA] He didn't.[NASC4-5][HAI14][GOD21][ZUS21,a,b][LEI21,a,b] Even the peer-reviewed British journal Nature previously published overblown claims such as: Turing's 1936 paper provided the "theoretical backbone" for all computers to come.[NASC7] It didn't. A popular British movie[IMI] even went so far as to say he invented the computer (see also[COP11]). He didn't.[VAR13][HAI14][ZUS21,a,b] A prominent historian wrote:[HAI14] "I could fill many columns doing nothing more than skewering ridiculous things written about Turing, many of them by people who ought to know better."
A good entry point to this discussion is ACM's official justification[T19] of the 2018 A.M. Turing Award which says: "The ACM A.M. Turing Award, often referred to as the "Nobel Prize of Computing," carries a $1 million prize, with financial support provided by Google, Inc. It is named for Alan M. Turing, the British mathematician who articulated the mathematical foundation and limits of computing."
Although ACM's statement on Turing is not plain wrong, it is greatly misleading, like some of its other statements.[T20] ACM correctly states that Turing "articulated the mathematical foundation and limits of computing." However, many have done this over the decades, and when it comes to credit assignment in science, the important question is: Who did it first? It wasn't Turing.
Turing published five years after the groundbreaking work of the Austrian mathematician Kurt Gödel (1931)[GOD][GOD21,a,b] and one year after the American Alonzo Church (1935),[CHU] Turing's advisor. Of course, he cited both of them in his 1936 paper[TUR] (whose corrections appeared in 1937). With that in mind, let us look more closely at the birth of modern computer science.

In the early 1930s, Gödel became the founder of modern theoretical computer science: he introduced a universal coding language (1931-34).[GOD][BIB3][MIR](Sec. 18)[T20](Sec. IV)
Gödel built on earlier work by Gottlob Frege[FRE] (who introduced the first formal language, 1879), Georg Cantor[CAN] (diagonalization trick, 1891), Thoralf Skolem[SKO23] (who introduced primitive recursive functions, 1923) and Jacques Herbrand[GOD86] (who identified limitations of Skolem's approach). These authors in turn built on the formal Algebra of Thought (1686) by Gottfried Wilhelm Leibniz,[L86][WI48][LEI21,a,b] which is deductively equivalent[LE18] to the later Boolean Algebra of 1847.[BOO] Leibniz, one of the candidates for the title of "father of computer science," has been called "the world's first computer scientist"[LA14] and even "the smartest man who ever lived."[SMO13] He was not only the first to publish infinitesimal calculus (1684),[L84] but also the first to describe principles of binary computers (1679) governed by punch cards.[L79][L03][LA14][HO66] (Binary number encodings per se are much older than that, dating back to ancient Egypt. The algorithmic part on binary arithmetic operations is relatively new, though. Compare Juan Caramuel y Lobkowitz's publication on binary encodings (1670) and Thomas Harriot's unpublished papers.[IN08][SH51]) Moreover, Leibniz pursued an ambitious and highly influential project to answer all possible questions through computation, with the help of a universal language and a general calculus for reasoning: the Characteristica Universalis & Calculus Ratiocinator[WI48][LE18] (inspired by the 13th century scholar Ramon Llull[LL7]). In the early 1930s, however, Gödel's famous result showed the limitations of Leibniz' project.[GOD21]
In 1935, Church derived a corollary / extension of Gödel's result by demonstrating that Hilbert & Ackermann's Entscheidungsproblem (decision problem) does not have a general solution.[CHU] To do this, he used his alternative universal coding language called Untyped Lambda Calculus, which forms the basis of the highly influential programming language LISP.
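To give a flavor of what such a universal coding language looks like, here is a minimal sketch of Church numerals written in Python (my own illustrative encoding, not taken from Church's paper or from LISP): natural numbers are represented as higher-order functions, and addition and multiplication reduce to function composition.

```python
# Minimal sketch of Church numerals in Python, illustrating the flavor of
# Church's untyped lambda calculus (illustrative only; not from the cited sources).

# A Church numeral n takes a function f and returns the n-fold composition of f.
ZERO = lambda f: lambda x: x
SUCC = lambda n: lambda f: lambda x: f(n(f)(x))               # successor: apply f once more
ADD  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))  # m+n applications of f
MUL  = lambda m: lambda n: lambda f: m(n(f))                  # n-fold application, composed m times

def to_int(church):
    """Convert a Church numeral to an ordinary Python integer."""
    return church(lambda k: k + 1)(0)

TWO   = SUCC(SUCC(ZERO))
THREE = SUCC(TWO)
assert to_int(ADD(TWO)(THREE)) == 5
assert to_int(MUL(TWO)(THREE)) == 6
```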
In 1936, Turing introduced yet another universal model, which has become perhaps the most well-known of them all (at least in computer science): the Turing Machine.[TUR] He used it to rederive the above-mentioned result.[CHU][TUR][HIN] In the same year of 1936, Emil Post published yet another independent universal model of computing,[POS] likewise citing Gödel and Church. Today we know many such models. Church proved that his model had the same expressive power as Gödel's. Nevertheless, according to Wang,[WA74-96] it was Turing's work (1936) that convinced Gödel of the universality of both his own approach (1931-34) and Church's (1935).
Like Gödel's original universal language of 1931-34, the Turing Machine and the Post Machine of 1936 are theoretical and impractical constructs that cannot directly serve as a foundation for real-world computers. Remarkably, Konrad Zuse's patent application[ZU36] for the first practical general program-controlled computer also dates back to 1936 (see below).
What exactly did Turing[TUR] and Post[POS] do in 1936 that hadn't been done earlier by Gödel[GOD][GOD34] (1931-34) and Church[CHU] (1935)? There is a seemingly minor difference whose significance emerged only later. Many of Gödel's instruction sequences were series of multiplications of number-coded storage contents by integers. Gödel did not care that the computational complexity of such multiplications tends to increase with storage size. Similarly, Church also ignored the context-dependent spatio-temporal complexity of the basic instructions in his algorithms. Turing and Post, however, adopted a traditional, reductionist, minimalist, binary view of computing. Their machine models permitted only very simple elementary binary instructions with constant complexity, like the early binary machine model of Leibniz (1679)[L79][LA14][HO66] and Zuse's 1936 patent application.[ZU36] They did not exploit this—Turing used his (quite inefficient) model only to rephrase the results of Gödel and Church on the limits of computability. Later, however, the simplicity of these machines made them a convenient tool for theoretical studies of complexity. (I also happily used and generalized them for the case of never-ending computations.[ALL2])
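To make the complexity point concrete, the following toy simulator (a sketch of my own in Python, with an invented rule format rather than anything from the 1936 papers) executes Turing-machine-style transitions in which each step reads one cell, writes one cell, and moves the head one position; the cost of an elementary step stays constant no matter how long the tape grows, whereas multiplying a Gödel-number-like integer becomes more expensive as the encoded storage grows.

```python
# Toy Turing machine simulator (illustrative sketch, not from the cited papers).
# Each elementary step reads one cell, writes one cell, and moves the head by
# one position: constant cost per step, independent of how long the tape grows.

def run_tm(tape, rules, state="scan", blank="_", max_steps=10_000):
    tape = dict(enumerate(tape))          # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        symbol = tape.get(head, blank)
        if (state, symbol) not in rules:  # no applicable rule: halt
            break
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    # Read back the visited portion of the tape in order.
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# Rules for bitwise complement of a binary string: (state, read) -> (write, move, next state)
complement = {
    ("scan", "0"): ("1", "R", "scan"),
    ("scan", "1"): ("0", "R", "scan"),
}
assert run_tm("10110", complement) == "01001"
```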
Some sources claim that Turing at least laid the foundations of Artificial Intelligence.[BEA] Does this make any sense? In 1948, Turing wrote up ideas related to artificial evolution[TUR1] and to learning artificial neural networks, whose architectures date back at least to 1943[MC43] (but see closely related prior work in physics since the 1920s[L20][I25][K41][W45]). Turing did not publish this report, though, which explains its lack of impact. In 1950, he proposed a simple but famous subjective test for evaluating whether a computer is intelligent. In 1956, at a conference at Dartmouth College, John McCarthy coined the expression "AI" as a new label for related research. The first conference on AI, however, had already taken place in 1951 in Paris,[AI51][BRU4][BRO21] when much of what is now called AI was still called Cybernetics, with a focus very much in line with modern AI based on deep neural networks.[DL1-2][DEC]
Even long before that, in 1914, the Spaniard Leonardo Torres y Quevedo had become the 20th century's first pioneer of practical AI when he built the first working chess endgame player (back then, chess was considered an activity restricted to the realm of intelligent beings). The machine was still considered impressive decades later, when the AI pioneer Norbert Wiener played against it at the 1951 Paris conference.[AI51][BRU1-4][BRO21]
Konrad Zuse, however, had more general chess routines already in 1945.[KNU] (He also applied his pioneering Plankalkül programming language to theorem proving in 1948,[ZU48] long before Newell & Simon's work of 1956.[NS56]) As mentioned above, the pioneer of modern AI Theory was Gödel himself (1931-34), who identified the limits of AI & math & computing, and laid formal foundations of AI based on automatic theorem proving and deduction through expert systems (some erroneously thought he also showed that humans are superior to AIs[BIB3]). In sum, the foundational achievements in AI greatly predate Turing's.
The Gödel Prize for theoretical computer science is named after Gödel. The currently more lucrative ACM A. M. Turing Award was created in 1966 for contributions "of lasting and major technical importance to the computer field." It is funny—and at the same time embarrassing—that Gödel (1906-1978) never got one, although he not only laid the foundations of the "modern" version of the field, but also identified its most famous open problem, "P=NP?", in his 1956 letter to John von Neumann.[GOD56][URQ10] Neither did Church (1903-1995). There would have been plenty of time, though—these pioneers died years after the award was introduced.
Likewise, Konrad Zuse (1910-1995) never got a Turing Award despite having created the world's first working programmable general computer in 1935-41. His above-mentioned patent application of 1936[ZU36-38][Z36][RO98][ZUS21,a,b] described the digital circuits required by programmable physical hardware, predating Claude Shannon's 1937 thesis on digital circuit design.[SHA37] Zuse also created the first high-level programming language in the early 1940s.[BAU][KNU] Zuse's Z3 computer of 1941 was a working practical device, not just a theoretical and impractical pen & paper construct like those of Gödel (1931-34), Church (1935), Turing (1936), and Post (1936). Ignoring the inevitable storage limitations of any physical computer, the physical hardware of Z3 was indeed universal in the modern sense of the theory papers above—simple arithmetic tricks can compensate for its lack of an explicit conditional jump instruction of the type "IF ... THEN GOTO ADDRESS ..."[RO98] (BTW, it is much more awkward to program Turing or Post machines, which also do not allow for "modern" conditional jumps—they do not even have numbered memory addresses to which an instruction pointer could jump.) Where does Z3 fit in the history of computing hardware?
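Before turning to that question, the kind of arithmetic trick alluded to above can be sketched as follows (a schematic Python illustration of the general idea only, not Z3 code and not Rojas' exact construction[RO98]): a straight-line program without any conditional jump computes both alternatives and selects one of them by multiplying with a 0/1 flag.

```python
# Schematic illustration of an "arithmetic trick" replacing a conditional jump
# (a sketch of the general idea only; not Z3 code and not Rojas' exact construction).
# A straight-line program computes both branches and selects one via a 0/1 flag.

def branchless_if(condition_flag, value_if_true, value_if_false):
    """condition_flag must be 0 or 1; no IF/THEN or jump is needed."""
    return condition_flag * value_if_true + (1 - condition_flag) * value_if_false

# Example: absolute value without a conditional jump.
def branchless_abs(x):
    is_negative = (x < 0) * 1          # 0 or 1, derived arithmetically
    return branchless_if(is_negative, -x, x)

assert branchless_abs(-7) == 7
assert branchless_abs(3) == 3
```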
The first known gear-based computational device was the Antikythera mechanism (a kind of astronomical clock), built in Ancient Greece over 2000 years ago. About 1.5 millennia later, Peter Henlein still made conceptually similar machines—albeit smaller ones—namely, the first miniaturized pocket watches (1505). But these devices always calculated the same thing, e.g., divide minutes by 60 to get hours. The 1600s brought more flexible machines that computed answers in response to input data. The first data-processing gear-based special purpose calculator for simple arithmetic was built in 1623 by Wilhelm Schickard, one of the candidates for the title of "father of automatic computing," followed by the superior Pascaline of Blaise Pascal (1642).
In 1673, the aforementioned inevitable Leibniz designed the first machine (the step reckoner) that could perform all four arithmetic operations, and the first with a memory.[BL16] He also described principles of binary computers (1679)[L79][LA14][HO66][L03][LEI21,a,b] employed by virtually all modern computers including Zuse's Z3.
Z3 used electromagnetic relays with visibly moving switches. The first electronic special purpose calculator (whose moving parts were electrons too small to see) was the binary ABC (US, 1942) by John Atanasoff (the "father of tube-based computing"[NASC6a]). Unlike the gear-based machines of the 1600s, ABC used tubes—today's machines use the transistor principle patented by Julius E. Lilienfeld in 1925.[LIL1-2] But unlike Z3, ABC was not freely programmable. Neither was the electronic Colossus machine by Tommy Flowers (UK, 1943-45) used to break the Nazi code[NASC6] (see below).
On the other hand, the concept of programs was well-known by then. Perhaps the world's first programmable machine was an automatic theatre made in the 1st century[SHA7a][RAU1] by Heron of Alexandria (who apparently also had the first known working steam engine—the Aeolipile). The energy source of his programmable automaton was a falling weight pulling a string wrapped around pins of a revolving cylinder. Complex instruction sequences controlling doors and puppets for several minutes were encoded by complex wrappings. The 9th century music automaton by the Banu Musa brothers in Baghdad[BAN][KOE1] used pins on a revolving cylinder to store programs controlling a steam-driven flute—compare Al-Jazari's programmable drum machine of 1206.[SHA7b] The first commercial program-controlled machines (punch card-based looms) were built in France around 1800 by Joseph-Marie Jacquard and others—perhaps the first "modern" programmers who wrote the world's first industrial software.
In this context it seems worth pointing out the difference between programs and the more limited user-given input data of the 1600s mentioned above. Programs are instruction sequences stored on some medium, e.g., on punch cards, and can be run again and again without human intervention. Over time, the physical objects required to store programs have become lighter. Ancient machines stored them on rotating cylinders; Jacquard stored them on cardboard; Zuse stored them on 35mm film; today we often store them using electrons and magnetizable material.
Jacquard's programs (around 1800) were not yet of the general purpose kind. They inspired Ada Lovelace and her mentor Charles Babbage (UK, circa 1840). He planned but was unable to build a programmable, general purpose computer (only his non-universal special purpose calculator led to a working 20th century replica). Unlike Babbage, Zuse (1936-41) used Leibniz' principles of binary computation (1679)[L79][LA14][HO66][L03][LEI21,a,b] instead of traditional decimal computation. This greatly simplified the hardware. The first general working programmable machine built by someone other than Zuse was Howard Aiken's MARK I (US, 1944), which still used decimal arithmetic. The much faster decimal ENIAC by Eckert and Mauchly (1945/46) could be programmed by rewiring it. Today, however, most computers are binary like Z3.
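The claim that binary computation simplifies the hardware can be illustrated with a small sketch (my own Python illustration under the usual textbook model of logic gates, not a description of Z3's actual relay circuits): with only two digit values, addition at each bit position reduces to a handful of AND/OR/XOR operations, which map directly onto simple switching elements.

```python
# Sketch of why binary simplifies hardware (illustrative only): with just two
# digit values, an adder reduces to a few boolean operations per bit position,
# here written as a ripple-carry adder over bit lists (least significant bit first).

def full_adder(a, b, carry_in):
    """Add three bits using only AND/OR/XOR, as simple logic gates would."""
    s = a ^ b ^ carry_in
    carry_out = (a & b) | (carry_in & (a ^ b))
    return s, carry_out

def add_binary(x_bits, y_bits):
    length = max(len(x_bits), len(y_bits))
    x_bits = x_bits + [0] * (length - len(x_bits))
    y_bits = y_bits + [0] * (length - len(y_bits))
    result, carry = [], 0
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        result.append(s)
    return result + [carry]

# 6 + 3 = 9: bits are given least-significant-first.
assert add_binary([0, 1, 1], [1, 1, 0]) == [1, 0, 0, 1]
```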
Both data and programs were stored in electronic memory by the "Manchester Baby" (Williams, Kilburn & Tootill, UK, 1948)[COP15] and the 1948 upgrade of ENIAC, which was reprogrammed by entering numerical instruction codes into read-only memory.[HAI14b] Already in 1936-38, however, Zuse may have been the first to suggest putting both program instructions and data into memory.[ZU36-38] It has been pointed out that none of the computers built during the 1940s were influenced in any way by Turing's 1936 theoretical paper, except perhaps his own ACE design.[HAI14]
We note once more that Gödel's formal model of 1931-34[GOD][GOD34] also encoded/stored data (e.g., axioms), programs (sequences of operations on the data), and results (e.g., theorems) in the same integer-based storage (now known as Gödel numbering), just like Turing and Post later stored them in bit strings. Of course, the behavior of any Turing machine or Post machine or any other digital computer can be formalized in Gödel's original universal model (this inspired my self-referential Gödel Machine[GM6]). It should be noted, however, that we are using modern terminology here: neither Gödel (1931) nor Church (1935) nor Turing (1936) mentioned the term "program" in their papers (although Zuse's 1936 patent application frequently referred to a "Rechenplan,"[ZU36] which means "program"). Similarly, the term "stored program" first appeared later, in the context of electronic storage.[HAI14b]
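Returning to Gödel numbering: for readers unfamiliar with it, the following Python sketch (a simplified illustration, not Gödel's exact 1931 encoding) shows how a sequence of natural-number codes, standing for axioms, instructions, or theorems, can be packed into a single integer via prime exponents and recovered again, so that programs, data, and results all live in the same integer-based storage.

```python
# Simplified illustration of Goedel numbering (not Goedel's exact 1931 encoding):
# a sequence of natural-number codes (standing for symbols, instructions, or data)
# is packed into a single integer as p1^c1 * p2^c2 * ... and recovered by factoring.

def primes(n):
    """Return the first n primes (naive trial division; fine for a demo)."""
    found = []
    candidate = 2
    while len(found) < n:
        if all(candidate % p for p in found):
            found.append(candidate)
        candidate += 1
    return found

def encode(codes):
    number = 1
    for p, c in zip(primes(len(codes)), codes):
        number *= p ** c
    return number

def decode(number, length):
    codes = []
    for p in primes(length):
        exponent = 0
        while number % p == 0:
            number //= p
            exponent += 1
        codes.append(exponent)
    return codes

sequence = [3, 1, 4, 1, 5]           # e.g., number-coded instructions or axioms
g = encode(sequence)                 # 2^3 * 3^1 * 5^4 * 7^1 * 11^5
assert decode(g, len(sequence)) == sequence
```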
Turing published pioneering work in bioinformatics.[TUR2] However, his greatest impact probably came through his contribution to cracking the Enigma code,[NASC7] used by the German military during the Second World War. He worked with Gordon Welchman at Bletchley Park in the UK. The famous code-breaking Colossus machine,[NASC6] however, was designed by Tommy Flowers (not by Turing). The British cryptographers built on earlier foundational work by the Polish mathematicians Marian Rejewski, Jerzy Rozycki and Henryk Zygalski, who were the first to break the Enigma code (none of them were even mentioned in the movie[IMI]). Some say this was decisive for defeating the Third Reich.[NASC7]
To summarise, many have contributed to the theory and practice of computing. Nevertheless, Turing's contributions were significant, although he was standing on the shoulders of giants.[GOD][GOD34-21a][CHU][HAI14][VAR13][BRU1-4][NASC4-7][LEI21,a,b][ZUS21,a,b] His famous 1936 paper diligently cites the pioneering work of Gödel (1931) and Church (1935). It seems unlikely that a great scientist like him would ever have approved of the overblown claims about him that so easily dismiss the work of his colleagues.
Thanks to Jack Copeland for inviting me in May 2020 to write a piece about Alan Turing. Thanks to Moshe Vardi, Herbert Bruderer, Jack Copeland, Wolfgang Bibel, Teun Koetsier, Scott Aaronson, Dylan Ashley, Sebastian Oberhoff, Kai Hormann, and several other expert reviewers for useful comments on the contents of the four companion articles.[LEI21,a,b][GOD21,a,b][ZUS21,a,b][TUR21] Since science is about self-correction, let me know at juergen@idsia.ch if you can spot any remaining error. The contents of this article may be used for educational and non-commercial purposes, including articles for Wikipedia and similar sites. This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
[ALL2] J. Schmidhuber (2000). Algorithmic theories of everything. ArXiv: quant-ph/ 0011122. More. See also: Hierarchies of generalized Kolmogorov complexities and nonenumerable universal measures computable in the limit. International Journal of Foundations of Computer Science 13(4):587-612, 2002. PDF. More. See also: The Speed Prior: A New Simplicity Measure Yielding Near-Optimal Computable Predictions. Proc. COLT 2002. PDF. More.
[AI51] Les Machines a Calculer et la Pensee Humaine. Paris, 8-13 January 1951. Colloques internationaux du Centre National de la Recherche Scientifique, no. 37, Paris 1953. [H. Bruderer rightly calls this the first conference on AI.]
[BAN] Banu Musa brothers (9th century). The book of ingenious devices (Kitab al-hiyal). Translated by D. R. Hill (1979), Springer, p. 44, ISBN 90-277-0833-9. [Perhaps the Banu Musa music automaton was the world's first machine with a stored program.]
[BAU] F. L. Bauer, H. Woessner (1972). The "Plankalkül" of Konrad Zuse: A Forerunner of Today's Programming Languages.
[BEA] A. Beavers (2013). Alan Turing: Mathematical Mechanist. In S. B. Cooper, J. van Leeuwen (eds.). Alan Turing: His Work and Impact. Waltham: Elsevier. pp. 481-485.
[BIB3] W. Bibel (2003). Mosaiksteine einer Wissenschaft vom Geiste. Invited talk at the conference on AI and Gödel, Arnoldsheim, 4-6 April 2003. Manuscript, 2003.
[BL16] L. Bloch (2016). Informatics in the light of some Leibniz's works. Communication to XB2 Berlin Xenobiology Conference.
[BOO] George Boole (1847). The Mathematical Analysis of Logic, Being an Essay towards a Calculus of Deductive Reasoning. London, England: Macmillan, Barclay, & Macmillan, 1847.
[BRO21] D. C. Brock (2021). Cybernetics, Computer Design, and a Meeting of the Minds. An influential 1951 conference in Paris considered the computer as a model of—and for—the human mind. IEEE Spectrum, 2021. Link.
[BRU1] H. Bruderer. Computing history beyond the UK and US: selected landmarks from continental Europe. Communications of the ACM 60.2 (2017): 76-84.
[BRU2] H. Bruderer. Meilensteine der Rechentechnik. 2 volumes, 3rd edition. Walter de Gruyter GmbH & Co KG, 2020.
[BRU3] H. Bruderer. Milestones in Analog and Digital Computing. 2 volumes, 3rd edition. Springer Nature Switzerland AG, 2020.
[BRU4] H. Bruderer. The Birthplace of Artificial Intelligence? Communications of the ACM, BLOG@CACM, Nov 2017. Link.
[CAN] G. Cantor (1891). Ueber eine elementare Frage der Mannigfaltigkeitslehre. Jahresbericht der Deutschen Mathematiker-Vereinigung, 1:75-78. [English translation: W. B. Ewald (ed.). From Immanuel Kant to David Hilbert: A Source Book in the Foundations of Mathematics, Volume 2, pp. 920-922. Oxford University Press, 1996.]
[CHU] A. Church (1935). An unsolvable problem of elementary number theory. Bulletin of the American Mathematical Society, 41: 332-333. Abstract of a talk given on 19 April 1935, to the American Mathematical Society. Also in American Journal of Mathematics, 58(2), 345-363 (1 Apr 1936). [First explicit proof that the Entscheidungsproblem (decision problem) does not have a general solution.]
[COP11] B. J. Copeland, D. Proudfoot. "Alan Turing: father of the modern computer." Rutherford Journal 4, 2011-2012.
[COP15] B. J. Copeland, G. Sommaruga. The Stored-Program Universal Computer: Did Zuse Anticipate Turing and von Neumann? In G. Sommaruga, T. Strahm (eds.), Turing's Revolution, Springer, 2015.
[DEC] J. Schmidhuber (AI Blog, 2020). The 2010s: Our Decade of Deep Learning / Outlook on the 2020s.
[DL1] J. Schmidhuber, 2015. Deep Learning in neural networks: An overview. Neural Networks, 61, 85-117. More.
[DL2] J. Schmidhuber, 2015. Deep Learning. Scholarpedia, 10(11):32832.
[FRE] G. Frege (1879). Begriffsschrift: eine der arithmetischen nachgebildete Formelsprache des reinen Denkens. Halle an der Saale: Verlag Louis Nebert. [The first formal language / formal proofs—basis of modern logic and programming languages.]
[GM6] J. Schmidhuber (2006). Gödel machines: Fully Self-Referential Optimal Universal Self-Improvers. In B. Goertzel and C. Pennachin, eds.: Artificial General Intelligence, p. 199-226, 2006. PDF. Preprint arXiv:cs/0309048 (2003). See also: Ultimate Cognition à la Gödel. Cognitive Computation 1(2):177-193, 2009. PDF. More.
[GOD] K. Gödel. Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I. Monatshefte für Mathematik und Physik, 38:173-198, 1931.
[GOD34] K. Gödel (1934). On undecidable propositions of formal mathematical systems. Notes by S. C. Kleene and J. B. Rosser on lectures at the Institute for Advanced Study, Princeton, New Jersey, 1934, 30 pp. (Reprinted in M. Davis, (ed.), The Undecidable. Basic Papers on Undecidable Propositions, Unsolvable Problems, and Computable Functions, Raven Press, Hewlett, New York, 1965.)
[GOD56] R. J. Lipton and K. W. Regan. Gödel's lost letter and P=NP. Link.
[GOD86] K. Gödel. Collected Works, Volume I: Publications 1929-36. S. Feferman et al., editors, Oxford Univ. Press, Oxford, 1986.
[GOD21] J. Schmidhuber (AI Blog, 2021). 90th anniversary celebrations: 1931: Kurt Gödel, founder of theoretical computer science, shows limits of math, logic, computing, and artificial intelligence. This was number 1 on Hacker News.
[GOD21a] J. Schmidhuber (2021). Als Kurt Gödel die Grenzen des Berechenbaren entdeckte. (When Kurt Gödel discovered the limits of computability.) Frankfurter Allgemeine Zeitung, 16/6/2021.
[GOD21b] J. Schmidhuber (AI Blog, 2021). 90. Jahrestag: 1931: Kurt Gödel, Vater der theoretischen Informatik, entdeckt die Grenzen des Berechenbaren und der künstlichen Intelligenz. (90th anniversary: 1931: Kurt Gödel, father of theoretical computer science, discovers the limits of computation and artificial intelligence.)
[HAI14] T. Haigh (2014). Historical reflections. Actually, Turing did not invent the computer. Communications of the ACM, Vol. 57(1): 36-41, Jan 2014. PDF.
[HAI14b] T. Haigh, M. Priestley, C. Rope (2014). Reconsidering the Stored-Program Concept. IEEE Annals of the History of Computing. IEEE, 2014. PDF.
[HIN] J. Schmidhuber (AI Blog, 2020). Critique of 2019 Honda Prize.
[HO66] E. Hochstetter et al. (1966): Herrn von Leibniz' Rechnung mit Null und Eins. Berlin: Siemens AG.
[I25] E. Ising (1925). Beitrag zur Theorie des Ferromagnetismus. Z. Phys., 31 (1): 253-258, 1925.
[IN08] R. Ineichen (2008). Leibniz, Caramuel, Harriot und das Dualsystem. Mitteilungen der deutschen Mathematiker-Vereinigung. 16(1):12-15.
[IMI] The Imitation Game. Movie, U.K., 2014.
[K41] H. A. Kramers and G. H. Wannier (1941). Statistics of the Two-Dimensional Ferromagnet. Phys. Rev. 60, 252 and 263, 1941.
[KNU] D. E. Knuth, L. T. Pardo (1976). The Early Development of Programming Languages. Stanford University, Computer Science Department. PDF.
[KOE1] T. Koetsier (2001). On the prehistory of programmable machines: musical automata, looms, calculators. Mechanism and Machine Theory, Elsevier, 36 (5): 589-603, 2001.
[L20] W. Lenz (1920). Beiträge zum Verständnis der magnetischen Eigenschaften in festen Körpern. Physikalische Zeitschrift, 21: 613-615.
[L03] G. Leibniz (1703). In: Explication de l'Arithmetique Binaire / Die Mathematischen Schriften, ed. C. Gerhardt, Berlin 1879, vol. 7, p. 223. English link. [Leibniz documented the binary arithmetics which allow for greatly simplifying computing hardware and are employed by virtually all modern computers. Binary number encodings per se, however, seem to date back over 4000 years.]
[L79] G. Leibniz. De Progressione dyadica Pars I. 15 March 1679. [Principles of binary computers governed by punch cards.]
[L84] G. Leibniz (1684). Nova Methodus pro Maximis et Minimis. [First publication on infinitesimal calculus.]
[L86] G. Leibniz (1686). Generales Inquisitiones de analysi notionum et veritatum. Also in Leibniz: Die philosophischen Schriften VII, 1890, pp. 236-247; translated as "A Study in the Calculus of Real Addition" (1690) by G. H. R. Parkinson, Leibniz: Logical Papers—A Selection, Oxford 1966, pp. 131-144.
[LA14] D. R. Lande (2014). Development of the Binary Number System and the Foundations of Computer Science. The Mathematics Enthusiast, vol. 11(3):6 12, 2014. Link.
[LE18] W. Lenzen. Leibniz and the Calculus Ratiocinator. Technology and Mathematics, pp 47-78, Springer, 2018.
[LEI21] J. Schmidhuber (AI Blog, 2021). 375th birthday of Leibniz, founder of computer science.
[LEI21a] J. Schmidhuber (2021). Der erste Informatiker. Wie Gottfried Wilhelm Leibniz den Computer erdachte. (The first computer scientist. How Gottfried Wilhelm Leibniz conceived the computer.) Frankfurter Allgemeine Zeitung (FAZ), 17/5/2021. FAZ online: 19/5/2021.
[LEI21b] J. Schmidhuber (AI Blog, 2021). 375. Geburtstag des Herrn Leibniz, dem Vater der Informatik. (375th birthday of Mr. Leibniz, the father of computer science.)
[LIL1] US Patent 1745175 by Austrian physicist Julius Edgar Lilienfeld for work done in Leipzig: "Method and apparatus for controlling electric current." First filed in Canada on 22.10.1925. [The patent describes a field-effect transistor. Today, almost all transistors are field-effect transistors.]
[LIL2] US Patent 1900018 by Austrian physicist Julius Edgar Lilienfeld: "Device for controlling electric current." Filed on 28.03.1928. [The patent describes a thin film field-effect transistor.]