
1 Introduction

A previous paper suggested a historical model for analyzing the early development of computer science in universities [1]. In all the local cases studied, computer science stemmed from numerical analysis; more precisely, computing began as an ancillary technique of applied mathematics. This lasted until the early 1960s, when a cross-fertilization process began as different intellectual and socio-political agendas converged around a new “boundary object”, the computer, hybridizing into (arguably) a new “science” and, institutionally, into a new discipline: Computer science, or informatique.

The present paper will focus on two such intellectual agendas in the French post-war environment: Mathematical logic and machine translation, one motivated by fundamental queries, the other by practical concerns [Footnote 1]. It is based on archival research and oral history interviews, providing a detailed investigation of the case of France – a mid-size country where computers appeared a few years later than in Britain and the USA, which makes it inconspicuous regarding spectacular “firsts”, but perhaps historically more representative of the average emergence process of computer science.

This narrative differs markedly from the representation of computer science as an offspring of mathematical logic – as a development of the breakthroughs made during the 1930s by Kurt Gödel, Alonzo Church, Alan Turing and others in the theory of computability. While this representation holds true in a few important cases, particularly in some American universities [Footnote 2], the vast majority of people who built early computers or started to teach how to use them had hardly any knowledge of these concepts of mathematical logic. This problem has already been discussed by historians studying the pioneer countries, particularly Michael Mahoney, Thomas Haigh, Edgar G. Daylight and others [5–8]. On the spectrum of the different histories of computing, France constitutes a case where mathematical logic played no part at all in the early development of this technology.

I. “The desert of French logic”

Was the “Turing machine”, to sum up a common narrative, a decisive source of inspiration for the designers of electronic computers? Faced with such an assumption, a historian spontaneously relates it to the more general linear model of innovation, a mental model which spread after the second world war, stressing the role played by basic science in the development of revolutionary technologies (such as atomic energy), and which later became hotly controversial as other actors highlighted the role played in the same innovations by engineering and incremental progress. Meditating on the vision of the abstract “Turing machine” materializing into hardware between 1936 and 1949, a theologian could even read it as a secular version of the Christian process by which the Word became Flesh. To remain in the computing realm, I tend to consider this model a founding myth of computer science, an a posteriori reconstruction rather than an accurate historical account. It holds true only for theoretical computer science, which blossomed from the 1960s on.

Research suggests a late encounter rather than a filiation between logic and computing. As in most other countries, computing emerged in a few French universities in the 1950s as an ancillary technique of applied mathematics, mainly of numerical analysis, to answer the needs of electrical engineering, fluid mechanics and aeronautics. In the science faculties of Grenoble and Toulouse, then of Lille and Nancy, and at the CNRS’ Institut Blaise Pascal in Paris, small teams of applied mathematicians and electronic engineers endeavoured to get unreliable vacuum-tube calculators to process algorithms written in binary code or assembler: Their concerns were far removed from the high abstractions of mathematical logic.

Of course we must distinguish between several branches of mathematical logic. Boolean and propositional algebra was taught and used as soon as the first binary, digital calculators were developed in French companies and laboratories, around 1950. At the Bull company, engineers specialized in circuit theory and design were commonly called “logicians”. A British logician, Alan Rose, published from 1956 on several notes in the Comptes rendus de l’Académie des sciences de Paris: In 1956 on propositional calculus, and in 1959 on an “ultrafast” calculator circuit [9, 10]. Binary circuit logic was common knowledge among computer designers by 1960, and would soon be implemented in CAD software based on graph theory, pioneered in France by Claude Berge.

Things went differently with the theories of computability and recursive functions, which had “revolutionized” mathematical logic in the 1930s but remained almost ignored in France until the mid-1950s, and did not seem to interact with computing until the early 1960s. The present paper aims to describe their progressive reception (particularly of the Turing machine concept) through individual trajectories and institutional developments.

In the beginning, we should rather talk of a non-reception. A specific feature of the French mathematical scene was that logic had nearly disappeared since Jacques Herbrand’s premature death in 1931. Moreover, it was banned from mathematics by the Bourbaki group and relegated to philosophy [11]. Erring “in the desert of French logic” was the feeling of a doctoral student desperately seeking a supervisor in this field around 1950 [12]. Until 1954, the Comptes rendus de l’Académie des sciences, a veritable mirror of French academic research, contained no mention whatsoever of recursive functions or computability theory. The same goes for the specialized mathematical periodicals, including university journals.

Only a couple of savants, Jean-Louis Destouches and Paulette Février, worked on the logical foundations of physics [13, 14]. Février also published translations of foreign logicians with whom she had friendly relations (E.W. Beth, Hao Wang & Robert McNaughton, A. Robinson, A. Tarski [15]), in a collection of books she directed at a Paris publishing house, and organized a series of international conferences: Applications scientifiques de la logique mathématique (1952) [16], Les Méthodes formelles en axiomatique (logique mathématique), Le Raisonnement en mathématiques (1955), etc. Thanks to her, research in logic remained present in France, at least as an imported product [Footnote 3].

Note that Alan Turing himself was familiar with France and had visited the country repeatedly in the 1930s and after the war, yet he seems to have had no contact with French mathematicians [18]. Only three mentions of the Turing machine appeared in France in the first half of the 1950s, with little or no apparent effect. Let us review them briefly.

In January 1951, at the CNRS international conference on calculating machines, a delegate from the British National Physical Laboratory, F. M. Colebrook, introduced his presentation of the ACE computer by mentioning Turing’s paper of 1936 – “a most abstract study which was in no regard a preview of modern automatic digital calculators”, yet one which had attracted the interest of the NPL director [19] (Colebrook headed the construction of the ACE computer initially designed by Turing at NPL). This mention raised no visible echo in the 600 pages of the conference proceedings, nor in the memory of the participants. There is no hint, in the CNRS archives, that Turing himself was invited at all. In short, this considerable cybernetics meeting established no link between theories of computability and calculating machines.

More important perhaps, at the end of the same year, the Bourbaki seminar invited a German-French-Israeli logician, Dov Tamari, to speak about “Machines logiques et problèmes de mots” (logical machines and word problems) [Footnote 4]. Tamari described the Turing machine and remarked that the term was misleading – it was essentially a logical scheme representing a simplified “ideal calculating man”. It belonged to pure mathematics and offered a new perspective on algorithms. Yet Tamari noticed that Turing’s theory might have a “possible application in the field of calculating machines”. In short, these two glimpses of the Turing machine were very far from presenting it assertively as the model for modern computers. And Tamari’s lectures received no visible echo in the French mathematical community.

The Frenchman most likely to grasp the implications of Turing’s discoveries, François-H. Raymond, an electronics engineer with a deep mathematical culture, heard of Turing only after his company had designed its first computers. Let us focus for a moment on this micro-case, in the light of a recently discovered volume of technical reports. After the war, Raymond had been deeply impressed by the EDVAC report of von Neumann and Goldstine, and had created a start-up company, the Société d’électronique et d’automatisme (SEA), to develop computing and automation devices. In November 1949, he wrote an internal note, the first sketch of a stored-program machine in France [21]. This note briefly described the architecture of a computer, reproduced von Neumann’s table of order codes and gave an example of a numerical application in this code. Anecdotally, it was at about that time that von Neumann, travelling in Europe, paid a visit to the SEA.

This study was developed over the following years in a set of technical reports, exploring solutions for the design of a big computer, CUBA, which would eventually be installed five years later in the French Army’s central laboratory. A young mathematician, Claude Lepage, who had attended the CNRS conference of 1951 (was he the only listener who caught the mention of Alan Turing?), was commissioned to imagine principles of programming. Starting with reports from the von Neumann and EDSAC teams, Lepage compared the merits of the different programming methods, and embarked on “rationalizing” them to elaborate a better one [22].

By 1952, Lepage mastered the topic well enough to propose exploring new computer structures, still in a dialogue with the work conducted at Princeton and Cambridge (UK) [Footnote 5]. His aim was to escape path dependency (to use the vocabulary of present-day historians of technology [Footnote 6]): computers were not bound to follow the old organisation model of computing bureaus or of office machines, “as early automobiles conserved the silhouette of the horse carriage.” If only because their field of application is much wider: “There is a net change when we consider the machine from a general informational point of view, that is as a device made for transmitting, after a transformation, a certain quantity of information.” This was the first time Lepage went beyond his point of view as a mathematician designing a calculator. In this report, Lepage considered the problem from the fundamentally logical point of view, that of the machine “of Professor Thuring” (sic), which he described briefly by citing the 1936 article on computable numbers: “a device which circulates and transforms words (collections of a finite number of symbols belonging to a denumerable set)”, words which present two sorts of properties, those linked to the state and those linked to the location.
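
Lepage’s summary – a device whose behaviour depends only on an internal state and on the symbol at the current location – matches the modern textbook definition. As a purely illustrative aside (the transition table below is invented for this edition, not taken from the SEA reports), the two sorts of properties he distinguished can be sketched in a few lines of Python:

# A minimal Turing machine: behaviour depends only on the internal
# state and on the symbol at the head's current location.
def run_turing_machine(tape, rules, state="start", blank="_"):
    """Run until the halting state; return the final tape as a string."""
    cells = dict(enumerate(tape))  # sparse tape: location -> symbol
    head = 0
    while state != "halt":
        symbol = cells.get(head, blank)
        # Each rule maps (state, symbol) -> (next state, symbol written, move)
        state, written, move = rules[(state, symbol)]
        cells[head] = written
        head += 1 if move == "R" else -1
    return "".join(cells.get(i, blank) for i in range(min(cells), max(cells) + 1))

# Illustrative rules: scan rightward, inverting bits, halt on the first blank.
rules = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}
print(run_turing_machine("0110", rules))  # prints 1001_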

So Turing’s influence does appear here, but only through a paragraph in a technical report of a small company in a Parisian suburb – a very limited influence (I did not find anything similar in the archives of Bull, then the major French computer manufacturer). It did not intervene at the design stage of SEA’s first computers, but merely as an inspiration for a future program of investigation into possible architectures. And as an inspiration for a remarkable change of perception, from computers as calculators to information-processing automata, and from code to language, a change which Raymond made explicit in lectures in the mid-1950s in Paris and Milan [Footnote 7]. Strangely, the young mathematician who had found Turing’s paper so inspiring for his reflections soon disappeared from the nascent computing scene: Lepage wrote internal SEA reports until about 1957, then left the company, and no one knows what became of him afterwards. Still, the inspiration remained, and SEA never ceased to explore novel computer architectures over the next decade.

2 The Mid-1950s: A Revival of Mathematical Logic

The first paper of importance dedicated at that time to automata theory and computability in France was presented in 1956 in Paris by a Swedish cybernetician, Lars Löfgren, at the International Conference on Automatic Control organised at CNAM by F.-H. Raymond [Footnote 8]. Löfgren then worked at the Stockholm Institute for Defense Research. His paper surveyed and discussed in particular the programmatic articles of A. Turing, “On computable numbers […]” (1936) and “Computing machinery and intelligence”, Mind (1950), of J. von Neumann, “The general and logical theory of automata” (1951), and of C.E. Shannon and J. McCarthy, Automata Studies (1956). Starting from the practical concerns of engineers regarding the limits of what is automatable, the reliability of circuits and coding errors, he explained the usefulness of these theories, which would become ever more necessary with the growing complexity of automatic systems. This paper, given in English at a grand conference gathering many French pioneers of the discipline, then published in 1959, seems to have had little echo in French-speaking territories, judging by the fact that no French publication ever quoted it. We can only suppose that this veritable introduction to automata theory was read, without being quoted, from 1959 on, and contributed to introducing these concepts into the culture of French informaticiens and automaticiens.

Yet mathematical logic had started a revival in 1955, when the Bourbakist Henri Cartan invited the Austrian-American logician Georg Kreisel to teach in Paris. At the same time, the Polish-American logician Alfred Tarski was enticed (perhaps through Paulette Février) to give a series of lectures at the Institut Henri Poincaré. Simultaneously, three French doctoral students – two mathematicians, Daniel Lacombe and Jean Porte, and a philosopher, Louis Nolin – dared to embrace this marginal matter. Let us introduce the first two, to catch a glimpse of their trajectories (we will meet the third later).

Daniel Lacombe graduated from the Ecole Normale Supérieure in 1945, and initially studied number theory and other mathematical themes well established in the French school. In 1955 he started to publish brief texts on recursivity [Footnote 9], likely under the influence of Kreisel, with whom he co-authored two papers. After a sabbatical year at the IAS in Princeton, he presented in 1960 a complete overview, “La théorie des fonctions récursives et ses applications” (75 pages), reviewing Gödel’s, Church’s and Herbrand’s theorems, Turing’s machine, Kleene’s works, etc. The only French author he quoted was Jean Porte, which confirms that there was no other. The introduction stressed that the theory of recursive functions was “à la base de la majorité des résultats intéressants obtenus en Logique mathématique au cours des trente dernières années” (at the basis of most of the interesting results obtained in mathematical logic during the last thirty years) – in other words, a paradigm in this branch of mathematics. This considerable article also mentioned briefly that the theory was useful for the formal representation of electronic calculators, which in turn stimulated reflections on the old, intuitive concept of calculation. Lacombe was not seeking to “sell” the theory to computer specialists; however, by publishing it in the Bulletin de la Société Mathématique de France, he could reach numerical analysts as well as pure mathematicians [30].

Jean Porte studied logic within philosophy, in which he graduated in 1941, then took up mathematics while taking part in the Resistance in the Toulouse region. In 1949 he joined the French statistics institute (INSEE), where he invented the catégories socio-professionnelles for the 1954 census – an essentially empirical work. Meanwhile Porte began research in mathematical logic and presented a paper on modal logic at the 1955 conference Le Raisonnement en mathématiques [31]. This conference marked a renaissance of mathematical logic in France, particularly as the French admitted that logic problems could be expressed in algebraic form and that mathematicians were interested in them [32]. In 1956 Porte presented “A simplification of Turing’s theory” at the first international cybernetics congress in Namur (Belgium) [33]. This paper reveals that at least one Frenchman had read the major works by Church, Curry, Gödel, Kleene, Post, Robinson, Rosenblum and Turing on computability, lambda-calculus and the theory of recursive functions. It is also interesting in that Porte was addressing a cybernetics audience, which still included computer specialists (who would soon keep cybernetics at bay as a set of vague speculations). Yet Porte’s conclusion mentioned no practical implication of these theories, even indirectly, which might have concerned them. On the contrary, he suggested that one should “reach an even higher level of abstraction than Turing’s machines”. If he talked to cyberneticians, it was from the balcony of the logicians’ ivory tower.

In 1958 he obtained a CNRS researcher position at the Institut Blaise Pascal in Paris, where another philosopher turned logician, Louis Nolin, had just been appointed to manage the computer pool. Porte and Nolin soon began writing programs for the institute’s Elliott 402 and IBM 650 computers. This was the first recorded interaction of logicians with electronic computers in France. Yet we have no clue as to what relationship, if any, they established between their research in logic and their practice as programmers.

Even if they did, they remained exceptions for several years. Computer experts were struggling with vacuum-tube circuits and magnetic drum problems, or focused on developing numerical analysis, so that computability theories made little sense to them. Their learned society, the Association Française de Calcul (AFCAL), created in 1957, reflected these concerns through its journal, Chiffres, and its first meetings, where computability theories remained invisible for several years.

As for mathematical logic, its intellectual status within mathematics remained low. In 1961 a bright young Normalien, Jean-Jacques Duby, had the fancy idea of choosing logic for his doctoral research under Lacombe’s supervision. “The head of mathematics at the Ecole normale supérieure, Cartan, was quite fond of me, but when he heard of this weird choice he became apoplectic and didn’t speak to me for weeks!” [Footnote 10] Cartan had invited Kreisel to Paris, but could not tolerate that a “real mathematician” among his protégés wandered into this backwater of algebra.

3 The 1960s: A Convergence with Computer Science

Things changed in the early 1960s, when a series of events manifested a convergence between logic and the nascent computer science.

In October 1961, IBM’s European education center at Blaricum (Netherlands) hosted a meeting on the Relationship Between Non-numerical Programming and the Theory of Formal Systems [Footnote 11]. The initiator was Paul Braffort, a mathematician with a broad curiosity ranging from logic to linguistics, formal poetry, song writing and private jokes [Footnote 12]. Braffort had created an analogue computing laboratory at the Commissariat à l’énergie atomique, near Paris, and now headed Euratom’s computer center in Brussels, for which he had ordered an IBM system. D. Hirschberg, then scientific advisor at IBM Belgium, courteously offered him the use of IBM’s facility at Blaricum for whatever meeting he wished to organize. Braffort seized the opportunity to gather logicians and computer scientists.

Several French computer scientists and logicians participated, mostly from Paris. Among the speakers, they heard Noam Chomsky and Marcel-Paul Schützenberger lecture on “The algebraic theory of context-free languages”, and John McCarthy present his vigorous manifesto, “A Basis for a Mathematical Theory of Computation”, which proclaimed the foundation of a new science of computation based on numerical analysis, recursive function theory and automata theory (Fig. 1).

Fig. 1. Meeting on the Relationship Between Non-numerical Programming and the Theory of Formal Systems (October 1961) at IBM’s European education center at Blaricum (Netherlands). P. Braffort & D. Hirschberg. 1st row: Paulette Février (pearl necklace), next to E.W. Beth, and half-masking P. Braffort. 2nd row: M.-P. Schützenberger, P. Dubarle, S.J. (Photo: courtesy of P. Braffort).

In June 1962, a mathematics conference held at the science faculty of Clermont-Ferrand included sessions on computing and on logic, the latter represented by a constellation of international stars – Tarski, Beth, Bernays, Rabin, etc. In his keynote address, René de Possel, the head of the Paris computing institute, the Institut Blaise Pascal, explained that mathematical logic, hitherto a field of pure speculation, had become useful to mathematics in general and to information processing in particular [Footnote 13]. De Possel stressed that von Neumann, “the first promoter of electronic computers”, was also a logician; and that, at a humbler level, programmers proved more efficient when they knew some logic – “to my great astonishment”, De Possel confessed (very likely with the examples of Porte and Nolin in mind). With von Neumann, Turing and others, a general theory of machines had emerged, which interested computer designers as well as users, and which was appearing in several new application fields. While attempts to make machines reason were still embryonic, ongoing work on machine translation, automatic documentation, artificial languages and their compilation revealed problems pertaining to mathematical logic and linguistics. “To the point that special courses in logic should be created for this purpose”, concluded De Possel.

Implicit in De Possel’s lecture was a questioning of old disciplinary categories. If even mathematical logic was becoming useful for a matter as technical as computing, what became of the established difference between “pure” and “applied” mathematics? This epistemological question was soon to have a practical side, too, as the CNRS was about to restructure its committee system, and a most controversial problem would arise: If pure and applied mathematics were reshuffled, where should computing go? Should computing be integrated into electronics, or into mathematics? Or should it have an evaluation committee of its own, like a full-fledged science? This problem would agitate the scientific community for a long decade [37].

At the second IFIP congress (Munich, August 1962), a session was devoted to “Progress in the logical foundations of information processing” – a topic not addressed at the first IFIP congress in Paris (1959). John McCarthy hammered home again the gospel he had preached at Blaricum a year earlier; and an engineer from Siemens, Heinz Gumin, explained why computer designers needed mathematical logic [38]. Among the French delegation (nearly 10% of the audience), at least a few listeners got the message.

Actually the message was already spreading in the French computing community through its learned society, AFCAL. In late 1961, at the AFCAL seminar on symbolic languages, Louis Nolin, who had attended the Blaricum meeting, gave a programmatic lecture. He recommended designing computer languages according to the axiomatic method established in mathematics – Algol being exemplary of this approach. And before building an algorithm, it was useful to determine first whether the function was effectively computable. For this, “computer scientists would be well advised to learn about the solutions elaborated 30 years ago by logicians” [Footnote 14]. This remark of Nolin’s, in a way, sums up my whole paper: After a long decade of tinkering, computer scientists in need of theoretical bases found them in the logicians’ work of the 1930s.

Louis Nolin had become De Possel’s assistant and chief programmer at the Institut Blaise Pascal, so he was in a good position to translate words into action. In the autumn of 1962, regular courses of “logic for programmers”, on the theories of computability and recursive functions, were introduced at graduate level in the computer science curriculum of the Paris faculty of science. A seminar was organized by J.-L. Destouches, assisted by Jean Porte, Daniel Lacombe and a third logician, Roland Fraïssé. Meanwhile, Paulette Février published a translation of A. Grzegorczyk’s classic treatise on recursive functions, and created within the Institut Blaise Pascal a collection of brochures explicitly titled “Logic for the calculator’s use”: Reprints of journal articles, seminar and course texts, and doctoral dissertations in logic were thus made available beyond the tiny circle of French logicians.

From 1963 on, logic was firmly established in the computer science curriculum at the University of Paris’ Institut de Programmation and at the CNRS’ Institut Blaise Pascal. Besides its intellectual interest for programmers, outlined by Nolin and others, the adoption of logic had an institutional motivation: Computing teachers needed to build course programs out of more formal matters than Fortran training or the physical description of machines, and logic responded perfectly to this quest.

This coincided with changes in research topics. Now equipped with more powerful and more reliable second-generation computers, researchers could address new “crucial problems” – problems likely to shape a scientific discipline: Language compilation, algorithmic complexity, computability, structures of information. Seeking theoretical models, they found them in logic, as well as in other branches of algebra and in formal linguistics. Reciprocally, logicians could use computers, for example to test proof procedures.

Other universities followed gradually. Grenoble was practically in phase with Paris, although on a smaller scale, as logic was taught by an astronomer turned linguist, Bernard Vauquois. Vauquois had defended a doctoral thesis in astrophysics, but devoted his deuxième thèse to the “arithmetization of logic and theory of machines”, and had thus read works by Alan Turing and John von Neumann on computability, logic and formal languages [Footnote 15]. In 1959 he was put in charge of a laboratory for machine translation and became the first French member of the Algol committee. While Vauquois soon turned completely to machine translation, he still introduced basic notions and references of mathematical logic into the Grenoble computer science curriculum. The cross-fertilization between various scientific fields in Grenoble in the mid-1960s is well exemplified by the prehistory of the Prolog language, as told by one of its participants [43]: The synergy between two projects – Algol compiling and natural language processing – led young researchers to absorb a wealth of recent international publications on syntax analysis, W-grammars, graph theory, recursive functions and lambda-calculus. This effervescent exploration of new avenues meshed with another rising movement, artificial intelligence and automatic theorem proving, and later led to Prolog and to a novel conception of algorithmics, directly based on mathematical logic [Footnote 16].

Jean-Jacques Duby, whom we have seen at odds with Cartan at the Ecole normale supérieure, persevered for a while in logic. Lacombe gave him a paper just published by Hao Wang, who had written a computer program that mechanically proved theorems of mathematical logic from Whitehead and Russell’s Principia Mathematica [46], and Duby undertook to write LISP programs to solve automatically the exercises of Alonzo Church’s textbook. Using the big IBM 7090 at IBM France to this end, he caught the attention of Benoit Mandelbrot, who headed a scientific unit within IBM Corp. at Yorktown Heights, and soon joined IBM. Duby never completed his doctorate, yet switched to programming languages and systems, and ended up heading a computer science laboratory jointly created by IBM and the University of Grenoble in 1967. He was the first French computer scientist trained in all branches of mathematics except numerical analysis [47] [Footnote 17].

Soon after Grenoble, other faculties where computing science remained firmly rooted in mathematics joined this convergence movement, particularly Nancy, Lille and Clermont, in conjunction with research on the Algol language [47].

In 1966, the Ministry of National Education defined a new, nationwide master’s diploma, the Maîtrise d’informatique, including a certificate of “Algebra, mathematical logic, compiler and system theory” [48]. Logic thus switched status, from a marginal intellectual topic to a subdiscipline within an academic curriculum – which in turn required the universities to train and hire logicians.

Boosted by this interaction with an expanding new discipline, mathematical logic flourished again in French universities at the end of the decade. Reciprocally, the alliance between logicians and computer practitioners was a decisive factor in the assertion of computing as a new science. This dynamism was further reinforced by the convergence with another discipline in eruption: Linguistics.

II. From Machine Translation to Computational Linguistics

While advances in logic responded initially to fundamental queries, the machine translation projects which emerged in the 1950s were motivated mainly by practical concerns: How could scientists keep up with the growing flow of publications in different languages? And, even more vital in the context of the Cold War, how could the West gather intelligence on the scientific and technical efforts carried out in the Soviet bloc (and vice versa)? Electronic brains might provide a solution, both as documentary systems and as fast translators. Starting with a few ideas and experiments on both sides of the Atlantic from 1946 on, research on machine translation came to mobilize, by 1961, some thirty teams and 4 to 6 million dollars worldwide.

I will only give here a short, sketchy account of a story which is worth a book, and which has been analyzed from a linguist’s point of view by Jacqueline Léon [Footnote 18]. We will also leave aside, for another paper, the research efforts of pioneers of humanistic text processing (lexicography, etc.) and of other linguistic approaches.

French linguists in the 1950s were hardly more receptive to American structuralist explorations than mathematicians were to computability theories [Footnote 19]. The Société de Linguistique de Paris, largely dominated by Marxist savants, was more influenced by the Russian school of mathematical linguistics. On the rare occasions when they paid attention to the emerging theories of formal linguistics – to Z.S. Harris’ Methods in Structural Linguistics, and later to Chomsky’s revolutionary approach – they either criticized them sharply or misunderstood them, or both. If the first collective book on machine translation published in the USA was reviewed in France in 1957, it was not by a linguist but by Jean Porte, the logician turned programmer whom we met in the previous section [52]. In other words, the method, purpose and stakes of the formal linguistics developing across the Atlantic made little sense in the French linguists’ intellectual landscape, and were clearly at odds with their scientific agenda [53].

In this context, research on machine translation was initiated not by linguists but by (relative) outsiders in the late 1950s, when the establishment of De Gaulle’s administration favoured long-term policies, R&D investments and collaborations between academic, military and industrial scientists. The initiator was Emile Delavenay, who, as director of the Publications Service at UNESCO, was interested in machine translation and surveyed international advances in this field. In 1958 he created a working group, and soon an Association pour la traduction automatique des langues (ATALA). The founding congress of the International Federation of Information Processing Societies (IFIP), also held at UNESCO in June 1959, where there was much talk of machine translation, contributed to opening French computer specialists to this field and to other non-numerical applications.

Members of ATALA were a mix of linguists, mathematicians, computer experts and logicians, including a few military engineers and officers – about a hundred members by 1960. Its journal, La Traduction automatique, launched in 1960 and produced jointly by Bull and IBM France, was a vehicle for the diffusion of American linguistics and of formal language studies in France. Topics ranged from machine translation to automatic documentation and applied linguistics. Simultaneously, ATALA created a seminar of quantitative linguistics at the Institut Henri Poincaré, which also hosted the first computer of the University of Paris and the seminar of the Association Française de Calcul.

A convergence of interests between ATALA, the Army and the Centre National de la Recherche Scientifique (CNRS) led swiftly to the creation of two laboratories in 1959, funded jointly by the Defense and the CNRS under the common name of Centre d’Etudes de Traduction Automatique (CETA): One near Paris, within the Army’s Central Laboratory, under the command of a military engineer, Aimé Sestier; the other at the University of Grenoble, headed by the astronomer Bernard Vauquois. Both men, in addition to Delavenay, were soon appointed members of the CNRS’ linguistics committee, a decision which confirmed the desire of the CNRS directors to shake up the small French linguistics sphere.

Both laboratories hired or trained computer engineers to serve the machine, and language specialists – practitioners of Russian and other languages rather than academic linguists – to develop translation methods. Both hoped to rapidly develop techniques for translating Russian into French, in order to keep track of Soviet scientific and technical publications in real time, and to achieve operational results by 1965. The belief in the quick feasibility of machine translation rested at once on technoptimism, on computer engineers’ ignorance of linguistic constraints and peculiarities, and on the certainty that the Soviets were more advanced than the West, in machine translation as in missile matters.

However the two laboratories were soon to diverge.

At the University of Grenoble, as we have already mentioned, the team’s director, Bernard Vauquois, had acquired out of personal curiosity a culture in mathematical logic, and had learned to program scientific calculations for his doctoral dissertation in astrophysics. His arrival as professor in Grenoble reinforced the university’s computer science curriculum, where he introduced basic notions of mathematical logic and of the theory and practice of formal languages, particularly of Algol.

While Vauquois did not do research in these fields himself, he supervised doctoral students who explored the crossroads between them and began to establish a discipline of programming distinct from numerical analysis. He soon turned his own research completely toward machine translation. His approach was based on the development of a pivot language (langage-pivot), which would function as an intermediary between source and target languages. Note the similarity with the Universal Computer Oriented Language (UNCOL), projected in 1958 by a working group of SHARE and the Association for Computing Machinery, which aimed at “translating” programs written in high-level languages into machine code [26, p. 60]. Vauquois worked on the hypothetical analogy between translation and compilation, a key issue in programming in the early 1960s.
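
The appeal of the pivot architecture is easy to state: With N source and M target languages, direct translation requires N × M distinct translators, whereas a pivot requires only N analyzers and M generators. A deliberately naive sketch in Python (the word tables are invented for illustration and have nothing to do with CETA’s actual linguistic models):

# Toy illustration of the pivot-language (langage-pivot) architecture:
# N analyzers into the pivot plus M generators out of it,
# instead of N * M direct source-to-target translators.

# Analyzers: source word -> pivot symbol (invented examples).
TO_PIVOT = {
    "ru": {"mashina": "MACHINE", "perevodit": "TRANSLATE"},
    "en": {"machine": "MACHINE", "translates": "TRANSLATE"},
}

# Generators: pivot symbol -> target words.
FROM_PIVOT = {
    "fr": {"MACHINE": "la machine", "TRANSLATE": "traduit"},
    "en": {"MACHINE": "the machine", "TRANSLATE": "translates"},
}

def translate(sentence, source, target):
    """Translate word by word through the pivot representation."""
    pivot = [TO_PIVOT[source][word] for word in sentence.split()]
    return " ".join(FROM_PIVOT[target][symbol] for symbol in pivot)

print(translate("mashina perevodit", "ru", "fr"))  # -> la machine traduit

The difficulty that sank early machine translation lies, of course, precisely in what this sketch assumes away: Defining a pivot representation adequate for real languages.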

Throughout the decade, Vauquois’ team resisted the growing waves of criticism from linguists, particularly from the new generation of computational linguists, who objected that automating the translation process would require considerable basic research in linguistics before yielding any convincing practical result. The Army’s support lasted until 1967, while CNRS funding would be maintained over the following decades, giving the CETA ample time to adjust its theoretical models and to produce results justifying its survival.

The Paris team, installed at the Army’s Central Laboratory (Laboratoire central de l’Armement) in Montrouge, comprised a mixture of young military engineers trained at the Ecole Polytechnique and academics, of numerical analysts and “linguists” – or rather specialists of a given language, particularly Russian. It was headed by a military engineer, Aimé Sestier, who had pioneered scientific computing on the first stored-program computer developed in France, CUBA, and had taken courses in numerical analysis and programming in Grenoble. His center hosted men who worked not only on ballistics and operations research, but also on coding, cryptography and literary analysis, and it was ready to answer the Defense’s need for machine translation.

The collaboration with the Grenoble team soon proved limited, due at once to an ill-conceived division of labor between the two centres (morphology/syntax), to the difference in theoretical models, and to the incompatibility of their computers (an IBM 650 in Paris, a Bull Gamma ET in Grenoble).

Moreover, the naive technoptimism of the beginnings led to cruel disappointments. Testing ideas on the computer, and criticism from linguists, revealed that human translation was a subtle, complex process, much more difficult to automate than expected, if one wanted to do better than automatic dictionaries. In 1959–1960, the US National Science Foundation entrusted Y. Bar-Hillel, a logician, with an assessment mission on machine translation experiments; he stressed the theoretical fragility of these projects [54]. The Bar-Hillel report was a devastating evaluation and triggered drastic cuts in machine translation budgets in America and elsewhere – its conclusions being confirmed five years later by the ALPAC report.

In 1962, having read the Bar-Hillel report, and after three years of intense work, Sestier decided at once to terminate his machine translation enterprise and to refocus his laboratory on mainstream scientific computing. Most members of the team left for the university or the CNRS. For several polytechnicians interested in research, it was an opportunity to switch to an academic career, first at the Institut Blaise Pascal.

A young military engineer, Jacques Pitrat, took on the research program on artificial intelligence born from the speculations of Alan Turing and the Dartmouth meeting of 1956. He studied formal systems and aspired to invent a theorem prover, an “artificial mathematician” in his words, a project to which he eventually devoted his doctorate and the rest of his scientific life [Footnote 20]. Pitrat left military R&D and joined the Institut Blaise Pascal as a CNRS researcher, bringing with him logic problems linked with automatic theorem proving and artificial intelligence. In 1966 he defended a doctoral thesis on a prover of theorems and meta-theorems, the first French doctorate in AI. Jacques Pitrat, Paul Braffort and others interacted on artificial intelligence research at Euratom in Brussels and in a “Leibniz” seminar at Ispra (Italy), leading to a book by Braffort [56] – arguably the first book with “artificial intelligence” in its title [Footnote 21].

Another polytechnician, Maurice Gross, had switched from mathematics to linguistics in 1961, when he went to the USA with a UNESCO grant to study at MIT, where he attended Noam Chomsky’s course, and at the University of Pennsylvania, where he obtained his PhD under Zellig Harris. It was Maurice Gross who had brought the Bar-Hillel report to his boss, Sestier. Personal reflection and the Bar-Hillel report convinced both men that machine translation belonged to engineering and had to be separated from basic research in formal linguistics. Gross reinvested all his personal passion and his former training as a “hard scientist” into linguistics, a field in which he was soon recognized internationally. This move coincided with the creation of several academic teams of linguists interested in computational linguistics, particularly in Paris and in Nancy.

Back in Paris, now a CNRS researcher at the Institut Blaise Pascal, Gross met three remarkable men with whom he established a long-lasting scientific friendship: Together they interwove computing, algebra, logic and linguistics on the Parisian intellectual scene. Marcel-Paul Schützenberger, a biologist and mathematician who had written a seminal paper with Noam Chomsky [59], introduced automata theory in France and was a natural leader in the creation of a French school of theoretical computer science. Jean-Claude Gardin, a navy officer turned archeologist after the war, pursued two intellectual agendas: To formalize reasoning in the social sciences, close to Pitrat’s artificial intelligence projects; and to develop methods for automatic documentation, practically and theoretically, which led him to create two laboratories and to develop a specific programming language, Syntol (automatic documentation and information retrieval were another research field motivated by practical concerns, which revealed new, fundamental problems). André Lentin was an algebraist interested in formal grammars, with whom Gross wrote a treatise which soon became a classic [60].

These new knowledge objects, theories and problems circulated rapidly in the effervescent intellectual atmosphere and academic expansion of the 1960s. By the mid-1960s they were introduced into the nascent curriculum in computer science, particularly at the University of Paris’ Institut de Programmation. Simultaneously, these men “lobbied” the governmental agency which defined French science policy and awarded research contracts accordingly, the DGRST, so that about one fifth of the funding for computer science, hitherto mainly devoted to machine architecture, technology and numerical analysis, was reoriented to support research on programming languages, compilation, formal linguistics and automatic documentation. Beyond the Blaise Pascal and Henri Poincaré institutes, Schützenberger, Gardin, Gross and Lentin taught these matters in every institution which invited them – the chair of numerical analysis at the Sorbonne, the center for quantitative linguistics created at the Sorbonne by Prof. Jean Favard, the chair of computer science at the University of Toulouse, the University of Pennsylvania, etc. And in the universities where Gross and his friends became professors: Aix-en-Provence, Paris-Vincennes, and finally Paris 7-Diderot, where this invisible, but hardly inconspicuous, college of theoretical computer scientists/linguists finally settled in the 1970s (Fig. 2).

Fig. 2. Convergence of linguistic theories and software issues. Perceiving common structures between different phenomena was a founding process of research in computer science. A typical example was the similarity between the translation of natural languages and the compilation of programming languages, as charted here by René Moreau, a military officer turned linguist and chief computer scientist at IBM [61, p. 45].

4 Conclusion

This story may be summed up in terms of timing and receptivity. During the fifteen years following the end of the war, French mathematicians and linguists pursued intellectual agendas in which the theories of computability developed in other countries since the 1930s, or the algebraization of linguistics, made little or no sense, and thus could not be integrated. It was not a case of “conservatism vs. progress”, but a typical case of different professional groups being “differently progressive” (to use politically-correct jargon).

Then, within a short period in the early 1960s, sweeping changes occurred at an accelerated pace: Mathematical logic became a topic of several publications and doctoral dissertations by French scholars, and met with the growing need of computer experts for theoretical models; formal linguistics became paradigmatic for a fraction of linguists and for researchers interested in programming languages and information structures. This sudden receptivity was largely due to the general expansion of French research and higher education under the Gaullist regime, which favoured the arrival of a new generation of scientists (although men like Lentin and Schützenberger were already professors in the 1950s); and to the progress of computing techniques and capabilities, which allowed for the broadening of applications, particularly non-numerical, while requiring a better understanding of what computing was.

However interesting these conceptual investigations were, it was only the political pressure from a strong socio-economic demand that supported their institutionalization and allowed them to participate in the construction of a new discipline. Reciprocally, they brought a formalized substance to computing techniques which, alone, would never have been able to rise to such an academic status.

Computing was not the first technology to develop long before it received its proper theory. Similar situations had occurred many times in the past, particularly with the steam engine, which inspired thermodynamics, or with electron tubes, whose physical principles were fully understood only after the second world war, in which they had served by the millions, just as transistors appeared to replace them. These were cases where, in Kuhnian terms, a technical revolution converged with radically new theories to build a paradigm, a disciplinary matrix; yet the emergence of computer science resulted from a convergence of intellectual agendas whose diversity was unprecedented.