A pre-history of Artificial Intelligence

Sreshta Putchala
7 min read · Oct 7, 2020

The desire to create intelligent beings can be traced back to Greek mythology. Hephaestus, son of Hera, regularly constructed humanlike creations in his forge. As a present from Zeus to Europa, a human princess, he created Talos out of bronze to guard and defend the island of Crete, where Europa lived. His most famous creation is Pandora, made at the behest of Zeus, who wanted to punish humankind for accepting Prometheus’s gift of fire. Pandora is sent to Earth with a casket she is forbidden to open; overcome by curiosity, she opens it anyway and releases the world’s evils. Pygmalion, disenchanted with human women, is said to have carved Galatea out of ivory and fallen in love with his creation; Aphrodite, the goddess of love, obliged him by breathing life into this human-made woman. Such stories, in which the creations of men come alive, abound in literature.

One of the earliest mechanical contraptions was built by Heron of Alexandria in the 1st century AD: water-powered mechanical ducks that emitted realistic chirping sounds. Gradually, as metal-working skills improved, the line between fact and fiction blurred.

Medieval progress

The Europeans were not as affluent as they are now, and many were eking out a living and protecting themselves from the harsh winters. Science and art were the forte of wealthy aristocrats, and scientists had begun to gain the liberty to pursue independent philosophical and scientific investigations. Daedalus, most famous for his artificial wings, was said to have created artificial people. In medieval Europe, Pope Sylvester II (946–1003) is reputed to have made a statue with a talking head, with a limited vocabulary and a penchant for predicting the future. It answered queries with a yes or a no, and its human audience did not doubt that some impressive mental activity preceded each answer.

Arab astrologers had constructed a thinking machine called the zairja. The zairja fired the imagination of Ramon Lull (1232–1315), a missionary of great religious zeal, who decided to build a Christian version called the Ars Magna. The Ars Magna was constructed using a set of rotating discs and aimed “to bring reason to bear on all subjects and, in this way, arrive at the truth without the trouble of thinking or fact-finding.”

This period also marks the emergence of elaborate clocks decorated with animated figures, which helped establish the belief that learned men kept artificial servants. In the sixteenth century, the rabbi Judah ben Loew (1520–1609) is said to have created an artificial man, the golem Joseph, who could be properly instructed only by the rabbi. Earlier in the century, Paracelsus (1493–1541), a physician by profession, is reputed to have created a homunculus, or little man.

The art of mechanical creatures flourished in the following years. The Archbishop of Salzburg built a working model of an entire miniature town, operated by water power. In 1644, the French engineer Isaac de Caus designed a menacing metal owl and a set of smaller birds that chirped around it, except when the owl was looking at them. In the eighteenth century, Jacques de Vaucanson (1709–1782) of Paris constructed a mechanical duck that could bend its neck, move its wings and feet, and eat and digest food. Vaucanson himself was careful not to make strong claims about the duck’s innards, asserting that he was only interested in imitating the larger aspects. This approach has often been echoed in artificial intelligence; chess-playing machines, for example, do not make decisions the way human players do.

A creation of doubtful veracity was the chess automaton, the Turk, created by the Hungarian Baron Wolfgang von Kempelen (1734–1804). The gadget was demonstrated in many courts in Europe with great success. The automaton played good chess, but that was because there was a diminutive man inside the device.

Fiction took the upper hand once again, and Mary Shelley (1797–1851), an acquaintance of Lord Byron (1788–1824), wrote the classic horror novel, Frankenstein. The character Dr. Frankenstein creates a humanoid that turns into a monster.

The word robot, which means a worker in Czech, first appeared in the 1921 play Rossum’s Universal Robots (R.U.R.) by Karel Čapek (1890–1938), apparently at the suggestion of his brother Josef (Levy, 2008). It is derived from the word robota, meaning forced labor.

The robots in Čapek’s play were machines resembling humans in appearance, made to do their masters’ bidding; in the play, the robots rebel and destroy the human race. The prospect of human creations going haywire has long haunted many people.

Isaac Asimov, who took up the baton of writing about (fictional) artificial creatures, formulated the Three Laws of Robotics:

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

These were laws in the civic sense that Asimov said should be hardwired into robots’ “brains.” A zeroth law was subsequently added: a robot may not injure humanity, or, by inaction, allow humanity to come to harm.

Computational Backdrop

Artificial intelligence (AI) came into being as soon as digital computers were built. But even before the electronic machine came into existence, Gottfried Leibniz (1646–1716) and Blaise Pascal (1623–1662) had explored the construction of mechanical calculating machines, and Charles Babbage (1791–1871) had designed the first stored-program machine. The ability to store and manipulate symbols raised the possibility of a machine doing so intelligently and autonomously, and AI became a lodestar for the pioneers of computing.

Charles Babbage (1791–1871), son of a wealthy banker, is considered the father of computing. In 1822, he constructed a small machine, the Difference Engine. His more ambitious project, the Analytical Engine (Menabrea, 1842), however, never found the funding it deserved. The Analytical Engine’s distinctive feature was that it introduced the stored-program concept. The first physical realization of a stored-program computer was the EDSAC, built in Cambridge a century later. Menabrea’s sketch was translated by Lady Ada Augusta (1815–1852), daughter of Lord Byron, who worked closely with Babbage; the programming language Ada is named after her. She famously wrote that “The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform.” She did, however, recognize the possibility of representing other things in the “calculating machine,” including letters of the alphabet, notes in music, or even pieces on a chessboard. Ada Lovelace is considered the first computer programmer in the world!

Even while Charles Babbage was designing a mechanical computer, his collaborator Lady Ada Lovelace (1815–1852) had prudently observed that a machine can only do what it is programmed to do. This was perhaps a foreshadowing of the debates that would accompany the foray into artificial intelligence, in which many people asserted that AI’s goal was to mimic human intelligence.

“The fundamental goal of this research is not merely to mimic intelligence or produce some clever fake. AI wants the genuine article: machines with minds” — John Haugeland (1985).

Haugeland also says that he would have preferred Synthetic Intelligence, since for some people the term artificial intelligence has a connotation of not being real. Other names suggested include Applied Epistemology, Heuristic Programming, Machine Intelligence, and Computational Intelligence. But it is Artificial Intelligence that has stuck, both within the scientific community and in the popular imagination.

The modern era!

The field of AI is the culmination of a long series of efforts to build sophisticated machinery in Europe over the last few centuries, along with advances in philosophy, mathematics, and logic in the scientific community. These efforts heralded the next stage in the development of computing in the century that followed!

The term “Artificial Intelligence” was coined by John McCarthy. Along with Marvin Minsky and Claude Shannon, he organized the Dartmouth Conference in 1956. The conference was based on the conjecture “that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.”

The above goal set the tone for the definitions of AI in the scholarly texts on the subject: Feigenbaum and Feldman in 1963, Nilsson in 1971, Newell & Simon in 1972, Raphael in 1976, Winston in 1977, Rich in 1983, and Charniak & McDermott in 1985. Some definitions focus on human intelligence, and some on hard problems.

  • We call programs ‘intelligent’ if they exhibit behaviors that would be regarded as intelligent if they were exhibited by human beings — Herbert Simon.
  • Physicists ask what kind of place this universe is and seek to characterize its behavior systematically. Biologists ask what it means for a physical system to be living. We (in AI) wonder what kind of information-processing system can ask such questions — Avron Barr and Edward Feigenbaum (1981).
  • AI is the study of techniques for solving exponentially hard problems in polynomial time by exploiting knowledge about the problem domain — Elaine Rich.
  • AI is the study of mental faculties through computational models — Eugene Charniak and Drew McDermott.

The books “Machines Who Think” by Pamela McCorduck (1979) and “AI: The Very Idea” by John Haugeland (1985) explore the historical and philosophical background in detail.

I will discuss the modern history of AI in my next blog.
