AI Story in the 20th Century — Part 2

Sreshta Putchala
7 min read · Oct 9, 2020

The genesis of Robotics!

In the 1960s, the first autonomous robot, the Hopkins Beast, was built at Johns Hopkins University. The requirement of being standalone meant that it could not have a built-in computer, which in those days would have been the size of a small room. It was battery-driven, and its main goal was to keep its batteries charged. It ambled along the department’s corridors, occasionally stopping to plug itself into an electric outlet to “feed” itself.

The first robot that could deliberate and act was built at the Stanford Research Institute over six years starting in 1966. It was named Shakey, apparently owing to “his” unsteady posture. A large computer nearby (there were no small ones) analyzed the data sensed by Shakey’s TV camera, range finder, and bump sensors, accepted commands in natural language typed into its console, and generated and monitored its plans. In 2004, Shakey was inducted into Carnegie Mellon University’s Robot Hall of Fame, along with Honda’s ASIMO; Astro Boy, the Japanese animated robot with a soul; C-3PO, a character from the Star Wars series; and Robby the Robot from MGM’s Forbidden Planet. The Mars Pathfinder Sojourner rover, Unimate, R2-D2, and HAL 9000 had formed the first set of inductees the preceding year. Sony’s robotic dog AIBO and the industrial robotic arm SCARA joined them in 2006. Research in robotics is thriving, commercial development has become feasible, and we may soon see robotic swarms carrying out search-and-rescue operations.

Of languages and approaches!

The first planning system, STRIPS (the Stanford Research Institute Problem Solver), was developed around the same time, in 1971, by Richard Fikes and Nils Nilsson. Until then, work in planning had adopted a theorem-proving approach. Planning is concerned with actions and change, and an important problem was keeping track of what does not change. This is the well-known Frame Problem, described by John McCarthy and Patrick Hayes in 1969. STRIPS sidestepped the problem by doing away with time altogether in its representation: it kept only the current state and made only those modifications that were the effects of actions. It was only later, in the last decade of the twentieth century, that time made an appearance again in planning representations, as bigger and faster machines arrived and other methods emerged. The Frame Problem is especially challenging in an open-world model, where other agencies may be at work. McCarthy’s Circumscription method, introduced in 1980, laid the foundation for default reasoning in a changing world.
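
To make the STRIPS representation concrete, here is a minimal sketch in Python. The toy blocks-world predicates and the pickup action are invented for illustration, not drawn from the original system: a state is simply a set of facts, and applying an action deletes and adds facts with no explicit notion of time.

```python
# Minimal sketch of the STRIPS idea, assuming a toy blocks-world action.
# A state is just a set of ground facts; an action has preconditions,
# an add list, and a delete list. Nothing else changes -- which is how
# STRIPS sidestepped the Frame Problem.

def applicable(state, preconditions):
    """An action may fire only if all its preconditions hold."""
    return preconditions <= state  # subset test on sets of facts

def apply_action(state, add_list, delete_list):
    """Effects of an action: remove the delete list, insert the add list."""
    return (state - delete_list) | add_list

# Illustrative pickup action: pick block A up off the table.
state = {"on_table(A)", "clear(A)", "hand_empty"}
pre = {"on_table(A)", "clear(A)", "hand_empty"}
add = {"holding(A)"}
delete = {"on_table(A)", "clear(A)", "hand_empty"}

if applicable(state, pre):
    state = apply_action(state, add, delete)
print(state)  # {'holding(A)'}
```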

One system that created quite an impression in 1970 was Terry Winograd’s natural language understanding system, SHRDLU. It was written in Micro-Planner, part of the Planner series of languages, which offered an alternative to logic programming by adopting a procedural approach to knowledge representation, while retaining elements of logical reasoning. SHRDLU could carry out a conversation about a domain of blocks. It would accept instructions like “pick up the green cone,” and if this was ambiguous, it would respond with “I don’t understand which cone you mean.” You could then say something like “the one on the red cube,” which it would understand. If you then told it to find a bigger one and put it in the box, it would check whether by “it” you meant the bigger one and not the one it was holding, and so on.

The optimism surrounding SHRDLU raised expectations of computer systems that would soon talk to people, but it was not to be. Language is too rich a domain to be handled by simple means. Interestingly, Winograd’s office mate and fellow research student at MIT, Eugene Charniak, had made the pertinent observation even then that the real task behind understanding language lies in knowledge representation and reasoning. This remains a dominant theme in language processing. Incidentally, Charniak’s implementations were also done in Micro-Planner.

Roger Schank of Yale University had a similar view. He did a considerable amount of work hand-coding knowledge into complex systems that could read stories and answer questions about them in an “intelligent” fashion. This effort peaked in the eighties with the program BORIS, written by Michael Dyer. People whose research approach was to build large working systems with lots of knowledge were sometimes referred to as “scruffies,” as opposed to the “neats,” who focused on logic and formal algorithms. But the effort of carefully crafting complex knowledge structures was becoming too much, and by then the pendulum in AI research was swinging towards “neat” general-purpose problem-solving methods. Machines were getting faster, neural networks were on the ascendant, and knowledge discovery approaches showed promise. A decade or so later, even language research would start taking on a statistical hue.

One of Schank’s ideas was that stories follow patterns, which he called Scripts, and that understanding requires knowledge of such pattern structures. Other people investigated structured knowledge representations as well. A well-known formalism is the theory of Frames, proposed by Marvin Minsky in 1974, which explored how knowledge exists in structured groups related to other frames by different kinds of relations. The idea of Frames eventually fed into the development of object-oriented programming systems. A related idea was that of Semantic Nets, proposed by Richard H. Richens and developed by Ross Quillian in the early sixties. John Sowa took up representing knowledge as networks of concepts and relations in his Conceptual Graphs in the eighties. WordNet, developed at Princeton University, is a valuable resource for anyone interested in getting at the knowledge behind lexical utterances.
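
As a rough illustration of the slot-and-filler idea behind Frames, here is a minimal sketch in Python. The frames, slots, and is-a chain are invented for illustration, not taken from any of the systems above; the inheritance walk is the mechanism frames share with object-oriented programming.

```python
# Minimal sketch of a frame system: slot-and-filler structures linked
# by "is-a" relations, with inheritance of default values.

frames = {
    "Bird":    {"is_a": None,      "slots": {"covering": "feathers", "moves_by": "flying"}},
    "Penguin": {"is_a": "Bird",    "slots": {"moves_by": "swimming"}},  # overrides the default
    "Tweety":  {"is_a": "Penguin", "slots": {}},
}

def get_slot(name, slot):
    """Walk up the is-a chain until a filler is found."""
    while name is not None:
        frame = frames[name]
        if slot in frame["slots"]:
            return frame["slots"][slot]
        name = frame["is_a"]
    return None

print(get_slot("Tweety", "moves_by"))  # swimming (inherited from Penguin)
print(get_slot("Tweety", "covering"))  # feathers (default from Bird)
```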

The program Dendral had brilliantly illustrated the importance of knowledge. Developed at Stanford University by Edward Feigenbaum and his colleagues in 1967, Dendral was designed to be a chemist’s assistant. It took the molecular formula of a chemical compound and its mass spectrogram, and searched through many possibilities to identify the molecule’s structure. It was able to do this effectively with the aid of large amounts of knowledge gleaned from expert chemists, resulting in performance that matched that of an expert chemist. Thus the idea of Expert Systems was born: educe domain-specific knowledge and put it in the machine. The preferred form of representation was rules, and we often refer to such systems as Rule-Based Expert Systems. A flurry of activity followed. MYCIN, a medical diagnosis system and the doctoral work of Edward Shortliffe, appeared in the early seventies. Its performance was rated highly by the Stanford Medical School, but it was never deployed, mainly because of the ethical and legal issues that could crop up. Another medical system was INTERNIST from the University of Pittsburgh. A system to help geologists prospect for minerals, Prospector, was developed by Richard Duda and Peter Hart. Buying a computer system was not the off-the-shelf process it is now and needed considerable expertise; a system called R1, later renamed XCON, was built at CMU in 1978 to help users configure DEC VAX systems.
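
To give a flavour of what a rule-based system does, here is a minimal sketch of forward chaining in Python. The rules and facts are invented for illustration; real systems like MYCIN also attached certainty factors to their conclusions, which this sketch omits.

```python
# Minimal sketch of forward chaining in a rule-based expert system:
# keep firing rules whose conditions hold until no new facts appear.

rules = [
    ({"fever", "rash"}, "suspect_measles"),
    ({"suspect_measles"}, "recommend_specialist"),
]

def forward_chain(facts):
    """Repeatedly apply rules until the set of facts stops growing."""
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)  # assert the rule's conclusion
                changed = True
    return facts

print(forward_chain({"fever", "rash"}))
# {'fever', 'rash', 'suspect_measles', 'recommend_specialist'}
```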

The main problem with the idea of Expert Systems was what is known as the knowledge acquisition bottleneck. Despite the elaborate interviewing protocols that researchers experimented with, domain experts were either unable or unwilling to articulate their knowledge in the form of rules. And, as in other domains, techniques that extracted performance directly from data were becoming more appealing. In the nineties, things began to change. The Internet was growing at a rapid pace, and automation was happening in all kinds of places. Getting data was the least of the problems; making sense of it was the real challenge.

The cusp of the 21st century!

There were other strands of investigation in AI research that had been, and still were, flourishing. Research areas like qualitative reasoning, non-monotonic reasoning, probabilistic reasoning, case-based reasoning, constraint satisfaction, and data mining were evolving. Robots were becoming more capable, and robotic football provided an exciting domain for integrating various ideas. The aging population in many advanced countries was motivating research into robotic companions and caregivers. John Laird had declared that the next level of computer games would be the killer application for artificial intelligence. Search engines were ferreting out information from websites in far-flung areas. Machines talking to each other across the world demanded advances in ontology, and Tim Berners-Lee had put forth the idea of the Semantic Web. NASA was deploying AI technology in deep space. Kismet at MIT was beginning to smile.

The last frontier is perhaps Machine Learning, the automatic acquisition of knowledge from data, which again seems a possibility given the increasing ability of machines to crunch through large amounts of data. But not, as Haugeland suggests in his book AI: The Very Idea, before we have solved the knowledge representation problem.

As we have seen, the juggernaut of AI has been lurching along. Like the search methods its algorithms embody, AI research has been exploring various avenues in search of the keys to building an intelligent system. Along the way there have been some dead ends, but there have been equally many, if not more, stories of achievement and progress, and many interesting and useful applications have been developed. Are these systems intelligent? Will we achieve artificial intelligence?

In my next blog, I will discuss why earlier efforts did not take off the way AI has today. Brace for the “perfect storm”!
