Despite the immense amount of this writing and talking, and despite the repetitive over-reporting on some aspects of AI, other aspects remain relatively obscure. One such aspect is its history.
Some readers may be surprised to learn how old the concept of AI is.
One early example comes from the literary world: in 1920 Czech playwright Karel Capek presented R.U.R.: Rossum’s Universal Robots, with a revised edition of the script appearing in 1921. Capek’s text explicitly considers many of the questions now being asked about AI. He introduced the word ‘robot’ into the world’s vocabulary and refined the concept of what a robot might be. His fictional version of a robot was made of synthetic flesh rather than transistors and steel. Concerning unemployment created by AI, one of the characters in Capek’s play says, “Yes, people will be out of work, but by then there’ll be no work left to be done. Everything will be done by living machines.” Later in the plot, another character observes: “They’ve ceased to be machines. They’re already aware of their superiority.” Capek’s robots have superior memory and superior physical strength. Although not addressed directly in the text, they presumably also have superior calculating ability. In the final line of the play, a robot who has led the revolution against human beings addresses his fellow robots: “Robots of the world! The power of man has fallen! A new world has arisen: the Rule of the Robots!”
In addition to raising some of the core questions about AI, Capek also exemplifies the murky boundary between science fiction and concrete technological development in the real world: on the one hand, fiction can sometimes be surprisingly accurate in foreshadowing actual technological developments; on the other hand, science fiction can foster preconceptions which make it difficult to accept new developments that diverge from those preconceptions.
A second early example is a famous academic paper titled “Computing Machinery and Intelligence,” written by Alan Turing and published in the journal Mind in October 1950 (Vol. LIX, No. 236). Turing was born in London and spent the final years of his life working in Manchester, England. The paper is widely read in university courses in psychology and philosophy. Turing describes how intelligence in a machine might manifest itself: “Intelligent behaviour presumably consists in a departure from the completely disciplined behaviour involved in computation, but a rather slight one, which does not give rise to random behaviour, or to pointless repetitive loops.” He also explores at length the concepts of training, teaching, and learning as they apply to AI, the very concepts that received so much attention as the first quarter of the twenty-first century drew to a close. He wrote: “An important feature of a learning machine is that its teacher will often be very largely ignorant of quite what is going on inside.” Further, “It will not be possible to apply exactly the same teaching process to the machine as to a normal child.”
Turing was 75 years ahead of his time. Discussions today, three quarters of a century after his paper appeared, include marking the boundaries between LLM, AI, AGI, and ASI. Among the central AI concepts he examines is the definition of ‘intelligence’ that will decide when those boundaries have been crossed. One option is to embrace a behaviorist psychology, which asserts that if the machine makes intelligent statements and behaves intelligently, then it is intelligent: intelligence is the appearance of intelligence. Turing’s writings produced the idea of the “Turing test,” in which the question is whether a computer can fool a human being: can a chatbot convince someone that it is a human?
A third early example is Joseph Weizenbaum’s program named ELIZA. Weizenbaum was born in Germany and spent the first thirteen years of his life there. After moving to the United States, he eventually did undergraduate and graduate work at Wayne State University in Detroit. Weizenbaum began designing the software that would become ELIZA around 1964, and ELIZA seems to have been functioning at a high level by 1966. ELIZA was what would later come to be known as a ‘chatbot’: it engaged with human beings in live, real-time conversations. It achieved some of its most significant results when it was programmed with a script which caused it to adopt the persona of a psychologist. (In the computer jargon of the time, ELIZA was a program that ran various “scripts,” which one might now call ‘data sets’, containing vocabulary and directions for using that vocabulary.) To test how the program operated, Weizenbaum invited his colleagues to have a ‘conversation’ with ELIZA using the DOCTOR script. Even though his colleagues knew that ELIZA was nothing more than a few lines of code, and even though they knew how ELIZA worked, they nonetheless entered into deep conversations, as if they were talking to a therapist, revealing details about their personal relationships and confiding about their life problems. They spent more time with ELIZA than Weizenbaum needed or had requested. One person even asked Weizenbaum to leave the room because the conversation contained private details. Weizenbaum worried about the future implications of conversational interactions between humans and computers.
The surprising results from ELIZA point to three topics in AI. First, even though the people knew that they were talking to an inanimate device, and that the responses were simply the result of plugging vocabulary words into paradigms, they nevertheless trusted the computer, shared confidential information with it, and bonded emotionally with ELIZA. Second, Weizenbaum was moved to speak about the eventual dangers of AI, given the unanticipated degree to which people trusted and confided in ELIZA. Third, the results offered a new perspective on Turing’s test: if a piece of software like ELIZA can convince a user that it is human, that may well not be enough evidence to conclude that ELIZA is intelligent. The code for ELIZA was relatively short and simple, even by 1966 standards.
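To make concrete how little machinery such a ‘script’ requires, the following short sketch in Python imitates the general keyword-and-template technique described above. It is purely illustrative: the patterns, responses, and pronoun reflections are invented stand-ins for a DOCTOR-style script, not Weizenbaum’s original code.

import random
import re

# A tiny ELIZA-style "script": keyword patterns paired with response templates.
# These rules are invented stand-ins, not Weizenbaum's DOCTOR script.
RULES = [
    (r"\bi need (.+)", ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (r"\bi am (.+)", ["How long have you been {0}?", "Why do you say you are {0}?"]),
    (r"\bmy (mother|father|family)\b", ["Tell me more about your {0}.", "How do you feel about your {0}?"]),
    (r"\bbecause\b", ["Is that the real reason?", "What other reasons come to mind?"]),
]
DEFAULTS = ["Please go on.", "Can you say more about that?", "How does that make you feel?"]

# Simple pronoun reflection, so "my boss hates me" becomes "your boss hates you".
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "your": "my", "you": "I"}

def reflect(fragment):
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(user_input):
    # Try each rule in order; the first matching keyword pattern supplies the reply.
    for pattern, templates in RULES:
        match = re.search(pattern, user_input, re.IGNORECASE)
        if match:
            reply = random.choice(templates)
            return reply.format(*(reflect(g) for g in match.groups()))
    # No keyword matched: fall back to a content-free prompt to keep the user talking.
    return random.choice(DEFAULTS)

if __name__ == "__main__":
    print("Talk to the program (type 'quit' to stop).")
    while True:
        line = input("> ").strip()
        if line.lower() == "quit":
            break
        print(respond(line))

Even a few dozen rules of this kind, applied in a loop, are enough to sustain the impression of an attentive therapist, which is precisely what surprised Weizenbaum about his colleagues’ reactions.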
These three narratives suffice to show that the roots of AI, and AI itself, are older than twenty-first-century media reports suggest. The eventual capabilities of AI were foreshadowed decades ago, as were concerns about the potential dangers arising from those capabilities.