In their 1976 paper "Computer Science as Empirical Inquiry," Allen Newell and Herbert A. Simon present the physical symbol system as the foundation on which artificial intelligence can be brought as close to human intelligence as possible. According to the authors, because physical symbol systems "are capable of intelligent action," any general intelligent action that humans perform must likewise engage a physical symbol system (Newell and Simon 118). Just as biologists take the cell as the basic unit of living organisms, symbols form the "roots of intelligent action" (Newell and Simon 119). On this view, any entity that demonstrates the ability to store and manipulate symbols meets the requirements for intelligence. With these claims in mind, this paper analyzes the philosophical repercussions of Newell and Simon's hypothesis against the AI positions of Rodney A. Brooks and of Hubert L. Dreyfus and Stuart E. Dreyfus.
In Newell and Simon's account, symbols occur as components of symbol structures, connected in some definite way to produce one or more expressions. The fundamental nature of a system's expressions is twofold: "designation," whereby an expression gives the system access to an object it represents, and "interpretation," whereby an expression designates a process the system can then perform (Newell and Simon 116). Writing systems offer a useful illustration. At one level, letters are the symbols and words are the symbol structures, or expressions; at a higher level, the words become the symbols and the sentences become the expressions. Through words one can access letters and their pronunciation, because words designate letters; at the same time, words acquire meaning through sentences, because interpreting a sentence helps one understand the definition of a term.
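A short sketch, written in Python purely for illustration, can make the two relations concrete; the lexicon and the "spell" procedure are invented for the example, not taken from Newell and Simon's own formalism.

```python
# A toy illustration of "designation" and "interpretation" (Newell and
# Simon 116). The dictionary-based encoding here is an assumption made
# for brevity, not the paper's own formalism.

letters = ("c", "a", "t")
word = "".join(letters)              # symbols compose an expression: "cat"

# Designation: the expression gives the system access to the object
# (here, a definition) that it stands for.
lexicon = {"cat": "a small domesticated felid"}

def designate(expr):
    return lexicon.get(expr, "undesignated")

# Interpretation: an expression can also designate a process, which the
# system then performs.
procedures = {"spell": lambda expr: " - ".join(expr)}

def interpret(verb, expr):
    return procedures[verb](expr)

print(designate(word))             # a small domesticated felid
print(interpret("spell", word))    # c - a - t
```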
In the same way, every system holds a collection of symbol structures and processes that operate on expressions to produce further expressions through "creation, modification, reproduction or destruction" (Newell and Simon 116). Where human intelligence is concerned, the theorists draw two critical implications. First, given the right symbol-processing programs, computers can execute assigned tasks intelligently (Newell and Simon 126). Second, humans have the characteristics of a physical symbol system because they exhibit the same "symbolic behavior" (Newell and Simon 119). In other words, the physical symbol system is a computational model of human cognition; the cognitive activity underlying intelligent action involves symbol manipulations of the same kind that computer systems perform.
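The four operations can likewise be sketched directly; the list-based memory below is an assumed simplification of what Newell and Simon leave abstract.

```python
# A hedged sketch of the four ways a symbol system produces new
# expressions: "creation, modification, reproduction or destruction"
# (Newell and Simon 116). The list-based memory is an assumption.

memory = []

def create(expr):
    memory.append(expr)               # creation: add a new expression

def modify(i, j, symbol):
    e = list(memory[i])               # modification: change one symbol
    e[j] = symbol
    memory[i] = tuple(e)

def reproduce(i):
    memory.append(memory[i])          # reproduction: copy an expression

def destroy(i):
    del memory[i]                     # destruction: remove an expression

create(("c", "a", "t"))
reproduce(0)
modify(1, 0, "b")                     # the copy of "cat" becomes "bat"
destroy(0)
print(memory)                         # [('b', 'a', 't')]
```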
Newell and Simon's ideas resurface, in altered form, in Rodney A. Brooks' 1991 paper "Intelligence without Representation." Where the former writers argue for symbol structures that permit expression, Brooks takes as his starting point the traditional reliance on "symbolic descriptions" and proposes decomposing the central systems that process them (144). Traditionally, one central information-processing system sits between input and output modules: the perception component delivers a symbolic description of the world to the action modules, which in turn use that information to act in the world (Brooks 144). The central system thereby assumes the role of a symbolic information processor in the machine. Under Brooks' alternative, the distinctions between separate perception, central, and action systems disappear; instead, individual but connected layers are each responsible for a complete activity (Brooks 144-145). Removing the central control point means that each layer has its own hardware serving a distinct goal, while the connections among layers allow one layer to signal the others for a particular action. In that arrangement, a failure in one layer need not collapse the whole system, because there is no central point of failure.
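The following sketch, again in Python and purely illustrative, captures this layered decomposition; the three layer names, the percept dictionary, and the fixed precedence rule are assumptions made for the example, not Brooks' own implementation.

```python
# A minimal sketch of Brooks-style layered control: independent,
# always-available behaviors with a fixed precedence instead of one
# central symbolic processor. The layers and the simple arbitration
# rule here are simplifying assumptions, not Brooks' hardware design.

def avoid_layer(percepts):
    """Lowest layer: react to an imminent obstacle."""
    if percepts.get("obstacle_near"):
        return "halt and turn away"
    return None                      # no opinion; defer to other layers

def wander_layer(percepts):
    """Middle layer: explore when the direct path is blocked."""
    if percepts.get("blocked"):
        return "wander along the barrier"
    return None

def seek_layer(percepts):
    """Top layer: head straight for a visible goal."""
    if percepts.get("goal_visible"):
        return "move straight toward goal"
    return None

# No central controller: every layer runs on its own, and lower layers
# take precedence when they fire, so one failure cannot stall the rest.
LAYERS = [avoid_layer, wander_layer, seek_layer]

def act(percepts):
    for layer in LAYERS:
        command = layer(percepts)
        if command is not None:
            return command
    return "idle"
```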
Brooks dubs such a machine a Creature: a "collection of competing behaviors" whose overall pattern is unknown to the entity itself but readily apparent to an observer (Brooks 145). Consider a dog seeking a bone on the other side of a fence. The top layer propels the animal straight toward its treat; upon reaching the barrier, the lowest layer comes into play and prevents the dog from hitting the fence, while the middle layer encourages it to wander and circumvent the obstacle. Once on the other side, the top layer resumes control and guides the dog to the bone. Through this example, the Creature exhibits general intelligence, and it supports Newell and Simon's picture insofar as human intelligence operates on typical behaviors that AI researchers can mimic by building symbol structures for computers.
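Running the layered sketch above through the fence scenario traces the same pattern of behavior; the percept keys are, again, illustrative assumptions.

```python
# Stepping the sketch above through the fence scenario: the observer
# sees a coherent pattern, though no layer "knows" the overall plan.
print(act({"goal_visible": True}))                    # move straight toward goal
print(act({"obstacle_near": True, "blocked": True}))  # halt and turn away
print(act({"blocked": True}))                         # wander along the barrier
print(act({"goal_visible": True}))                    # move straight toward goal
```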
The problem with both theories lies in the assumptions that Newell and Simon make and that Brooks later tries to refine. Newell and Simon based their hypothesis on conjectures about how a machine would operate if it embodied such a system; Brooks insists that rather than test an entire system at once, it is best to separate it into layers and test each one individually in the real world. However, current AI research has yet to meet the criteria Newell and Simon set, because their methods, like Brooks', fail to consider certain capacities that belong only to people.
Hubert L. Dreyfus and Stuart E. Dreyfus think otherwise. In "Making a Mind Versus Modeling the Brain," they insist that AI researchers operate on mistaken psychological, biological, and philosophical assumptions about the mechanisms of human intelligence. In their words, the philosophy underlying AI cannot provide a firm foundation because, from the start, the field "ignored or distorted the everyday context of human activity" (Dreyfus and Dreyfus 25). AI is degenerating because the solution to the problems it seeks to solve is only available in the world: human intelligence resides in the skills that people instinctively and effortlessly deploy in everyday situations. For computers to deal with unmanipulated reality, they would need everything that human beings take for granted, particularly the common sense individuals employ when facing new phenomena, and to date there is no prospect of computer programs mimicking such abilities. After all, the skills and expertise that emerge in responding to situations depend on many factors, including sensory perception, speedy mobility, and rapid brain activity available only through both heuristic and algorithmic processing (Dreyfus and Dreyfus 19).
Thus stated, Newell and Simon would likely abandon their research if they were still alive, simply because the physical symbol system hypothesis applies only to devices capable of expression. In calculators and typewriters, for instance, pressing letters and numerals on the keyboard creates symbols on the page. The digital computers that the two men targeted with their theory, however, remain subject to processes that lie outside symbolic descriptions.
Works Cited
Brooks, Rodney A. "Intelligence without Representation." Artificial Intelligence 47 (1991): 139–159. Print.
Dreyfus, Hubert L., and Stuart E. Dreyfus. "Making a Mind Versus Modeling the Brain: Artificial Intelligence Back at a Branch Point." Daedalus 117.1 (1988): 15–43. Print.
Newell, Allen, and Herbert A. Simon. "Computer Science as Empirical Inquiry: Symbols and Search." Communications of the ACM 19.3 (1976): 113–126. Print.