Ray Kurzweil’s The Age of Spiritual Machines presents the author’s assumptions about how computer technology would evolve in the few decades following the book’s publication. According to Kurzweil, future progress will be possible through two means: the “scanning the brain to download it” and the “scanning the brain to understand it” scenarios (125). In other words, Artificial Intelligence (AI) will surpass the human mind through self-replicating nanobots, through human minds that can be downloaded like software, and through computer systems that infiltrate the human body. Eventually, humanity would witness the dawn of “spiritual machines” that would hold legal rights as a species and possess a consciousness that might surpass that of human beings. To elaborate these ideas, Kurzweil made predictions about the year 2009, and as of the beginning of 2016, some of those predictions have come true while others have failed amid criticisms of AI. This paper therefore focuses on the assumptions that Kurzweil makes in The Age of Spiritual Machines and analyzes the repercussions they would pose if his predictions became a reality.
Regarding his assertions, Kurzweil holds that it is possible to determine the architecture and “implicit algorithms” of the human brain by scanning portions of the organ and mapping its interneuronal connections (124). As mentioned above, the “scanning the brain to understand it” scenario is one possible path of technological progress, and Kurzweil elaborates the idea with analogies to human societies (125). According to the theorist, the chances of understanding brain activity are high because the plan is to understand “local brain processes” rather than the brain’s “global organization” (Kurzweil 125). The successful completion of this development would, in turn, pave the way for immortality, as memories could be stored through “frequent backups” of the mind, so that only the carbon-based body dies (129). Notably, in this view the body is not responsible for memory storage or thinking processes, as the brain alone performs those functions. To that end, once the installation process is complete, it would be impossible to differentiate between the new individual and his or her old self (Kurzweil 125). In fact, even the changed person would have no recollection of the old body; just as with vinyl records, humans would be nostalgic for their original forms only when facing infinite years of living. Additionally, as one should expect with any new technology, there might be imperfections in the first attempts; nonetheless, as Kurzweil informs his readers, further research can deal with any discovered hitches and perfect the plan (125). Still, the new person would be able to interact with those around him or her, because people change every day, and that could explain any noticeable differences.
Ray Kurzweil’s first hypothesis revolves around studying the human brain to understand its interneuronal functioning and, at the same time, analyzing the thinking processes that AI developers could replicate. Evidently, the theorist’s views stem from the notion of the “physical symbol system,” through which, by concentrating on the roots of intelligent action, AI researchers have had considerable success in identifying patterns of human intelligence (Newell and Simon 114). The problem with such tactics lies in the ethical conundrum of studying the brains of the recently deceased and, where permissible, those of men and women facing “imminent death” (Kurzweil 121). In Kurzweil’s reasoning, because autopsies are medically and culturally acceptable, there is no reason to hinder investigations that hold such promise for the human populace. Contrary to this view, an autopsy seeks to determine a person’s cause of death and is morally acceptable because the examination can detect foul play and allow justice to be served where needed. What Kurzweil suggests, meanwhile, defies cultural norms by targeting the brains of both the living and the dead for the sole purpose of creating a virtual body for people who understand death as an inevitable end for all creatures that have breath. In other words, Kurzweil’s ideas would defy nature.
Next, there is the issue of supposing that machines can analyze situations with the same thinking processes present in the human brain. Humans can perform both heuristic and algorithmic activity, whereas machines are capable only of the latter (Dreyfus and Dreyfus 19). Algorithmic activity encompasses what computers can do; with the right programming, machines can perform such tasks faster than humans and claim superiority in that narrow domain. For instance, progress in AI allows systems to perform logical operations and rapid calculations that the human brain could not complete within the same time limit. However, the human brain also carries out a different, speculative form of thinking that draws information from multiple phenomena to execute an action. A fuller explanation appears in Stefano Franchi and Güven Güzeldere’s Mechanical Bodies, Computational Minds, where the authors present a philosophical analysis of AI research. On this account, an individual’s ability to cope with the world rests not on formal intelligence but on an “implicit [and] non-verbalizable set of social practices” originating from his or her interactions with the environment (Franchi and Güzeldere 87). Therefore, if the illusion that AI resembles brain activity persists, societies will become incapable of dealing with their own problems and will depend on a non-existent “technological fix” (Franchi and Güzeldere 93). By extension, an increase in the adoption of AI applications will lead to a decrease in human skills as communities become dependent on a supposedly real intelligence.
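To make the contrast concrete, the sketch below is this paper’s own illustration of the distinction drawn by Dreyfus and Dreyfus, not code taken from any cited source; every name and data value in it is hypothetical. It places a fully specified algorithmic task, which a machine completes instantly, next to a query for which no rule has been supplied.

```python
# Illustrative sketch only: algorithmic activity is exhaustively specified,
# so the machine is fast and exact; anything outside the programmed cases
# simply has no answer. All identifiers here are hypothetical.

def sum_of_squares(n: int) -> int:
    """A fully algorithmic task: every step is explicit, so it runs quickly."""
    return sum(i * i for i in range(1, n + 1))

def respond(situation: str) -> str:
    """A lookup table stands in for 'intelligence'; unlisted situations fail."""
    programmed_responses = {
        "calculate payroll": "done instantly",
        "sort the records": "done instantly",
    }
    return programmed_responses.get(situation, "no rule available")

print(sum_of_squares(10_000))                 # fast, exact arithmetic
print(respond("comfort a grieving friend"))   # "no rule available"
```

The point is not that such a program is badly written, but that speed at rule-following says nothing about the heuristic, speculative coping that the paragraph above attributes to human thought.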
Obviously, the processes required to execute an action differ between an AI application and the human brain, and the difference revolves around “bodily involvement[s] with the surrounding world” (Franchi and Güzeldere 87). Contrasting philosophical views explain this claim. On one hand, one group of philosophers promoted conceptual thinking based on rules; for instance, in the case of Kant, an entity would be a dog if it possessed a tail, had four legs, and could bark (Franchi and Güzeldere 87). On the other, a second group insisted that human interactions and societal practices are vital to a person’s thought processes and actions (Franchi and Güzeldere 87). That is why, when dealing with danger, a man or woman of sound body and mind would immediately seek safety and leave the AI-based machine behind while it is still retrieving the correct rule of behavior and applying it appropriately.
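The brittleness of the first, rule-based view can be sketched in a few lines. The toy classifier below is this paper’s own rendering of the Kantian example cited above (tail, four legs, barking) and is not drawn from any of the cited works; the sample entities and attribute names are hypothetical.

```python
# Illustrative sketch only: a fixed conceptual rule in the spirit of the
# Kantian example (an entity is a dog if it has a tail, four legs, and barks).
# The sample entities below are hypothetical.

def is_dog(entity: dict) -> bool:
    """Apply the fixed rule: tail AND four legs AND barks."""
    return (entity.get("has_tail", False)
            and entity.get("legs", 0) == 4
            and entity.get("barks", False))

three_legged_dog = {"has_tail": True, "legs": 3, "barks": True}
robotic_toy      = {"has_tail": True, "legs": 4, "barks": True}

print(is_dog(three_legged_dog))  # False: the rule rejects an obvious dog
print(is_dog(robotic_toy))       # True: the rule accepts a non-dog
```

Anyone with ordinary embodied experience of dogs makes neither mistake, which is precisely the second group’s point about social practice and bodily involvement.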
In conclusion, the assumptions that Ray Kurzweil makes render the idea of spiritual machines impossible. From ethical, sociological, and philosophical standpoints, AI cannot surpass the activities of a human brain because machines cannot sustain the interactions that exist between humans and their environments. Perhaps the identifiable problems stem from the fact that AI is the product of interdisciplinary work built on the mutual misuse of computer science and psychology (Franchi and Güzeldere 87). Newell and Simon describe combining psychological observations and experiments with computer programming to determine whether machines accomplish intelligent activity (119-120). If AI merely appropriates already existing patterns of human thought to develop its algorithmic activity, then societies have every right to view machines as a threat rather than a solution to their daily problems. In that sense, the Age of Spiritual Machines would not only rely on unethical practices but would also threaten the social and psychological components of human existence.
Works Cited
Dreyfus, Stuart E., and Hubert L. Dreyfus. "Making a Mind versus Modeling the Brain: Artificial Intelligence Back at a Branch Point." Daedalus 117.1 (1988): 15-43. Print.
Franchi, Stefano, and Güven Güzeldere. Mechanical Bodies, Computational Minds. Cambridge: MIT Press, 2005. Print.
Kurzweil, Ray. The Age of Spiritual Machines. New York: Penguin, 1999. Print.
Newell, Allen, and Herbert A. Simon. "Computer Science as Empirical Inquiry: Symbols and Search." Communications of the ACM 19.3 (1976): 113-126. Print.