Machines have always been part of humanity. Technology has separated us from the other forms of life on earth; through it we have become the dominant species of the planet, and with it our lives have become easier. It is now easier to communicate with anyone almost anywhere around the world, and easier to get someplace without exerting an exhausting amount of effort. As time went by, technology evolved to meet even greater demands. One of these demands appears in economics.
People wanted more than what industry could provide. Human labor was becoming insufficient to meet that demand, and so something stronger, faster, and more durable had to replace workers in menial tasks that required minimal effort. Enter robotics. The idea of a serving force that is automaton in nature is not new to humans, but it only started garnering real attention during the 20th century.
While there is no concrete definition of what a robot truly is, it is generally accepted that a robot is an artificial creation that can follow a set of commands, where those commands are either hardwired into it, programmed, or issued by a person for a specific purpose.
It may go unnoticed by most people, but the technology underlying robots has been part of this civilization for at least several decades: the assembly lines of manufacturing companies, the laboratories of schools, the bomb disposal droids of the police, and even the probes sent to study the vastness of space. Over time, robotics has also produced ever smarter creations, from robots that can interact with people to robots that can be considered "pets"; South Korea even hopes that by 2015-2020 every household will have a robot. With intelligent robots becoming a more and more common sight in the world, the question is raised whether these robots should be treated ethically, or should be included among the things that deserve rights.
- The Question
Now that seeing robots do the work of a man is no longer unfamiliar, many theories are floating around describing what may happen next. Some, like the American inventor and futurist Raymond Kurzweil, believe that there will come a time when artificial intelligence will surpass human intelligence, and even a time when robots will end the cycle of death in humanity. It is even believed that the day will come when men will achieve a state of superintelligence by means of technology, an event that has come to be known as the "singularity".
Others, specifically within the realm of fiction, have speculated that the day machine intelligence surpasses human intelligence will be the day human dominance over the world ends. These visions of technological Armageddon include films and literature such as "The Matrix" series, "The Terminator" series, and Karel Capek's "R.U.R.", and even videogames such as BioWare's "Mass Effect" series, among many others.
With so many different theories about what will happen to the world should robots and artificial intelligence develop into their own form of existence, the question we must now answer is: should we continue research and development in robotic technology for the good of the world?
- The Known
It is without a doubt that people already know what a robot is, what robots are capable of, and what is being done with them now. Unfortunately, most of what we know is hypothetical in nature, as robots are not yet a common enough sight for the ethics surrounding them and their technology to have become part of the constitutions of countries around the world. What the common man does know about them is what we have seen in the media.
These sources mostly consist of science fiction and reports on developments in the field itself, but more often than not, what is written in literature is what is best known by people. The most famous of these fictional rules safeguarding robotics was written by author Isaac Asimov for his short story "Runaround" in March 1942. These three laws appear again in the collection entitled "I, Robot", also by Asimov, in 1950, and they have become a staple reminder that if robotic technology is to be studied, there must be a set of rules applied to ensure that things do not get out of hand.
These three laws are as follows:
- “A robot may not injure a human being, or through inaction, allow a human being to come to harm.
- A robot must obey the orders given to it by human beings, except when such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law".
While these three laws may be the most famous, they are nonetheless still a fictional plot device, as they cannot be applied to artificial intelligence to date. Regardless, the three laws have become so well known that they now serve as the reference standard for laws of robotics, giving anyone who wishes to study robotics, even only remotely, a good reason to read them.
Regardless of their standing in the scientific community, the three laws make clear that if robotics is to be pursued, the end result must be the betterment of mankind, not its destruction. However, we cannot merely rely on human benevolence to be sure that these rules will be followed. More importantly, with the way technology is developing today, the chances are great that artificial intelligence will one day evolve to the point of being self-sustaining. What then? What happens when a robot can choose whether to follow the three laws? It is these very questions that have the world on edge over whether or not to continue researching deeper into robotics.
This is where the argument of Joseph Weizenbaum comes into play. He states that even should a time come when machines replace people in some of the most menial tasks on the planet, there are still positions where a human will be needed, especially ones that require the person to understand respect, love, care, and other qualities of the human person. Such positions include, but are not limited to, nursemaids for the elderly, judges, police officers, and therapists.
While this may sound agreeable, author Pamela McCorduck disagrees with Weizenbaum's statement, saying that when it comes to such positions of moral judgment, it would be better if the one in charge held a neutral stance. Again, while this makes a good point, one must remember that morality is an individual stance, not a social one, to begin with. If morality (on the basis of philosophical ethics) is "doing good and avoiding evil even if the sole purpose is survival", then what I consider good and bad may not be the same for my neighbor.
Thus, for a machine to take a moral stance, it must first be programmed to differentiate between what is accepted as morally right and what is considered morally wrong by society's standards. Hypothetically, if a robot were to become advanced to the point where it can choose what is morally right and wrong, then it is no longer neutral in its stance, because by then it would have chosen a pattern of ethics that it considers correct, just as a normal human being does in the first place.
If not, then it must still be programmed with a set of guiding ethics that most consider acceptable, which is exactly what a judge, a police officer, or any other person practicing the art of delivering justice acquires during their training. By this argument, it seems Weizenbaum is correct: however far robotics goes, the chance of robots replacing humans in every function of life is highly unlikely.
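To make concrete what "programming a set of guiding ethics" could even mean, here is a minimal illustrative sketch in Python that encodes Asimov's three laws as a prioritized rule check. It is purely hypothetical: the action description and its keys (`harms_human`, `ordered_by_human`, and so on) are invented for this example and do not correspond to any real robotics system or API.

```python
# Illustrative sketch only: Asimov's Three Laws as a prioritized rule
# check over a hypothetical action description. Every key used here is
# an invented placeholder, not part of any real robotics framework.

def permitted(action: dict) -> bool:
    """Return True if the described action is allowed under the Three Laws."""
    # First Law: a robot may not injure a human being. This check comes
    # first because the First Law overrides everything else.
    if action.get("harms_human", False):
        return False
    # Second Law: human orders are obeyed; orders that would harm a
    # human were already rejected by the First Law check above.
    # Third Law: self-preservation yields to the first two Laws, so a
    # self-endangering action is allowed only when it serves a human
    # order or prevents harm to a human.
    if action.get("endangers_robot", False):
        return (action.get("ordered_by_human", False)
                or action.get("prevents_human_harm", False))
    return True

# A harmful order is refused; self-sacrifice to protect a human is allowed.
print(permitted({"harms_human": True, "ordered_by_human": True}))      # False
print(permitted({"endangers_robot": True, "prevents_human_harm": True}))  # True
```

The point of the sketch is the essay's own argument: the priorities are fixed in advance by the programmer, so the machine is not neutral; it enforces whichever pattern of ethics was chosen for it.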
In an article written for the magazine "Wired" back in 2000, entitled "Why the future doesn't need us", Bill Joy stated his fear that humans are developing technology too far and too fast, to the point that it is endangering us as a species. His article raised a valid point, and he is not alone in making it. In another article written 11 years after Joy's, this time for "TIME", Lev Grossman indicates that the risk of technology destroying humanity is only real if we start creating technology that our culture is not ready for. This statement is also echoed in the 2012 videogame "Mass Effect 3". While the game also covers the question of artificial intelligence, the ethics surrounding it, and whether organics and synthetics (the artificial intelligences within the mythos of the game) can live together (indeed, the major conflict of the game is between organics and a god-like race of machines known as the Reapers), the statement in this case is taken from a different problem within the game world.
In the game, a character named Mordin Solus explains that the reason the krogan (a race of aliens) nearly brought themselves to extinction was that the other races of the galaxy introduced them to technology their culture was not ready for. The krogan were at a stage in their culture where they were still mired in violence, and without thinking it through, the Council (the ruling body of the galaxy in the game) gave them even more destructive means to wage war.
While this may not seem like a good source of literature for this study, it does have its bearing on the topic, as this pattern of technology evolving ahead of culture can be seen throughout history as well. Simply look at what happened to the indigenous societies where the explorers of the past tried to make the people of the land more "civil" according to their own standards, and you will understand what these different writers mean.
In relation to robotics, we have to be careful how far we go in our research and take into account that even the best inventions can, and most likely will, fail if they are released at a time when the world is not ready for such knowledge.
Another important field in robotics is cybernetics. While not very far from robotics and artificial intelligence, cybernetics can be considered the branch of science that deals with the limitations of evolution (according to Taylor Kirkland).
In this sense, cybernetics should be studied (even if only briefly) alongside robotics because of how it has come to be understood in the world today. When one hears the word "cyborg", the first picture that comes to mind is the amalgamation of man and machine into a whole new being that surpasses both man and robot. It is a field worth studying because it relates closely to robotic ethics, and to the reason robotics should continue to be studied despite the possible risk of an end-of-the-world scenario created by mankind itself.
While there are many cybernetic organisms in fiction (Robocop, DC's Cyborg, the Deus Ex videogame series, numerous characters from the Star Wars franchise, etc.), there are also many real-life cyborgs: men and women who, with the help of science and technology, have been able to recover from certain deficiencies caused either by accidents or by conditions they were born with. A good example is an artist named Neil Harbisson, who was colorblind before he started wearing a device called the "eyeborg" in order to "hear" colors.
The point here is that while this technology may not seem like much now, there will come a time when augmentation of the human being will change the way people live. Predictions of such times can once again be seen mostly in fictional works such as the Deus Ex videogame series, the Mobile Suit Gundam Seed series, and the movie Serenity, among many others where the topic is making people "better".
When that time does come, will the world be ready for it? More importantly, when a person does undergo augmentation, where must the line be drawn, when, and how? At what point does a cyborg stop being human, and in reverse (should artificial intelligence advance to the point where robots think for themselves), when does a robot that can feel become human?
These questions are the reason robotics is a very dangerous field of study indeed. Because the payoff of such study is so high, the abyss that opens at its feet is equally deep. If robotics succeeds to the point of practical application in everyday life, it can help solve some of the worst problems the world has ever seen: starvation, climate change, global warming, manual labor, and space travel; it may even extend life and end diseases.
But if robotics goes beyond that, if mankind creates a new sentient race that is superior to it in almost every way and does nothing to ensure that mutual respect, acceptance, and coexistence are possible, then the numerous doomsday scenarios of fiction may be just a few years away.
- The Unknown
While development in robot technology has come in leaps and bounds these past years, the reason so much remains speculation is that we truly do not know the full capacity of the field itself. As stated many times above, one of the foreseeable results of robotics is the creation of a new sentient race, one far more intelligent than and superior to mankind in many different ways.
If we base what we predict they could do on our own actions and history, we can see that they will be quite the unpredictable lot (pardon the paradox). If a measure of control is kept on them before they can develop into a full synthetic race, then we would know the limits of their capabilities, but we would also know that we have not put them to the test, and that there is still a lot we could learn if we just went "one step further".
If there is one thing about humans that is abundantly clear in our history, it is that we can never stop doing something once our curiosity is stimulated. As such, there is no doubt in my mind that no matter how long we put it off, someone, somewhere will try to make this technology that promotes synthetic life flourish; and if so, the repercussions of that person's actions can only be either glorious or tragic.
So should the world continue its research in robotics? In my honest opinion, yes. As I have stated above, the study of robotics is inevitable, as the human being will always be curious about what lies ahead. More importantly, the research has already started, and no one will simply leave it at that, knowing how much more humanity can learn and achieve in the field of robotics, and even more so in the field of cybernetics. Imagine a world where you could live to two hundred years old, where you could take pictures with your eyes, or where no one would ever suffer blindness again. The possibilities are endless and the payoffs substantial. But the risks are great as well, and to avoid them we must put a set of control measures in place until we fully understand what we are dealing with.
As such, we will have to do more research on the rules and ethics of robotics if we are to be sure whether the world should truly carry on its research in the field. We will also have to look at cases where such power might be abused, and ask what the possible ramifications would be and what solutions or preventive measures we could put in place. More importantly, research must answer: how can we add this new way of life to the world without upsetting its current order? Is the world ready for it? If not, then when will it be? These are but some of the questions that must be answered before the ultimate question of whether robotics is desirable for the future can be answered as well.
Works Cited
Asimov, Isaac. "Runaround." Astounding Science Fiction, March 1942. Short story.
BBC News. "Robotic age poses ethical dilemma." 7 March 2007. News article. Accessed 5 April 2013.
BioWare. Mass Effect 3. March 2012. Videogame.
Bostrom, Nick. "Superintelligence." Answer to the 2009 EDGE question "What will change everything?" n.d.
"Definitions of Cybernetics." Larry Richards' Reader, 2008.
Grossman, Lev. "2045: The Year Man Becomes Immortal." Time Magazine, 10 February 2011.
Joy, Bill. "Why the future doesn't need us." Wired, 2000. Article.
Ronchi, Alfredo M. eCulture: Cultural Content in the Digital Age. Springer, 2009. 319.
Stewart, Jon. "Ready for the robot revolution?" 3 October 2011. News article. Accessed 5 April 2013.
Weizenbaum, Joseph. In McCorduck, 2004.