Asimov’s Laws of Robotics
Ever since discussions of robots and artificial intelligence began, humans have wondered whether the idea of AI is even safe. Would robots harm us? Would it be acceptable to allow a robot to function without a human controlling it at all times? And even if we were to allow it, how would we know for sure? Asimov’s three laws of robotics suggest that, if we take proper precautions, it may be possible for artificial intelligence to exist in today’s world. However, Asimov’s laws state that a robot will never harm a human, which would render a robot whose primary purpose is law enforcement unable to do its job. Therefore, robots may be able to adhere to Asimov’s laws, but only if they are not expected to uphold them while working under contract for the military or law enforcement, making it impossible to agree with both ideas at once.
Asimov’s three laws of robotics state that a robot may not injure a human being or, through inaction, allow one to come to harm; that a robot must obey the orders given to it by humans except where those orders conflict with the first law, as when one human commands a robot to harm another; and that a robot may protect its own existence, but only so long as doing so does not interfere with the first two laws. Under these laws, there is no possible way a human could come to harm at a robot’s hands. However, if a robot were used by law enforcement or the military, harming humans would likely be part of its job description, putting the laws in direct conflict with its occupational duties.
An example of this would be an army of robots created specifically for use by the military. While they could be used for a variety of purposes, eventually they would likely be called upon to harm other humans. Some might scout enemy territory before human soldiers do, and others might defuse bombs, but eventually a robot would be used to kill an enemy in order to prevent human forces from being killed in action. While the military, as well as civilians at home, might immediately see this as a positive outcome, it is in direct violation of Asimov’s three laws of robotics: the robot will have harmed a person while interacting with them, and will have done so at the command of another human. Scenarios that have already occurred prove the same point, namely that Asimov’s laws and the idea of using robots for the benefit of law enforcement are in conflict with one another. During the recent protests in Dallas, a skilled marksman took up arms on a rooftop and aimed at police on the ground. He shot eleven officers, killing five of them. Wanting to avoid further bloodshed, a member of the Dallas Police Department used a robot to locate and kill the shooter. While many were in favor of this, and it was widely regarded as an action that prevented needless loss of life, a robot was still used to kill a human. In doing so, the police defied Asimov’s laws and demonstrated that the use of robots in law enforcement can never adhere to them.
Robots are developed for specific purposes. A robotic vacuum cleaner does not possess artificial intelligence and therefore does not have the capacity to harm a human or to take orders to harm one. It may run over your toe and hurt you, but this is not the same as a robot that harms you during an interaction, shoots you on command, or breaks your arm in order to save its own life. The robot used to kill the Dallas shooter is a prime example of a robot that followed orders: it was controlled by a human who wanted another human found and killed. According to Asimov’s laws, this should never be allowed to happen, but in the context of the military and law enforcement it will often be part of the job, since robots will likely be considered an expendable resource compared to humans and will be sent into dangerous zones to take on enemy combatants, as we have already seen. Dallas was an example of what happens on our own territory; if we extrapolate to what will happen in enemy territory, the loss of human life will be great. It is equally likely that nobody will attempt to adhere to Asimov’s laws because the lives being lost will be enemy lives, but this does not account for the development of artificial intelligence and the eventual possibility of self-awareness. The very fact that a robot could be capable of harming a human makes Asimov’s laws superfluous and impossible to agree with. They are merely words that lull us into a false sense of security.
In sum, while Asimov’s laws are just and are meant to ensure that no robot can harm a human, a robot that is forced to harm one will do so. If a robot works for the military or law enforcement, the probability that it will face a situation like the one in Dallas, in which it takes a human life, increases significantly. Asimov’s laws will be problematic, if not impossible, to adhere to in these situations, making the two ideas impossible to agree with together. Asimov’s laws could only have been effective if robots had never been devised as tools for military purposes. Because they have been, the two ideas conflict, and we cannot agree with them as a combination. We must choose one or the other, and given recent events at home, as well as casualties abroad, it appears the people have spoken: Asimov’s laws will apply only to robots outside of military jurisdiction. All other robotic forces will be at the whim of the military, and free to harm humans.