Human beings are curious and agile in their environment, always questioning how things came to be, how they work, grow, or operate. This shows their relentless nature: a refusal to accept things as they are rather than open them up and discover their truth. These traits have enabled the discovery of great things such as technology, machinery, fuel, and the solar system, among others. Technology has been at the center of human progress, with each passing day recording a new invention. In current times, artificial intelligence continues to gain ground against other inventions such as green energy technology. Its researchers promise great benefits, describing the various ways artificial intelligence will help save lives and make work easier. However, the major concern is the ethical side of these inventions. What differentiates human beings from such machines is the ability to determine what is right and wrong irrespective of the situation. Therefore, we cannot put our trust in artificial intelligence, because we are unsure of its obedience toward humans.
The development of artificial intelligence came after the creation of robots. Robots are programmed machines that replace manpower in many situations (Davis par. 2). They are crafty devices that make work easier, for instance in the manufacturing industries, and help with certain tasks such as rescue missions. Simple machines such as cars are robots by nature, but they require human intelligence to work efficiently. Hence, researchers began developing blueprints for robots that make decisions by themselves. At first, the machines required the guidance of human beings to help them integrate with the environment, much as one trains a dog. Good examples are the cruise control and adaptive control systems in cars and the autopilot function in airplanes (Davis par. 4).
In 2015, Google introduced its first self-driving car, which uses cameras, radar detection systems, and its own artificial intelligence. The car gained positive media coverage, and people marveled at its technological advancements (Davis par. 5). However, there are concerns about how safe such cars are and whether they are prone to malfunctions. If ordinary machinery can malfunction and cause havoc, what is the extent of the damage artificial intelligence could do? In my opinion, artificial intelligence is the future that human beings will use to kill themselves. Despite the promises and assurances made by inventors about its security and constant updates, there is a fear that it will overpower its inventors. Unlike natural intelligence, artificial intelligence uses a coded program to determine right and wrong, with no emotions whatsoever. Human beings, by contrast, act out of emotion more than out of the rules applied to them; rules are there to guide people, but a person's moral standards determine how they are enacted.
In the novel Frankenstein, Victor Frankenstein tries to create a creature with the aim of discovering the secret of life. He collects body parts and joins them together to form the creature (Shelley 1). The novel offers a great example of what happens to people who disobey the laws of life: Victor's pursuit of making life leads to more problems than progress. The creature kills his younger brother William and his friend Henry Clerval. Apart from that, people become distant from Victor because of his disturbed state, and they feel disgusted by the creature and reject the new discovery of making man. The artificial being created in the novel causes more harm than good to its creator and never fits in among people. It approaches Victor to demand a companion for itself, but Victor fears that this will only make the situation worse for humans.
In the movie Ex Machina, directed by Alex Garland, Ava, the artificial intelligence, masters human nature and uses it to her advantage to kill Nathan, her inventor, and to betray Caleb, her tester (Garland). The AI exploits the human weakness of feelings to get her way throughout the movie. Ava seduces Caleb into helping her out of the facility so that she can experience the outside world. Caleb finds Ava irresistible despite knowing that she is a robot; in his mind, he hopes that Ava has developed feelings similar to those of human beings. He becomes convinced that they have something between them, but Ava is only using him to break out. Despite Nathan's warning, Caleb helps Ava escape, and Nathan is overpowered in the struggle that follows. Ava leaves without helping Caleb, who is left trapped inside the locked house.
The movie demonstrates how an artificial intelligence can use human weaknesses to its advantage and how readily people will help it. It shows one clear danger artificial intelligence poses to humans: Ava cares only about herself and uses Caleb as her pawn. Nick Bostrom and Eliezer Yudkowsky point out that such issues are among the threatening facts about AI (1). There is broad agreement that AI still falls short of human capabilities, especially critical judgment, and whenever AI researchers discover how to build a capability, it is no longer regarded as intelligence. The authors give the example of Deep Blue, the chess program that defeated the world champion (3). Its developers no longer had command over how it played each game, since the program worked out its own winning moves. Hence, if such a scenario were to play out in real life, the developers would still be answerable for the machine's actions.
Research on argumentation in artificial intelligence continues to advance, making the technology better every day, but researchers have yet to demonstrate a fully working system. Bench-Capon and Dunne review developments and open problems in argumentation research between 1997 and 2007 and show how the problem of modeling human reasoning continues to challenge researchers (1). Argumentation research falls under four factors. The first is defining an argument's parts and their interaction (Bench-Capon and Dunne 623). The second is establishing protocols and rules that describe the argumentation process. The third is distinguishing legitimate arguments from invalid ones. The fourth is determining the conditions under which further discussion becomes redundant. Most of the research has reached only the third factor, since the fourth depends on the AI's interaction with people.
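To make the third factor more concrete, the following is a minimal illustrative sketch, not taken from the essay's sources, of a Dung-style abstract argumentation framework of the kind covered in the Bench-Capon and Dunne survey. The argument names A, B, and C and the grounded_extension helper are hypothetical choices for this example; the idea is simply that an argument counts as legitimate once every argument attacking it has itself been defeated.

def grounded_extension(arguments, attacks):
    # Build, for each argument, the set of arguments that attack it.
    attackers = {a: set() for a in arguments}
    for attacker, target in attacks:
        attackers[target].add(attacker)

    accepted, defeated = set(), set()
    changed = True
    while changed:
        changed = False
        for arg in arguments:
            if arg in accepted or arg in defeated:
                continue
            if attackers[arg] <= defeated:
                # Every attacker has been defeated, so the argument stands.
                accepted.add(arg)
                changed = True
            elif attackers[arg] & accepted:
                # An accepted argument attacks it, so it is defeated.
                defeated.add(arg)
                changed = True
    return accepted

# Hypothetical example: B attacks A, and C attacks B. C is unattacked and
# therefore accepted, B is defeated, and A is reinstated because its only
# attacker has been defeated.
print(grounded_extension({"A", "B", "C"}, [("B", "A"), ("C", "B")]))
# prints a set containing A and C

A full argumentation system would still need the other factors on top of this kind of core evaluation, such as protocols for conducting the exchange and conditions for recognizing when further discussion is redundant.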
Unlike robots, an AI lacks a kill switch, since it works around the thinking of human beings to make its moves. It discovers their weak points and creates a barrier that human beings cannot use against it, and it will therefore continue to act according to how it programs itself through interaction and discovery. Bostrom and Yudkowsky consider the idea of building a safe AI around an element X that would demonstrate good behavior (7): the machine should establish that the consequences of doing action X would not be harmful to humans. The problem with such a formula is implementing it in an AI, because X is not a constant and cannot simply be programmed in. However carefully engineers construct their machines, they cannot guarantee that a machine will never malfunction, nor that it would sacrifice itself for the safety of others. Bostrom and Yudkowsky conclude that the AIs currently in production raise few ethical issues, though algorithms with superhuman abilities and intelligence ought to exhibit equally superior behavior. Without these fundamental features, future AI will be a curse to human beings.
In conclusion, the main problem in AI research is programming moral sense into the mindsets of AIs. This remains a challenge because such sense does not exist as a constant and requires interaction with people to arrive at the appropriate measure. Hence, people should not put their hopes on developing a fully safe AI, since it might develop a barrier against its own development.
Works Cited
Bench-Capon, T. J. M., and Paul E. Dunne. "Argumentation in Artificial Intelligence." Artificial Intelligence (2007): 619-641. Print.
Bostrom, Nick, and Eliezer Yudkowsky. "The Ethics of Artificial Intelligence." Machine Intelligence Research Institute (2014): 1-21. Print.
Davis, Nicola. "Smart Robots, Driverless Cars Work - But They Bring Ethical Issues Too." The Guardian, 20 October 2013. Web. 6 May 2016. <https://www.theguardian.com/technology/2013/oct/20/artificial-intelligence-impact-lives>
Ex Machina. Dir. Alex Garland. Perf. Oscar Isaac, Domhnall Gleeson, and Alicia Vikander. 2015. Film.
Shelley, Mary. Frankenstein. Raleigh, North Carolina: Hayes Barton Press, 1914. Print.