Introduction
Artificial (computer) intelligence is one of the most promising directions of development in computer science and computer engineering. Work in the field of artificial intelligence is aimed at developing methods, tools, and technologies for designing computer systems with specific functions such as training, expert advising, robotics, and others. Unlike conventional programmers, who develop well-specified software, artificial intelligence experts must additionally formulate the specifications themselves, which is one of the biggest challenges in the design of any product. However, one issue remains open: questions continue to arise as to whether human-equivalent artificial intelligence implies free will.
At the dawn of artificial intelligence theory, scholars sought to create a model of the human brain that would possess all the human traits. Currently, scientists have developed several approaches to the creation of artificial intelligence, the most promising of which are:
creating a computer model of the human brain;
creating a program capable of learning and self-learning.
However, both approaches meet significant obstacles. Despite the large leap in the development of computer technology, the main obstacle is precisely the current shortcomings of these technologies. This statement may seem absurd: modern computers can simulate the weather, carry out the most complex calculations, solve the most complex mathematical problems, and analyze the financial situation in the world, yet they are incapable of performing simple actions that a human can easily accomplish. Artificial intelligence cannot understand the meaning of a children's book or assess a human action. The reason is that actions ordinary for a person are the result of complex interactions of neurons in the brain. Moreover, no computer has yet been built (IBM is planning to accomplish this by 2019) that could handle a model of the human brain, which includes more than 86 billion neurons; so far, scientists have only managed to create a working model of the brain that includes a small portion of these neurons. This enormous number of neurons allows people to classify actions according to need: if there is no need to perform a certain action, humans can show their unwillingness to accomplish it, thereby demonstrating freedom of will. Machines are unable to do this.
All the current approaches to artificial intelligence assume that in the future artificial intelligence will acquire a capacity for constant self-improvement (Hawkins & Blakeslee, 2004). This means that scholars are striving to create an artificial system with the intelligence of, for example, a two-year-old child, which would gradually learn something new from its mistakes. The plan is for the system to develop further into a teenager, and then from a teenager into an adult (Freedman, 1995). Taking into account the ever-increasing processing power of computers, this process of "growing up" may take not decades but only a couple of years or even a few months. Given that artificial intelligence can exist almost forever and continue its education, it could reach unprecedented heights in any field of science (Hawkins & Blakeslee, 2004). However, the development of such intelligence presupposes that the machine will be akin to the human brain, fully "equipped" with a consciousness that is otherwise inherent only to humans. This idea is often ridiculed; however, scholars such as Dehaene have concluded that human consciousness is "simply" the exchange of information in the brain. Following this claim, scientists are attempting to prove that a machine can have an implemented brain with a pre-programmed algorithm for deciding whether its actions are right or wrong (Freedman, 1995). Once specific algorithms are mentioned, it becomes difficult to conclude that a machine with implemented artificial intelligence can have freedom of will (Edwards, 2013). Nonetheless, scholars attempt to prove humanity wrong. Whether this will have adverse consequences for humanity is not yet known.
However, if one delves deeper into the problem of feeling secure in the event of using AI, there is no real threat. The only problem one can think of is connected to the morals and goals of those who created the AI, and especially of those who specifically ordered it. AI is a set of actions, so-called algorithms. Even if a self-learning AI is created, it remains an algorithm of specific actions. It might learn to read, write, build, shoot, re-create itself, and indeed anything else that can be learned. At the same time, the need to use any skill is also determined by the core code. If there is no need, the AI will not use the learned skill or utter a memorized phrase in response to external conditions. What will happen next? Nothing. It will simply be able to accomplish any task with the help of the resources provided to it. It will have the capacity, but it will not be able to issue the command itself without a specific condition for it. Tales of AI turning rogue or rebelling against people, spreading across humanity like a virus and deciding to rid the Earth of the suffering caused by that virus in spite of all higher powers, will remain simple stories for future blockbusters. Machines will not have the free will to take any action by themselves; they will have a certain algorithm triggered by a specific condition (Edwards, 2013).
Analysis
Why is the creation of AI, or rather the very concept of AI, not a threat to humanity? AI was created by man. Man has studied himself so as to understand some of his own features, without understanding everything to the very end, and is attempting to create something similar to mental activity with increased power and speed of reading, analysis, computation, and emulation (Muller, 2016). At the same time, AI as a creation has a specific goal that its creator is aiming to achieve (Nadeau, 1991). Otherwise, what is the point?
It is important to understand what man is from a psychological perspective. Man is a set of action algorithms for various conditions, a basic minimum of which is recorded at birth (genetic information) and which is supplemented and filled with new algorithms (units) in the process of life (social, egregorial, and intuitive information) (Muller, 2016). However, what is the most important thing inherent in man alone? The most important thing is will and consciousness as components of the soul. It is better to leave aside the dispute on the subject of religion; everyone is aware of the existence of such a notion as will: the will for this or that action. It manifests itself when a certain stimulus appears, a driving force toward certain actions or inaction (Kaku, 2014). This then sends signals to the body to make man act: eat, sleep, survive, multiply, and so on. The same applies to social attitudes: culture, morality, the purpose of society, moral influence of every possible kind, attitude, outlook, and a worldview formed in a certain way for specific reasons. The same applies to obtaining information from the upper levels of the Universe: intuition, insight, revelations, and so on (Roth, 2013). All of this, in close, complementary, and interpenetrating totality, determines the motivation for a specific action (Kaku, 2014). It is important to note that the suppression of man's will, as well as the targeted disruption of man's psyche, including through the introduction of contradictions, the use of alcohol and drugs, deliberate deception and suggestion, and targeted pressure on the emotions and instincts in the absence of proper resistance, turns man into an obedient software bio-module (a bio-robot) (Kasaki et al., 2016). Almost the same mechanism is at work in the field of AI, with the exception that, in the case of a machine, none of it will work, since there is no will of its own to suppress in the first place.
As for AI, it will never be able to have a will of its own. How would it determine its goals? No internal stimuli will appear in it; goals can be programmed only from the outside (Nadeau, 1991). Only the core code, which determines the "condition-action" pairs, is capable of this. A real AI will have its action algorithm in the following form: "goal - condition - action - result - test of the result for compliance with the goal (feedback) - correction according to the goal - new condition - action," and so on. AI will also not be able to demonstrate any emotion. Emotions can only be uploaded into the system from an external source, and only in the form of a "condition-action" code; the result will merely be a specific named emotion designated to be triggered under certain conditions (Rao, 2013).
If there is a specific conceptual goal, encoded as top priority in the "goal - condition - action - result - feedback - correction" chain, then any input conditions that contradict the main goal will simply be ignored and erased without being saved (Rao, 2013). For example: "help humanity in specific cases - murder is forbidden - someone attacks - knock the weapon out - attack - imprison the attacker - allow people to hold a trial - escort to jail - change to patrol mode," and so on.
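To make the "goal - condition - action - feedback - correction" chain more concrete, the following minimal sketch in Python shows how such a rule table might be wired together. The rule names, the choose_action helper, and the hard-constraint set are illustrative assumptions for this essay, not an algorithm taken from any of the cited sources.

```python
# A minimal sketch of the "goal-condition-action-feedback" loop described above.
# All names and rules here are illustrative assumptions.

TOP_PRIORITY_GOAL = "help humanity"     # encoded as the highest-priority goal
FORBIDDEN_ACTIONS = {"murder"}          # hard constraints that cannot be overridden

# Condition-action rules: each maps an observed condition to a planned action.
RULES = {
    "someone attacks": "knock weapon out",
    "attacker disarmed": "imprison attacker",
    "attacker imprisoned": "escort to jail",
    "jail reached": "change to patrol mode",
}

def choose_action(condition: str):
    """Pick the action matching the condition, ignoring anything that
    contradicts the top-priority goal or the hard constraints."""
    action = RULES.get(condition)
    if action is None or action in FORBIDDEN_ACTIONS:
        return None          # contradictory or unknown input is simply discarded
    return action

def control_loop(observations):
    for condition in observations:
        action = choose_action(condition)
        if action is None:
            continue         # no matching condition: no internal stimulus, no action
        # feedback step: report the result so it can be checked against the goal
        print(f"condition: {condition} -> action: {action}")

if __name__ == "__main__":
    # "kill the attacker" has no matching rule and is therefore ignored
    control_loop(["someone attacks", "kill the attacker", "attacker disarmed"])
```

The point of the sketch is simply that every step, including the apparent "refusal" to act, is itself a pre-written rule rather than a choice.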
Some scholars claim that there is no need to impose any restrictions on the code of artificial intelligence in order to prevent a negative attitude toward its creators (Rao, 2013). It is only necessary to ensure that the process by which artificial intelligence becomes a person takes place in a friendly environment that would instill in it the moral qualities that would not allow it to harm its creators (Roth, 2013). However, in today's world this approach sounds like a utopia. The human morality system is imperfect. For humanity to be the role model for artificial intelligence, it must first correct its own mistakes and problems and inculcate in itself the moral values with which the robots are then to be programmed. However, there are no prerequisites for this. Humanity has witnessed thousands of years of aggression and war, and the genocide of entire peoples. New global challenges keep appearing before humanity that it is currently incapable of solving or preventing.
We are surrounded by a whole array of devices equipped, to one degree or another, with artificial intelligence. An air conditioner knows when it needs to turn on or off to maintain the desired temperature, based on a rule-driven system. The personal assistant on a smartphone, such as Siri on the iPhone, is able to process natural language and not only perform standard functions such as sending messages, but also learn the user's preferences over time in order to provide better recommendations; this is the data-driven approach.
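As an illustration of the rule-driven case, the air conditioner's behavior can be reduced to a handful of fixed comparisons. The temperature values and the function name in the sketch below are assumptions made purely for illustration; they show that no "decision" beyond the pre-set rules is ever taken.

```python
# A minimal sketch of the rule-driven air-conditioner logic mentioned above.
# The threshold values and function name are illustrative assumptions.

TARGET_TEMP = 22.0      # desired room temperature in degrees Celsius
TOLERANCE = 0.5         # allowed deviation before the unit reacts

def thermostat_decision(current_temp: float, is_on: bool) -> bool:
    """Return the new on/off state purely from fixed rules."""
    if current_temp > TARGET_TEMP + TOLERANCE:
        return True      # too warm: turn the air conditioner on
    if current_temp < TARGET_TEMP - TOLERANCE:
        return False     # cool enough: turn it off
    return is_on         # inside the tolerance band: keep the current state

print(thermostat_decision(24.0, is_on=False))   # True  -> switch on
print(thermostat_decision(21.0, is_on=True))    # False -> switch off
```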
Generally speaking, we do not even notice how robots have stopped being science fiction: the robot vacuum cleaner, the robot lawn mower, and the robotic surgical system that confidently replaces a person, especially during delicate operations such as eye surgery. There are even robot therapists (Kaku, 2014). Their task is to detect, while a person talks, signs of depression or any suicidal tendencies, and to calm the person down and offer advice. Many people do not even realize that they are communicating not with a doctor but with a computer.
Although at first glance it seems that the most difficult step is data processing, this is not always the case, especially with a data-driven approach. We add the relevant conditions, choose several options with the appropriate calculations, and get an AI that is fully functional and almost independent of human maintenance (Fillard, 2016). This is proof that machines have no free will, only a specific set of rules they follow and actions performed according to the specific algorithm the AI is pre-programmed with. Nonetheless, to make AI as close to humans as one can possibly get, we just need to give it the opportunity to "argue" with similar AIs, with the possibility of replacing and updating the list of actions for various conditions, including by copying from each other. Yet even in such an event, all AIs remain programmed, and no true development will take place (Fillard, 2016).
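The data-driven case works the same way in principle: what looks like a learned preference is still a deterministic computation over stored data. A minimal sketch follows, assuming a toy request history and an illustrative recommend function that are not drawn from any cited source.

```python
from collections import Counter

# A minimal sketch of the data-driven case: the "preference" the assistant
# appears to learn is nothing more than a frequency count over past requests.
# The sample data and function name are illustrative assumptions.

past_requests = ["jazz", "news", "jazz", "weather", "jazz", "news"]

def recommend(history):
    """Suggest whatever the user has asked for most often so far."""
    counts = Counter(history)
    return counts.most_common(1)[0][0]

print(recommend(past_requests))   # "jazz"
```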
The more ignorant humans are, the more freedom of will we have. This statement is interesting from several perspectives. For one, this claim follows from the statement that machines are more complex and can perform actions that require more "intellect." AI is indeed more complex, but only because humans have made it to fit a certain purpose with a certain code. From another point of view, AI in the body of a "robot" has no freedom of will to change its occupation or its code, unlike humans, who can easily transfer from one sphere of work to another. Freedom of will is the desire or unwillingness to perform a specific action, whereas a machine cannot choose whether or not to carry out a task; it is forced to accomplish the task set by the code a human has uploaded into it. Thus, AI is only able to imitate rather than create something of its own.
References
Edwards, J. (2013). Freedom of the will. Place of publication not identified: Digireads.com.
Fillard, J. (2016). Brain vs computer: The challenge of the century. Hackensack, New Jersey: World Scientific.
Freedman, D. (1995). Brainmakers: How scientists are moving beyond computers to create a rival to the human brain. New York: Touchstone.
Hawkins, J., & Blakeslee, S. (2004). On intelligence. New York: Times Books.
Kaku, M. (2014). The future of the mind: The scientific quest to understand, enhance, and empower the mind. New York: Doubleday.
Kasaki, M., Ishiguro, H., Asada, M., Osaka, M., & Fujikado, T. (2016). Cognitive neuroscience robotics. Japan: Springer.
Muller, V. (2016). Risks of artificial intelligence. Boca Raton, FL: CRC Press.
Rao, R. (2013). Brain-computer interfacing: An introduction. New York: Cambridge University Press.
Roth, G. (2013). The long evolution of brains and minds. Dordrecht; New York: Springer.