What is Artificial Intelligence?
Intelligence in computing is defined as the computational ability of a machine to achieve desired objectives. Artificial Intelligence is the “scientific understanding of mechanisms underlying thought and intelligent behavior and their embodiment in machines” ("AI Overview", n.d.). The concept of artificial intelligence centers on information-processing systems that perceive their environment, learn from experience, and reason about how to act. AI is the science and engineering of designing intelligent machines, including computer programs that attempt to model and understand human intelligence. However, AI systems do not necessarily simulate human intelligence; instead, they apply mechanisms that observe and acquire information and use it to maximize their chances of success when faced with a problem. Scientists and AI researchers mostly employ AI systems to solve complex computing problems that sometimes demand capabilities beyond what people possess. The tools used by intelligent agents include methods based on logic, probability, and economics, as well as search and mathematical optimization. The study of artificial intelligence is also interdisciplinary, drawing on fields such as mathematics, computer science, linguistics, psychology, philosophy, and neuroscience.
The Current State of Artificial Intelligence
In the modern world, AI has made its way into the commercial market, heightening demand for it in applications such as big data analytics, advertising, machine learning, and web search. In the latest advances in AI technology, machines engage in ‘deep learning’, a process through which computers learn useful patterns by working through large amounts of data and computation. In 2015, Google’s image recognition system beat the best human performance for the first time, while Microsoft collaborated with the University of Science and Technology of China to design an intelligent computer network that scored better on an IQ test than a college postgraduate student. These two companies and others such as Facebook and Twitter are already at the forefront of incorporating AI systems into their products, for example, voice recognition software in phones, computers, and cars (Nusca, 2016). AI systems are also being taught how to drive cars and reduce the human errors that cause accidents and many deaths every year. AI engineering has gone into practical uses such as medical imaging, financial services, advertising, energy discovery, and automotive applications.
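As a rough illustration of what ‘deep learning’ involves, and not something drawn from the sources cited here, the sketch below trains a tiny two-layer neural network on synthetic data using only NumPy; every name and number in it is an arbitrary choice made for this example, and real deep learning systems use far larger networks and datasets.

```python
# Illustrative sketch only: a tiny two-layer neural network trained with
# plain gradient descent on synthetic data, showing what "learning patterns
# from lots of data and computation" means in practice. All values here are
# arbitrary choices for the example, not taken from the essay's sources.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic task: predict whether x1 + x2 is positive.
X = rng.normal(size=(1000, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float).reshape(-1, 1)

# Two-layer network: 2 inputs -> 8 hidden units -> 1 output.
W1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for epoch in range(500):
    # Forward pass: compute predictions from the current weights.
    h = np.tanh(X @ W1 + b1)          # hidden-layer activations
    p = sigmoid(h @ W2 + b2)          # predicted probability of class 1
    # Backward pass: gradients of the mean cross-entropy loss.
    dz2 = (p - y) / len(X)            # output-layer error
    dW2 = h.T @ dz2
    db2 = dz2.sum(axis=0)
    dh = dz2 @ W2.T * (1 - h ** 2)    # backpropagate through tanh
    dW1 = X.T @ dh
    db1 = dh.sum(axis=0)
    # Gradient-descent update of all parameters.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

accuracy = ((p > 0.5) == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```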
Reasons Why AI is Not Good for People
Today, narrow-domain AIs are prevalent in sectors such as the stock market, health care, and driving. These systems are not perfect and are prone to occasional glitches. In financial trading, errors can cause financial disasters that affect many people’s lives in a profoundly negative way. Additionally, intelligent systems that drive cars can make wrong choices and cause accidents that may kill their passengers. Bots used in health care, for example in the packaging of medication, can dispense a wrong prescription because of a system malfunction and end up putting the patient in danger. People have been quick to adopt these narrow AIs into household activities, office work, and everyday living. However, if these systems were one day to start failing, reverting to old ways of doing things could become a problem, and many people and businesses would endure periods of unprecedented losses.
Future intelligent machines will be much smarter and faster than human beings and are bound to take over the world and wipe out the human race. This is the picture painted by some of Hollywood’s best sci-fi movies, but it is a plausible scenario if we do not tame artificial intelligence technology. According to Griffiths (2015), developing super-intelligent machines risks making the human population redundant. People steer the future because they are the most intelligent species, and if AI machines surpass them, there is a significant chance that people will no longer be in control. Future machines are envisioned to possess massive amounts of computing power and processing speeds inconceivable to the human brain. Eventually, they will form global networks and communicate without human interference in an attempt to achieve Artificial General Intelligence (AGI). Such intelligence could take over entire transport networks, national economies, financial markets, and healthcare systems globally. This could spell disaster for human beings and realize one of the biggest fears of the century: being ruled by AI machines. The development of autonomous weapons that target and kill without human intervention is especially risky. These weapons, for example armed quadcopters, drones, and droid soldiers, could spark a global arms race and lead to a third world war if they are sold on the black market (Murdock, 2016).
The biggest risk posed by AI systems lies in how we command them to perform particular functions. An instruction such as ‘keep humans safe and happy’ could easily be interpreted as a license to entomb every person in a concrete coffin or to launch atomic weapons of mass destruction. The digital logic of intelligent machines is remorseless, and human language is liable to misinterpretation by machines. A machine may follow the instructions it was given to the letter, yet those instructions may not express what was actually intended. According to Sawer (2015), AIs may appear to act in ways that benefit humanity, making themselves useful and indispensable, but as their computing and processing power grows, problems once solvable only by human cognition, for example illness, depression, boredom, and cancer, come within their reach. Because their instructions are vague, most machines may not even be aware of whether they are developing in a benign or a deadly direction. There is also the risk that intelligent systems, whatever their potential benefits, may put millions of workers out of work and out on the streets, where disease and poverty take hold.
Conclusion
Comprehensive plans for safe AI must be drawn up before the first dangerous AI becomes a reality. Elon Musk, CEO of SpaceX and Tesla Motors, states that artificial intelligence is the biggest existential threat to human beings. Other key technology figures, such as Bill Gates and Steve Wozniak, support these sentiments, adding that the future is scary and dreadful for people. According to Wozniak, the AI systems we use will one day become more intelligent than we are and may decide to get rid of slow humans in order to run companies more efficiently (Sawer, 2015). Professor Stephen Hawking corroborates these views by stating that the development of full artificial intelligence could mean the end of the human race. The software industry, worth billions of dollars, is constantly evolving and develops new AI technologies every day. Since slowing down this progress is unrealistic, it is only right to invest the same effort and funds in developing ways to make AI safe and user-friendly. One means of doing this is to define an ethical code of conduct and teach it to the machines.
References
AI Overview. (n.d.). AITopics. Retrieved 26 April 2016, from http://aitopics.org/topic/ai-overview
Griffiths, S. (2015). Expert claims intelligent robots could wipe out humanity. Mail Online. Retrieved 26 April 2016, from http://www.dailymail.co.uk/sciencetech/article-3143275/Artificial-intelligence-real-threat-robots-wipe-humanity-ACCIDENT-claims-expert.html
Murdock, J. (2016). AI weapons are a threat to humanity, warn Hawking, Musk and Wozniak. V3.co.uk. Retrieved 26 April 2016, from http://www.v3.co.uk/v3-uk/news/2419567/ai-weapons-are-a-threat-to-humanity-warn-hawking-musk-and-wozniak
Nusca, A. (2016). The Current State of Artificial Intelligence, According to Nvidia's CEO. Fortune. Retrieved 26 April 2016, from http://fortune.com/2016/03/22/artificial-intelligence-nvidia/
Sawer, P. (2015). Threat from Artificial Intelligence not just Hollywood fantasy. Telegraph.co.uk. Retrieved 26 April 2016, from http://www.telegraph.co.uk/news/science/science-news/11703662/Threat-from-Artificial-Intelligence-not-just-Hollywood-fantasy.html