One of the most complicated machines in the world is the human brain. Human beings have long pondered its complexity and the beauty of its functioning, and have wanted to create an object that can think the way a human does: an object that can replicate the functioning of the human brain and produce the desired results. This seemingly innocuous desire has been the stepping stone for an entire branch of study: Artificial Intelligence.
The idea, despite the modern-sounding name, is not a new one. It has been in people's minds ever since they began thinking about thinking. The evidence for this is that mechanized life forms have featured in mythologies across almost all civilizations; the automatons of Hephaestus in Greek mythology and the devices of Hero of Alexandria are examples. Later, the idea of a mechanized man crept into novels, and nineteenth-century fiction began to feature intelligent devices that could think and work like a man.
Apart from the fascination with mechanized robots, philosophers across the world have independently tried to figure out how the brain works and to formalize the structure of human thinking. In the 17th century, mathematicians like Leibniz tried to standardize rational thought. He wanted rational thought to become a mathematical entity that could be worked upon by calculation, so that there would be a single mathematical language of reasoning; in case of any logical dispute, calculation would settle it. The relationship developed between mathematics and logical thinking in the 19th century gave the idea a new lease of life, and the works of mathematicians such as Boole and Frege in the 19th and early 20th centuries aided the eventual development of artificial intelligence. Though the topic had been in the minds of philosophers and mathematicians for centuries, the actual revolution in this field took place only in the 20th century.
EVOLUTION:
1940-1956:
This was a primitive stage of the field. Thinking machines and robots found mention in fiction rather than in scientific journals. It was this period that set the goals to be achieved, and some first steps were taken in that direction.
Present-day machines use digital signals, but in those days the idea of digital systems was hardly known, so the machines that were built used analog circuits. W. Grey Walter designed robots called 'turtles' that worked on analog circuitry. Around the same time, researchers found that the brain is a complex electrical network made up of neurons that fire in an all-or-nothing fashion; this was a basic motivation behind digital theory, built on 0's and 1's. Norbert Wiener's book Cybernetics was a breakthrough work on the control of electrical networks. Alan Turing's theory of computation established that complex calculations can be carried out digitally, and Claude Shannon's information theory explored digital signalling further. In 1950, Turing published a paper describing a test to determine whether a machine is 'thinking': he asserted that if a machine can hold a conversation indistinguishable from that of a human, it can be said to be thinking. This is considered to be the first serious paper on Artificial Intelligence. It was during this era that electronic computers came into existence, giving an impetus to the work needed to replicate human thinking, and programs were written to play games like chess. Herbert A. Simon and Allen Newell created a program that proved 38 of the first 52 theorems in Principia Mathematica, the masterpiece of mathematics published between 1910 and 1913. The first models of neural networks were also designed in this period, with scientists analysing the brain as a circuit that processes electrical signals.
1956-1970:
It was at a conference at Dartmouth College in 1956 that the branch of study called Artificial Intelligence formally came into existence. It was a glorious period for the field. During this period the underlying technology went through a transformation: primitive vacuum tubes were replaced with transistors and later with integrated chips, which gave Artificial Intelligence a boost. Researchers in this era took a trial-and-error approach in their programming: the machine moved step by step through its reasoning, and if it reached a dead end it backtracked to the previous step and tried another path (a sketch of this idea follows below). The problem was that the number of combinations to reach the right answer was very high, so researchers later formulated rules to judge whether a line of reasoning had any prospect beyond a particular step; with these rules, machines would not take paths that could not lead to a solution. These features were captured in a program called the 'General Problem Solver'. The Geometry Theorem Prover and SAINT were impressive programs that could solve geometric and algebraic problems, and the Stanford Research Institute developed a robot called Shakey. Later, programs were developed to communicate in natural human language: ELIZA was the world's first chatterbot, though it simply replied from pre-stored patterns. Organizations working on artificial intelligence received substantial financial grants from ARPA (Advanced Research Projects Agency). In 1965, Gordon Moore observed that the number of transistors on a chip, and with it processing power, would roughly double every two years. This was a breakthrough for AI, which needed devices with high processing speed.
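The search procedure just described can be illustrated with a short sketch: take a step, skip branches a simple rule says are hopeless, and backtrack from dead ends. The Python below is not the General Problem Solver's actual code; the grid maze, the promising rule and all names are invented for illustration, but the try/prune/backtrack structure is the technique the paragraph describes.

```python
# Illustrative backtracking search with a simple pruning rule.
# The maze, goal and helper names are hypothetical; only the
# try-step / prune / backtrack structure mirrors the early AI programs.

def solve(state, goal, moves, promising, path=None, visited=None):
    """Depth-first search that backtracks on dead ends and skips
    branches the 'promising' rule says cannot reach the goal."""
    path = path or [state]
    visited = visited or {state}
    if state == goal:
        return path
    for nxt in moves(state):
        if nxt in visited or not promising(nxt, goal):
            continue            # prune: rule says this branch is hopeless
        visited.add(nxt)
        result = solve(nxt, goal, moves, promising, path + [nxt], visited)
        if result:
            return result       # success found somewhere below this step
        # otherwise fall through: backtrack and try the next move
    return None                 # dead end, caller will backtrack


# Toy example: walk on a 4x4 grid from (0, 0) to (3, 3).
def moves(pos):
    x, y = pos
    candidates = [(x + 1, y), (x, y + 1), (x - 1, y), (x, y - 1)]
    return [(a, b) for a, b in candidates if 0 <= a < 4 and 0 <= b < 4]

def promising(pos, goal):
    # crude pruning rule: skip squares more than 5 moves from the goal
    return abs(pos[0] - goal[0]) + abs(pos[1] - goal[1]) <= 5

print(solve((0, 0), (3, 3), moves, promising))
```

Running this prints one path from (0, 0) to (3, 3). As the paragraph notes, without the pruning rule the number of paths to try grows combinatorially, which is exactly the problem the early heuristics were meant to tame.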
1970-1980:
This was a drought period for AI. Researchers could not progress the way they had expected. Machines could solve problems in algebra and geometry but lacked basic commonsense abilities such as recognizing objects; this is now called 'Moravec's Paradox'. Critics argued that many problems would take an impractically long time to solve given the computational speed of the machines of the day, and some believed that machines could never match human intelligence. Expectations from these devices had also been raised so high that, when they were not met, interest in the field faded. Funding for AI projects dropped as no visible progress was being made.
Revival: 1980-1987:
In this era there was a rise of 'expert systems': systems that were expert in one particular domain, which side-stepped the 'common sense' problem of the previous era (a toy rule engine in that spirit is sketched below). It was also during this era that Cyc, a system with a huge database of everyday knowledge, was begun as an attempt to address the defects pointed out earlier. The connectionist approach also found new life long after 1970: in 1982, John Hopfield demonstrated a new form of neural network that could learn and process information in a different way. Japan took the initiative to fund its Fifth Generation computer project, other countries followed suit, and DARPA regained interest. An expert system called XCON deployed at Digital Equipment Corporation saved a lot of money for the company; by 1986 it was saving the organization about 40 million dollars a year.
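Expert systems of this kind encoded a specialist's knowledge as if-then rules that fire whenever their conditions are satisfied. The following is a minimal forward-chaining rule engine, offered as a toy sketch; the rules and fact names are invented for illustration and are not taken from XCON or Cyc.

```python
# Toy forward-chaining rule engine in the spirit of 1980s expert systems.
# Rules and facts are invented for illustration only.

rules = [
    # (conditions that must all be known facts, conclusion to add)
    ({"customer_needs_database", "large_workload"}, "order_big_disk"),
    ({"order_big_disk"}, "order_extra_power_supply"),
    ({"customer_needs_terminal"}, "order_serial_card"),
]

def run(facts):
    """Keep firing rules whose conditions hold until nothing new is derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)   # rule fires, a new fact is derived
                changed = True
    return facts

print(run({"customer_needs_database", "large_workload"}))
```

Starting from the two facts shown, the engine derives 'order_big_disk' and then 'order_extra_power_supply'; this chaining of domain-specific rules is what made configuration systems like XCON valuable.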
1987-1995:
Once again, the field of artificial intelligence went through a rough patch. With general-purpose desktop computers becoming more powerful than the specialized machines that ran expert systems, demand for expert systems collapsed. The goals of Japan's Fifth Generation project, charted in 1981, had still not been met by 1991. DARPA concluded that the field would take a long time to mature and decided to concentrate on goals that would give quick results, so once again project funding was cut. It was also during this span that some researchers began to argue that intelligent machines need a body and sensors in order to respond to the environment.
1995-Present:
On 11 May 1997, a supercomputer named Deep Blue defeated the reigning chess champion Garry Kasparov in a match, a great achievement for AI. Since then, various systems have matched or bettered human performance on specific tasks. In 2011, Watson, an IBM question-answering system, defeated two of the greatest Jeopardy! quiz champions ever. This is a clear indication of how much AI has grown over the years.
A breakthrough effort currently underway is the Google Driverless Car, a car that drives itself with no human at the controls and is often called the car of the future. The project is headed by Sebastian Thrun, whose team won the DARPA challenge with a driverless car that travelled a stretch of 133 miles. The cars being developed are tested with a veteran driver in the driver's seat and an engineer in the passenger seat, and driverless cars have been permitted on public roads in some states of the United States. Each car has a range finder mounted on its roof: a Velodyne 64-beam laser that helps it build a 3-D view of the world around it and find its way. The car attends to every detail, such as traffic lights, and carries high-precision equipment to steer through heavy city traffic. There are 10 cars in the fleet: 6 Toyota Prius, 1 Audi and 3 Lexus RX450h. It has its shortcomings as well: it travels very slowly near crossings in order to be cautious, it cannot work on snow-covered roads, and it cannot recognize potholes or a police officer signalling it to stop. The car is expected to be available in the market for commercial use by 2017.
The field of artificial intelligence has come a long way from what it once was; the journey from mere calculation to a machine that can drive itself is a huge one. But many of the dreams conceived by scientists long ago are yet to come true, which is an indication that there is still a long road ahead for this field.
REFERENCES
- Lenat, Douglas; Guha, R. V. (1989), Building Large Knowledge-Based Systems, Addison-Wesley, ISBN 0-201-51752-3, OCLC 19981533
- Lighthill, Professor Sir James (1973), "Artificial Intelligence: A General Survey", Artificial Intelligence: a paper symposium, Science Research Council
- Maker, Meg Houston (2006), AI@50: AI Past, Present, Future, Dartmouth College, retrieved 16 October 2008
- "Silicon Valley vs. Detroit: The Battle For The Car Of The Future", Forbes, May 27, 2013
- Turing, Alan (October 1950), "Computing Machinery and Intelligence", Mind LIX (236): 433–460, doi:10.1093/mind/LIX.236.433, ISSN 0026-4423, retrieved 2008-08-18.
- "Moral Machines". The New Yorker. November 27, 2012. Retrieved August 24, 2013.