ASIMO’s concept of a chair is not an instance of Searle’s original intentionality. In this context, intentionality can be translated roughly to mean representation. According to Searle, it is not right to say that a machine thinks merely by virtue of running programs. In such cases, the intentionality exhibited by these programs comes from the original intentions of the interrogator and the programmer, both of whom are outside the machine. Thus, the machine is not making judgments based on thoughts; rather, it is just giving specific answers to specific questions based on the program it runs. This is similar to a situation in which a machine is taught to give answer “a” each time it is asked question “1” (Calderone, 2016).
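To make this contrast concrete, consider a minimal sketch of such a purely programmed responder. This is not ASIMO’s actual software; the question–answer pairs are invented for illustration:

```python
# A purely programmed responder: every answer is fixed in advance by the
# programmer, so any "intentionality" here is the programmer's, not the
# machine's.
RESPONSES = {
    "1": "a",
    "2": "b",
}

def respond(question: str) -> str:
    # The machine never judges; it only looks up what it was told to say.
    # A question its programmer did not anticipate gets no real answer.
    return RESPONSES.get(question, "no programmed answer")

print(respond("1"))  # -> "a"
print(respond("3"))  # -> "no programmed answer"
```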
ASIMO’s concept of a chair is different from this, since it points toward a genuine artificial intelligence that is not based on programming alone. The robot can actually remember faces and other objects, and can even make judgments about objects it has never seen before. For instance, in the video, three different types of chairs were brought for it to identify. Despite never having seen these particular objects, the robot was able to process information about their physical features and judge what these objects really were (Calderone, 2016). This means the response was not dictated by the original intentions of the interrogator or the programmers.
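The source does not describe ASIMO’s internals, but the kind of generalization the essay attributes to the robot can be sketched with a toy similarity-based classifier. The feature vectors, labels, and example objects below are invented assumptions, not ASIMO’s method:

```python
# Toy sketch: judging a never-seen object by its similarity to remembered
# examples, rather than by a fixed question->answer table. Each remembered
# object is a (seat height, backrest height, leg count) feature vector;
# all values here are illustrative.
known_objects = [
    ((0.45, 0.40, 4), "chair"),   # dining chair
    ((0.50, 0.35, 1), "chair"),   # office chair on a pedestal
    ((0.75, 0.00, 4), "table"),
    ((0.40, 0.00, 3), "stool"),
]

def classify(features):
    # Nearest-neighbor judgment: compare the new object's features with
    # every remembered example and adopt the label of the closest one.
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(known_objects, key=lambda ex: distance(ex[0], features))
    return label

# A chair design never encountered before: unusual proportions, two legs.
print(classify((0.48, 0.30, 2)))  # -> "chair"
```

The point of the sketch is that the answer “chair” was never written into the program for this object; it emerges from how the system integrates its stored information.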
Original intentionality is not sufficient for consciousness. Merely carrying information, which is the minimal mark of intentionality, is something even the simplest devices do, yet consciousness involves far more than this narrow capacity to carry and process information. For instance, a compass carries information about direction, just as the mercury level in a thermometer carries information about temperature, yet neither is conscious. Robots likewise carry information fed into their systems by programmers. However, they learn, to some extent, to integrate this information and produce different outputs in different scenarios, outputs that do not reflect the original intention of the programmer or the interrogator. This shows that their consciousness does not depend on original intentionality (Jacob, 2006).
Works Cited
Calderone, G. (2016). Can a Machine "Think"? Learning.hccs.edu. Retrieved 11 May 2016, from http://learning.hccs.edu/faculty/gina.calderone/phil1301-7/reading-guides/can-a-machine-think/
Jacob, P. (2006). Intentionality. Plato.stanford.edu. Retrieved 11 May 2016, from http://plato.stanford.edu/entries/intentionality/