As robots become ubiquitous in societal environments, concern has grown over the consequences of their proliferation (Sharkey, 2008). Humans routinely face dilemmas in which they must choose among options to act upon, and every available option carries its own moral obligations. Ethics, in practice, is concerned with finding means to resolve such dilemmas, and different schemes can be adopted for conducting moral deliberation; which scheme to apply, if any, is left to the individual to decide. Robots, however, are engineered systems: any capacity to resolve ethical dilemmas must either be built in and chosen by the designer, or acquired through learning techniques for whose effectiveness the designer remains responsible. As room is created for more and more automation, the fundamental rights that human beings possess must not be lost. Ethical considerations, therefore, have to be put in place.
The relatively young machine ethics community has largely emphasized developing ethical agents, in which each agent is equipped with grounds for judging what is right and wrong in a given circumstance. Arkin and Ulam (2009), for example, demonstrate the role of a moral emotion, guilt, in an ethical robotic software architecture for lethal military applications. Military research projects already use mobile weapons with sophisticated sensory systems that can track and target facilities for the selective use of lethal force. A central ethical issue in robot combat scenarios is how to discriminate between combatants and innocents (Sharkey, 2008) in close-contact encounters. Such systems, within the military control framework, pose many serious ethical questions. Accordingly, a conscious effort is being made to keep "humans in the loop" of robotic and autonomous weapon systems. As technology advances, however, the sophistication of such systems will increase until they become fully autonomous.
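To make the guilt mechanism concrete, the following is a minimal sketch, in Python, of the kind of ethical adaptor Arkin and Ulam (2009) describe: a scalar guilt value grows whenever the assessed consequences of an action exceed what was predicted, and once guilt passes set thresholds, increasingly restrictive limits are placed on the use of lethal force. All names, thresholds, and the update rule here are illustrative assumptions, not the authors' actual implementation.

```python
# Illustrative sketch of a guilt-based "ethical adaptor"
# (concept after Arkin & Ulam, 2009). All names, thresholds, and the
# update rule are hypothetical assumptions, for exposition only.

class EthicalAdaptor:
    GUILT_MAX = 100.0          # guilt is clamped to [0, GUILT_MAX]
    RESTRICT_THRESHOLD = 40.0  # above this, restrict lethal options
    FORBID_THRESHOLD = 80.0    # above this, forbid lethal force entirely

    def __init__(self):
        self.guilt = 0.0

    def assess_action(self, predicted_damage: float, observed_damage: float) -> None:
        """Raise guilt when observed collateral damage exceeds what
        was predicted before the action was taken."""
        excess = observed_damage - predicted_damage
        if excess > 0:
            self.guilt = min(self.guilt + excess, self.GUILT_MAX)

    def permitted_force(self) -> str:
        """Map the current guilt level to an allowed class of behavior."""
        if self.guilt >= self.FORBID_THRESHOLD:
            return "non-lethal only"  # lethal behaviors deactivated
        if self.guilt >= self.RESTRICT_THRESHOLD:
            return "lethal only with operator approval"
        return "lethal permitted within rules of engagement"


if __name__ == "__main__":
    adaptor = EthicalAdaptor()
    # After an engagement, battle damage assessment exceeds the estimate:
    adaptor.assess_action(predicted_damage=10.0, observed_damage=60.0)
    print(adaptor.guilt, "->", adaptor.permitted_force())
    # 50.0 -> lethal only with operator approval
```

The design point this sketch tries to capture is that the adaptor only ever removes options from the robot's available behaviors: guilt never decreases within a mission in this scheme, so a system that has caused unexpected harm becomes progressively more constrained rather than less.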
The deployment of service robots offers various societal benefits. They prove useful in hazardous situations and in performing domestic tasks in homes; they can serve the agricultural sector, assist surgeons and doctors, carry out identification tasks, and support education and entertainment. However, the application of service robotics carries potential risks and ethical issues. Two areas of growing interest among these applications are personal care for children and the elderly, and robots in military applications. Personal care for children and the elderly is a growing technology: several companies have developed child-minding robots that support video games, speech and face recognition, verbal quiz games, and conversation. Robotic technology in this sense is developed to assist in enriching childhood education. These robots can be controlled by mobile phone or from a personal computer, allowing camera input and remote talking by caregivers. However, even though short-term exposure may be enjoyable and entertaining, humanoid robots cannot provide the care and attention that should be provided by humans themselves. Children could be left without human contact for many hours a day, perhaps for many days, and such degrees of social isolation can have a psychological impact on them.
Moral responsibility differs from legal responsibility. Nevertheless, legal responsibility forms a useful basis for considering many issues in robot ethics. There is no universally accepted moral theory and only a few generally accepted moral norms; yet while legal interpretations and legal grounds differ from case to case, the legal system ultimately tends to settle questions of responsibility appropriately. When policies for fair distribution are established and built into robots, a robot that follows such policies can be viewed as essentially adaptable, able to enforce policies within institutions while also offering ways to challenge them. Establishing policies, therefore, is a way of insulating individuals from the moral responsibility of taking particular decisions.
In conclusion, the development of robots for application in different sectors of society brings socio-economic importance. However, ethical issues arise from human actions, robot actions, and the interrelations between humans and robots, and these must be addressed through appropriate consideration. Robot designers, manufacturers, and users within the socio-technical system should apply such a framework, and the relevant questions should be asked before implementation follows. This allows a balanced approach to weighing the risks posed by autonomous robots against the benefits they can bring. In many circumstances over the past few years, however, the results of applying robots in different fields have been open to doubt. Implementation therefore calls for significant consideration of how robots are used. While many may focus on the beneficial effects, robots can clearly also have deleterious effects that should not be left unchecked, which is why models of morality need to be incorporated.
References
Sharkey, N. (2008). The ethical frontiers of robotics. Science, 322(5909), 1800-1801.
Arkin, R. C., & Ulam, P. D. (2009). An ethical adaptor: Behavioral modification derived from moral emotions.