Six rules for the ideal robot

 
November 20, 2008, 11:18 am

   Technological advances over the past decade have produced a series of smart machines whose intellectual abilities keep growing. The skills of robots are expanding as well: they are learning to drive, to help care for the elderly and the sick, and even to master a range of military specialties: scouts, engineers, infantry, and so on. It is therefore imperative to integrate increasingly capable robots into human society. According to Wendell Wallach, an ethics specialist at Yale University, and Colin Allen, a historian and philosopher at Indiana University, implementing six rules would reduce the danger posed by these ever-smarter robots.
Rule one: do not entrust robots with tasks that carry significant risk to society. In other words, robots' actions should be predictable, and the possibility of accidents kept to a minimum. Yet the behavior of today's computer systems cannot always be predicted, and as the intellectual abilities of robots and computers grow more complex, their autonomy will only increase.
Rule two: under no circumstances entrust robots with weapons, or give them control over weapons. Unfortunately, the emergence of armed robots can no longer be prevented: semi-robotic installations have already been developed and deployed, unmanned aerial vehicles have taken to the sky, and several military robots have already been sent to Iraq, limited though their abilities are.
Rule three: observe Asimov's three laws of robotics. Isaac Asimov's famous laws are meant to ensure the safe coexistence of robots with human society, preventing accidents and abuses of robots' capabilities. The pitfall is that Asimov specialized in science fiction and had no experience in robot engineering, meaning that his laws have no scientific basis. Moreover, in his own works the author describes situations in which the laws stop working as intended.
Rule four: program robots with a set of conditions and principles. One of these principles should be maximizing the utility of an action: from the range of possible actions, choose only those that benefit as many people as possible. On the other hand, a robot following this principle might sacrifice the life of one person to save the lives of five. Today, no doctor would kill one person to save the lives and health of other patients; attaining such a high moral standard is one of the main problems robots will face in the future.
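The maximum-utility principle described above can be sketched as a simple decision rule. This is only an illustrative toy, not an actual implementation from Wallach and Allen; the action names and utility scores are invented for the example, and it also demonstrates the paragraph's dilemma: a pure utility count ignores any other moral constraint.

```python
# Toy sketch of the "maximum utility" principle: from a set of candidate
# actions, pick the one whose estimated benefit (people helped) is highest.
# Action names and scores are hypothetical, chosen only for illustration.

def choose_action(actions):
    """Return the candidate action with the highest utility score."""
    return max(actions, key=lambda a: a["utility"])

candidates = [
    {"name": "assist_one_patient", "utility": 1},
    {"name": "warn_five_pedestrians", "utility": 5},
]

best = choose_action(candidates)
print(best["name"])  # the pure utility rule always favors the larger group
```

Note that nothing in this rule forbids harmful means to a beneficial end, which is exactly the one-versus-five problem the article raises.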
Rule five: robots must learn continuously. This would give intelligent machines the flexibility to change their behavior over time, analyzing their own actions and distinguishing the right ones from the wrong. Achieving this goal, however, is impossible without a number of technological breakthroughs, because science at its current level of development cannot create truly thinking and self-analyzing robots.
Rule six: give robots a core set of emotions. Such functionality is essential for integrating robots into human society: machines should easily recognize human emotions and choose their own behavior accordingly. It is no secret that much of the information needed for effective communication with a person is conveyed through facial expressions and body language, all of which robots should be able to read easily. Although this task is very difficult to implement, progress is already being shown in creating robots that can recognize emotions. In other words, the task is not impossible.

 
