Robot ethics

Article 1 : Scientists develop official guidance on robot ethics

By  on September 20, 2016

It was decades ago when science fiction great Isaac Asimov imagined a world in which robots were commonplace. This was long before even the most rudimentary artificial intelligence existed, so Asimov created a basic framework for robot behavior called the Three Laws of Robotics. These rules ensure that robots will serve humanity and not the other way around. Now the British Standards Institute (BSI) has issued its own version of the Three Laws. It’s much longer and not quite as snappy, though. 
In Asimov’s version, the Three Laws are designed to ensure humans come before robots. Just for reference: In abbreviated form, Asimov’s laws require robots to preserve human life, obey orders given by humans, and protect their own existence. There are, of course, times when those rules clash. When that happens, the first law is always held in highest regard. 
The BSI document was presented at the recent Social Robotics and AI conference in Oxford as an approach to embedding ethical risk assessment in robots. As you can imagine, the document is more complicated than Asimov’s laws written into the fictional positronic brain. It does work from a similar premise, though. “Robots should not be designed solely or primarily to kill or harm humans,” the document reads. It also stresses that humans are responsible for the actions of robots, and in any instance where a robot has not acted ethically, it should be possible to find out which human was responsible.
[Image: ED-209, the killer robot from RoboCop]
According to the BSI, the best way to make sure people are accountable for what their robots do is to make sure AI design is transparent. That might be a lot harder than it sounds, though. Even if the code governing robots is freely accessible, that doesn’t guarantee we can ever know why they do what they do. 
In the case of neural networks, the outputs and decisions are the product of deep learning. There's nothing in the network you can point to that governs a certain outcome the way you can with programmatic code. If a deep learning AI used in law enforcement started displaying racist behavior, it might not be easy to figure out why. You'd just have to retrain it.
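
To make that contrast concrete, here is a minimal, purely illustrative sketch (not taken from the BSI document or the article): a hand-written rule whose outcome can be traced to a single line, next to a tiny neural network whose decision emerges from all of its weights at once. The inputs, weights, and normalisation are invented for illustration only.

```python
import numpy as np

def rule_based_decision(age: int, prior_offences: int) -> bool:
    # Transparent: the exact line responsible for the outcome can be pointed to.
    return prior_offences > 2 and age < 30

# A tiny two-layer network with made-up weights. In a real system these numbers
# would come from training on data rather than from any human-readable rule.
W1 = np.array([[0.8, -1.2],
               [0.5,  0.9]])   # input-to-hidden weights (invented for illustration)
b1 = np.array([0.1, -0.3])
W2 = np.array([1.5, -0.7])     # hidden-to-output weights
b2 = 0.2

def network_decision(age: int, prior_offences: int) -> bool:
    x = np.array([age / 100.0, prior_offences / 10.0])  # crude normalisation
    hidden = np.tanh(x @ W1 + b1)                        # hidden layer activations
    score = float(hidden @ W2 + b2)                      # single output score
    # The "reason" for the decision is spread across every weight above;
    # no single value can be pointed to as the rule that produced it.
    return score > 0.0

print(rule_based_decision(25, 3))   # True, and you can say exactly why
print(network_decision(25, 3))      # a decision, but no single line explains it
```

Even with every number in the second function published, the only honest answer to "why did it decide that?" is "because of the training data and the weights it produced" — which is exactly the transparency problem the BSI is worried about.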
Going beyond the design of AI, the BSI report speculates on larger ideas like forming emotional bonds with robots. Is it okay to love a robot? There’s no good answer to that one, but it’s definitely going to be an issue we face. And what should happen if we become too dependent on AI? The BSI urges AI designers not to cut humans out altogether. If we come to rely on AI to get a job done, we might not notice when its behavior or priorities start delivering sub-optimal results — or when it starts stockpiling weapons to exterminate humanity.

Article 2 : Robot ethics

By IEEE Robotics and Automation Society


Robot ethics is a growing interdisciplinary research effort roughly situated in the intersection of applied ethics and robotics with the aim of understanding the ethical implications and consequences of robotic technology, in particular, autonomous robots. Researchers, theorists, and scholars from areas as diverse as robotics, computer science, psychology, law, philosophy, and others are approaching the pressing ethical questions about developing and deploying robotic technology in societies. Many areas of robotics are impacted, especially those where robots interact with humans, ranging from elder care and medical robotics, to robots for various search and rescue missions including military robots, to all kinds of service and entertainment robots. While military robots were initially a main focus of the discussion (e.g., whether and when autonomous robots should be allowed to use lethal force, whether they should be allowed to make those decisions autonomously, etc.), in recent years the impact of other types of robots, in particular, social robots has become an increasingly important topic as well.


OBJECTIVES :
The Technical Committee on Robot Ethics aims to provide the IEEE-RAS with a framework for raising and addressing the urgent ethical questions prompted by and associated with robotics research and technology. Ever since its inception almost a decade ago, in 2004, the TC (now in its third generation) has been involved in organizing various types of meetings, from satellite workshops at major conferences to standalone venues, to call attention to the increasingly urgent ethical issues raised by rapidly advancing robotics technology. For example, a growing number of workshops and special sessions has recently been organized at major conferences (such as ICRA, IACAP, AISB and others), and more workshops, special sessions, and standalone venues are being planned. Moreover, a growing number of publications, as well as public lectures and interviews by former and current TC co-chairs and other researchers invested in this topic, focuses on raising awareness among researchers and non-researchers alike of the urgent need to understand the social impact and ethical implications of robot technology. In addition to organizing special sessions and workshops at major international venues on robot ethics, the TC continues to raise public awareness and aims to organize a standalone international event in the near future.

Summary :
Technology leads us towards horizons that only science fiction films could imagine. One of the most important issues when we talk about technology is that of humanoid robots. We may think that robots will only become part of our daily lives several decades from now, yet we are already asking questions about their existence and what they represent in society. This is the question of ethics. The most important goal is not only to succeed in creating robots but also to keep control of them; Asimov's laws are intended precisely for that purpose.
A robot must remain under human control, and not the reverse. Robots should also not replace people in their jobs, because their progress would be too rapid and they would take on roles that are too important in society. We have to limit their access to the workplace because they can never think like humans: they are not independent and cannot take important decisions on their own. That is why the question of ethics arises. Having a robot to assist with everyday tasks is perhaps a good thing, but robots should not take the initiative for us.
