By Eric B. Krauss on November 8, 2016

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Asimov, Isaac. I, Robot. Greenwich, Conn.: Fawcett Publications, 1950.

The Three Laws of Robotics are not laws in the traditional sense. They are neither rules passed by a community to regulate its members nor scientific facts proven by observation to govern natural phenomena, but rather a literary device, created by science fiction author Isaac Asimov. These “laws” reflect common-sense maxims for human relationships: that people should not harm others; that they should obey the standards of their communities; and that they should not engage in self-destructive behavior. The difference between humans and computers is that humans have free will, while computer behavior is governed by software written by humans.

Accidents involving Tesla’s “Autopilot” self-driving technology have been widely reported. Even after Tesla made public the results of its own investigations, the circumstances of these mishaps are not well understood. Tesla has cited both driver inattention and Autopilot’s inability to differentiate between certain objects as contributing factors. In response, it revised the Autopilot software (now at version 8.0) to rely more on radar data and less on camera video imaging; that revision was issued on September 11, 2016. Before the update reached drivers, another Autopilot accident occurred on August 7, in which Autopilot failed to navigate a bend in the road and the vehicle struck a guardrail.
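
To make the sensor change concrete, here is a minimal, purely illustrative sketch of weighted sensor fusion in Python. It is not Tesla’s actual code, which is proprietary and unpublished; the function name, weights, and readings are all hypothetical, chosen only to show what “relying more on radar data and less on camera video imaging” could mean in practice.

    # Purely hypothetical illustration of weighted sensor fusion;
    # Tesla's Autopilot implementation is proprietary and unknown.
    def fuse_distance(radar_m: float, camera_m: float,
                      radar_weight: float = 0.7) -> float:
        """Blend radar and camera range estimates, favoring radar.

        A radar_weight above 0.5 models a policy of trusting the
        radar reading more than the camera reading.
        """
        return radar_weight * radar_m + (1.0 - radar_weight) * camera_m

    # Example: radar reports an obstacle at 42 m; the camera, confused
    # by glare, reports 80 m. The fused estimate (53.4 m) is pulled
    # toward the radar reading.
    print(fuse_distance(42.0, 80.0))  # 0.7 * 42 + 0.3 * 80 = 53.4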

Whatever combination of human, hardware, or software shortcomings caused them, the Tesla accidents show that Autopilot, at this stage of its development, is not yet fully effective at preventing harm to humans. For robotic car technology—including “autopilot” features and driverless cars—to be safe, adherence to the Three Laws of Robotics is a reasonable expectation. Ultimately, this technology should protect humans from harm to the fullest extent possible.

The U.S. Department of Transportation (DOT) recently announced the Federal Automated Vehicles Policy. The policy includes a set of best practices for automated vehicle performance and model state regulations that aim to create a “consistent, united national framework for regulation of motor vehicles with all levels of technology.” California has already issued draft autonomous vehicle regulations and opened them to extensive public comment, and other states are taking independent action as well. Will all of this contribute to compliance with the Three Laws of Robotics? At this stage, we do not know.

The DOT policy includes a summary of the government’s current tools for regulating the new autonomous vehicles and technology, including interpretations, exemptions, rulemaking, and enforcement authority. The National Highway Traffic Safety Administration (NHTSA) can recall any vehicle or equipment that it deems unsafe. The Obama administration—and President Obama personally, according to Transportation Secretary Anthony Foxx—is upbeat, but the automobile industry is cautious. Ford and Toyota executives have complained in the press about a lack of clarity in the DOT regulations, while Tesla CEO Elon Musk has been relatively silent to date.

At this point, automated vehicles still face a long journey. If these vehicles are to be entrusted with our safety, they must not only obey the numerous state and federal rules of the road; they should also comply with Asimov’s Laws of Robotics, laid down over 60 years ago.